
RLS Algorithm:

An adaptive filter is a filter that changes the values of its coefficients as the input changes. The diagram below shows the constituents of an adaptive filter. The most important part of this filter is the update algorithm. There are different algorithms used to derive the coefficients; traditionally the LMS algorithm has been used for this purpose.

The LMS algorithm aims at minimizing the mean square error between the desired output and the filter output [1]. Because of this, the convergence rate of LMS is slow. The RLS algorithm, on the other hand, seeks to reduce a cost function by selecting the correct filter coefficients.

In simple words, LMS is a Markov process, which does not remember data from the past. The RLS algorithm, on the other hand, uses all the data, whether past or present.

Figure 1: Adaptive Filter implementation

The LMS algorithm is not resource intensive. RLS, on the other hand, is a resource-intensive algorithm [2]. It uses more resources, and hence there is a tradeoff in choosing which algorithm to use.

The implementation of the RLS algorithm uses six steps:

1. Update the input history vector f(n).

2. Compute the filter output using the previous set of filter coefficients b(n-1):

y(n) = f^T(n) b(n-1)

3. Compute the error:

e(n) = d(n) - y(n)

4. Compute the Kalman gain vector:

k(n) = R^-1(n-1) f(n) / (λ + f^T(n) R^-1(n-1) f(n))

5. Update the matrix R^-1(n) for the next iteration:

R^-1(n) = λ^-1 [R^-1(n-1) - k(n) f^T(n) R^-1(n-1)]

6. Update the filter coefficients for the next iteration:

b(n) = b(n-1) + k(n) e(n)
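The six steps above can be sketched as a single update function. This is a minimal illustration, not the project's MATLAB or Verilog code; the names (rls_step, lam for λ) and the scaled-identity initialization of R^-1 are assumptions.

```python
import numpy as np

# One iteration of the six RLS steps above. f, b, R_inv correspond to the
# report's f(n), b(n) and R^-1(n); lam is the forgetting factor λ.
def rls_step(f, b, R_inv, x_new, d, lam=0.99):
    f = np.concatenate(([x_new], f[:-1]))           # 1. update input history f(n)
    y = f @ b                                       # 2. output with b(n-1)
    e = d - y                                       # 3. error e(n) = d(n) - y(n)
    k = R_inv @ f / (lam + f @ R_inv @ f)           # 4. Kalman gain vector
    R_inv = (R_inv - np.outer(k, f @ R_inv)) / lam  # 5. update R^-1(n)
    b = b + k * e                                   # 6. update coefficients b(n)
    return f, b, R_inv, e
```

Driving this with a noiseless FIR reference signal, the coefficients settle onto the reference taps within a few dozen samples, illustrating the fast convergence the text attributes to RLS.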

Initialization:

All the filter coefficients were initialized to zero. Moreover, the R matrix, which is used in its inverse form, was also initialized to all zeros. The input vector was then initialized with the correct inputs.

After the initialization, the filter output is computed using the coefficients calculated in the previous cycle. The output of the filter is then subtracted from the desired input to get the error signal. This error signal will be very high to begin with, but with time we expect it to go down.

After this, the Kalman gain vector is calculated. Using the Kalman gain vector, R^-1(n) is updated. Next, the filter coefficients for the next iteration are calculated using the values computed from the previous input.

The Weighting Factor:

The weighting factor affects the convergence of the filter [3]. It is also called the forgetting factor: it gives exponentially less weight to the older error samples.

Algorithm explanation:

In general, an algorithm that estimates the coefficients of an FIR filter aims at minimizing the mean square error [4]. For example, in the LMS algorithm, the error is reduced with the help of the formula

ξ(n) = Σ_{i=0}^{n} (d(i) - y(i))^2

In the RLS algorithm, this standard equation is tweaked by adding a forgetting factor λ:

ξ(n) = Σ_{i=0}^{n} λ^(n-i) (d(i) - y(i))^2

Here the value of λ is between 0 and 1, i.e. 0 < λ < 1. The forgetting factor makes sure that the recent samples are weighted more heavily than the earlier ones. The output of the FIR filter is calculated by the convolution sum:

y(n) = Σ_{k=0}^{M-1} b_k(n) x(n-k)
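As a quick illustration of the convolution sum above (a sketch; the name fir_output and the zero-padding for n-k < 0 are my assumptions, not the report's code):

```python
# y(n) = sum_{k=0}^{M-1} b[k] * x[n-k], with x treated as zero outside its range
def fir_output(b, x, n):
    return sum(b[k] * x[n - k] for k in range(len(b)) if 0 <= n - k < len(x))
```

For example, with taps b = [1, 1] and input x = [1, 2, 3], the output at n = 2 is x(2) + x(1) = 5.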

The above equations constitute the complete RLS algorithm. At initialization, the filter coefficients are equal to zero, and the inverse autocorrelation matrix P(n) is set to a delta value times the identity matrix.

Then, the filter information vector Z(n) is calculated for internal use; basically, it is the preparation for evaluating the gain vector. λ is the forgetting factor, and it is very critical for the algorithm. Its value ranges from 0 to 1. The closer λ is to 1, the better the adaptability of the filter. If λ is too small, the filter will be unable to utilize all the samples of the error signal.
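The effect of λ can be seen directly by listing the weights λ^i applied to the error from i samples ago (a toy illustration; error_weights is a made-up helper, not part of the design):

```python
# Weight applied in the cost function to the error from i samples in the past
def error_weights(lam, n):
    return [lam ** i for i in range(n + 1)]
```

With λ = 0.9 the weights are 1, 0.9, 0.81, ...: recent errors dominate, and a λ closer to 1 keeps older samples relevant for longer.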

For the RLS algorithm to work, there are two assumptions. First, the error converges to zero, which means the system has zero mean. Second, the input signal x(n) and the desired signal d(n) are correlated with each other. Also, the error signal is statistically independent.

As in the above equation, the error signal is the difference between the desired signal and the filter output y(n). The error signal is fed back to the RLS adaptive algorithm, which generates new filter coefficients with a correction in order to minimize the mean square error. Ideally, after several iterations, the mean square error will equal zero.

The filter coefficients can be updated by the equation below:

W_{n+1} = W_n + ΔW_n

This update equation is the most important formula in the algorithm. The resulting value W_{n+1} is the filter coefficient vector at iteration n+1. It depends on the filter coefficients from the previous iteration and the correction term ΔW_n, which is generated from the gain vector K(n) and the error value as below:

ΔW_n = K(n) e(n)

The inverse autocorrelation matrix at the nth iteration can be updated by

P(n) = λ^-1 [P(n-1) - K(n) Z^T(n)]

where:

Z(n) = P(n-1) x(n)

The RLS adaptive filter coefficient scheme is derived from the Wiener-Hopf equations, given by

W = R^-1 p

where W is the filter coefficient vector at the nth iteration, R is the autocorrelation matrix of the input signal, and p is the cross-correlation vector of the input signal and the desired signal.
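A tiny numeric sketch of the Wiener-Hopf solution W = R^-1 p (the R and p values below are invented for illustration; solving the linear system avoids forming the inverse explicitly):

```python
import numpy as np

# Solve R W = p for the optimal Wiener filter coefficients
def wiener_hopf(R, p):
    return np.linalg.solve(R, p)
```

For R = [[2, 0], [0, 2]] and p = [2, 4], this yields W = [1, 2]; RLS approaches this solution recursively, one sample at a time, instead of solving the system in one shot.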

The RLS adaptive filter has a faster convergence rate than the LMS adaptive filter. This improvement is the tradeoff for more complex computation. The objective of an adaptive filter is to minimize a cost function; for the RLS adaptive filter, the cost function is the sum of the forgetting-factor-weighted squared errors:

ξ(n) = Σ_{i=0}^{n} λ^(n-i) e(i)^2

The computation of an M-tap RLS filter requires approximately 3M^2 + 4M multiplications plus some additions; the major computation is calculating Z(n) and P(n). A designer should choose the RLS algorithm for a real-time system because of its fast convergence property.
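The 3M^2 + 4M estimate above can be tabulated to compare tap counts (a rough operation-count model, not a synthesis result; rls_multiplications is a made-up helper):

```python
# Approximate multiplier count for an M-tap RLS filter, per the 3M^2 + 4M estimate
def rls_multiplications(M):
    return 3 * M * M + 4 * M
```

For example, 16 taps need about 832 multiplications per sample versus 64 for 4 taps, which is why the O(M^2) work in Z(n) and P(n) dominates the cost.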

In this project, a hardware implementation of an RLS adaptive noise canceller is completed and demonstrated on an Altera Cyclone FPGA. The design flow chart is presented in figure 5. The beginning of the project was to research and learn the principle of noise cancellation, the structure of the adaptive filter, and the RLS algorithm that realizes the adaptability. In order to verify the pros and cons of the target algorithm, the LMS algorithm was also studied for simulation.

Then, the reference floating-point code was studied thoroughly as the basis for writing the fixed-point code. The program is written for a 16-bit RLS adaptive filter in MATLAB. The number of taps of the adaptive filter and the signal bit width can easily be modified for later system analysis. The fixed-point program is divided into the rls_top module, rnd module, saturation module, FIR filter module, convm module, gain vector update module and update_psub module.

In the rls_top module, a wav file is opened as the music signal, which is combined with a sine-wave signal to form the desired signal. This sine-wave signal is regarded as the input signal. The module also saves the desired signal and the input signal into an inc file as the golden vector, which will be used for software and hardware evaluation later. Then, the top module calls sub-functions to compute the parameters based on the equations of the RLS algorithm.

The rnd module uses the fix and round functions to check whether the 16th bit after the binary point is 1 or not. If it is 1, the round function outputs the nearest integer; if it is 0, the fix function truncates the value directly.
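A sketch of that rounding rule in software terms (the scaling, the 15 kept fractional bits, and the function name are assumptions; the report's rnd module operates on MATLAB fixed-point values, and tie handling may differ from MATLAB's round):

```python
# Keep frac_bits fractional bits; inspect the next bit below that precision.
# If it is 1, round to the nearest integer (like MATLAB's round);
# if it is 0, truncate toward zero (like MATLAB's fix).
def rnd(x, frac_bits=15):
    scaled = x * (1 << frac_bits)
    next_bit = int(abs(scaled) * 2) & 1
    q = round(scaled) if next_bit else int(scaled)
    return q / (1 << frac_bits)
```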

The saturation module takes care of overflow and underflow situations. If overflow happens, the largest value 1 - 1/2^16 is assigned to the signal; if underflow occurs, the smallest value -1 + 1/2^16 is assigned.
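That saturation behavior is easy to sketch (the limits are the ones quoted above; saturate is an illustrative name, not the module's interface):

```python
# Largest and smallest representable values quoted for the 16-bit design
Q_MAX = 1 - 1 / 2 ** 16
Q_MIN = -1 + 1 / 2 ** 16

def saturate(x):
    if x > Q_MAX:
        return Q_MAX  # overflow: pin to the largest value
    if x < Q_MIN:
        return Q_MIN  # underflow: pin to the smallest value
    return x
```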

The FIR filter module accomplishes the convolution of its two input signals. The filter coefficients vary over time, so they form one of the inputs to the function; the other input is the sine-wave noise.

Among the remaining modules, the convm module is used to convert a vector into a convolution matrix, the gain vectors are computed in the gain vector update module, and the inverse autocorrelation matrix is calculated in the update_psub module.
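A sketch of what a convm-style helper produces: the convolution (Toeplitz) matrix of a vector, so that a matrix-vector product implements the convolution sum. This is a common construction; the exact signature of the report's convm module is not shown, so the shape convention here is an assumption.

```python
import numpy as np

# Build the (N + p - 1) x p convolution matrix of x: column i holds x
# delayed by i samples, zero-padded, so X @ h equals the convolution of x and h.
def convm(x, p):
    N = len(x)
    X = np.zeros((N + p - 1, p))
    for i in range(p):
        X[i:i + N, i] = x
    return X
```

For x = [1, 2, 3] and a 2-tap h = [1, 1], convm(x, 2) @ h gives [1, 3, 5, 3], the same as convolving x with h.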

After the MATLAB code was finished, a simulation was performed to verify the functionality of the adaptive filter: the output error signal converges to zero in a very short time. The system analysis evaluates the system performance with different numbers of taps and different bit widths. The criteria for the design's performance are SNR, error convergence time and minimum error value.

A fixed-point C program was converted from the floating-point program for the software implementation. Instead of using MATLAB matrix operations, the values of the input signal and the desired signal are stored in arrays, and for-loops are used for the calculation of the gain vector and the inverse autocorrelation matrix P. The C program reads the golden vector from txt files and outputs the error signal to a txt file. The convergence of the error is observed in the plot.

Then, using a structure similar to that of the MATLAB code, register-transfer-level Verilog code was written for the hardware implementation. After simulating and synthesizing the whole design, the RLS module was instantiated in the audio core of the Altera DE1 Media Computer. The combined design was simulated and synthesized again to make sure there were no errors. Then, the design could be downloaded to the Altera FPGA.

MATLAB implementation:

In the MATLAB implementation of the RLS algorithm, mainly three equations were implemented for every input sample: the calculation of the Kalman gain, the update of the inverse matrix, and the calculation of the filter coefficients.

These equations are as follows:

k(n) = R^-1(n-1) f(n) / (λ + f^T(n) R^-1(n-1) f(n))

The above equation is for the Kalman gain.

The second one is for the update of the inverse matrix:

R^-1(n) = λ^-1 [R^-1(n-1) - k(n) f^T(n) R^-1(n-1)]

The third one is the update of the filter coefficients, which is obtained using the equation:

b(n) = b(n-1) + k(n) e(n)

We know that the output is correct if the error function decays over time.

Figure 2: Error function

References:

[1] S. M. Kuo and D. R. Morgan, "Active noise control: a tutorial review," Proceedings of the IEEE, Jun. 1999.

[2] J. M. Cioffi and T. Kailath, "Fast, recursive-least-squares transversal filters for adaptive filtering," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-32, pp. 304-337, Apr. 1984.

[3] W. Liu, J. C. Principe and S. Haykin, Kernel Adaptive Filtering: A Comprehensive Introduction, John Wiley, 2010, ISBN 0-470-44753-2.

[4] S. A. Hadei and M. Lotfizad, "A Family of Adaptive Filter Algorithms in Noise Cancellation for Speech Enhancement," International Journal of Computer and Electrical Engineering, vol. 2, no. 2, pp. 307-315, Apr. 2010.

