
Signal Processing 87 (2007) 2197–2212

Fast RLS Fourier analyzers capable of accommodating frequency mismatch

Yegui Xiao^a,*, Liying Ma^b, Rabab Kreidieh Ward^c

^a Department of Management and Information Systems, Prefectural University of Hiroshima, 1-1-71 Ujina-Higashi, Minami-Ku, Hiroshima 734-8558, Japan
^b Department of Applied Computer Science, Tokyo Polytechnic University, Atsugi, Kanagawa 243-0297, Japan
^c Institute for Computing, Information and Cognitive Systems, University of British Columbia, Vancouver, BC, Canada V6T 1Z4

Received 30 September 2006; received in revised form 6 February 2007; accepted 2 March 2007
Available online 13 March 2007
Abstract

Adaptive Fourier analyzers are used to estimate the discrete Fourier coefficients (DFC) of the sine and cosine terms of noisy sinusoidal signals whose frequencies are usually assumed known a priori. The recursive least squares (RLS) Fourier analyzer provides excellent performance, but is computationally very intensive. In this paper, we first present four fast RLS (FRLS) algorithms based on the inherent characteristics of the DFC estimation problem. These FRLS algorithms show approximately the same performance and estimation capabilities as the RLS, while requiring considerably less computation. Second, the performance of the proposed FRLS algorithms is analyzed in detail. Difference equations governing their dynamics as well as closed-form expressions for their steady-state mean square errors (MSE) are derived and compared with those of the LMS Fourier analyzer. Third, the RLS and four FRLS algorithms are modified by incorporating an adaptive scheme to alleviate the influence of undesirable frequency mismatch (FM) on their performance. Extensive simulations as well as applications to real noise signals are provided to demonstrate the relative performance capabilities of the RLS and four FRLS algorithms, the validity of the analytical findings, and the ability of the modified RLS and FRLS algorithms to mitigate the influence of the FM.
© 2007 Elsevier B.V. All rights reserved.

Keywords: Adaptive Fourier analysis; RLS; LMS; Performance analysis; Convergence properties; Frequency mismatch
1. Introduction

Adaptive Fourier analysis offers both efficient and effective solutions to the estimation, enhancement and reconstruction of sinusoidal signals in noise. Some real-life application areas of adaptive Fourier analysis are digital communications, power systems, control (including active noise/vibration control), biomedical engineering, pitch detection in automated transcription, etc., where we are concerned with the analysis of a sinusoidal signal in additive noise [1,6–18]. The frequencies of the sinusoidal signal are arbitrary, and are usually known or estimated in advance. Moreover, the signal of interest is nonstationary most of the time. The discrete Fourier transform (DFT) and its
www.elsevier.com/locate/sigpro
0165-1684/$ - see front matter © 2007 Elsevier B.V. All rights reserved.
doi:10.1016/j.sigpro.2007.03.003

☆ A part of this work was presented at ISCAS 2004 [26]. This work was supported in part by The Satake Research Foundation, Higashi-Hiroshima, Japan.
* Corresponding author. Tel.: +81 82 251 9731; fax: +81 82 251 9405.
E-mail address: xiao@pu-hiroshima.ac.jp (Y. Xiao).
variants [2–5] may be considered for the analysis of the signal. However, there are two major problems that make them basically awkward: (1) the signal frequencies are arbitrary and may not be integer multiples of the fundamental frequency of the DFT, and (2) the signal is nonstationary due to the time-varying nature of the amplitudes of its frequency components (e.g., musical signals), and it is difficult to find a window whose length fits the degree of nonstationarity of the signal.
To circumvent these difficulties with the DFT-type algorithms, many adaptive algorithms have been proposed. Some of them are the Kalman filtering based techniques, the recursive least squares (RLS) algorithm, the simplified RLS algorithm, the LMS-like algorithms, etc.; see, e.g., [4–18,21–26] and the references therein. The RLS algorithm [24] presents excellent performance, but its computational requirements are much more intensive than those of the LMS-type algorithms. In this work, we focus on the RLS algorithm. Essentially, the RLS algorithm is nothing but a direct extension of the RLS algorithm used in adaptive FIR filtering to the Fourier analysis problem. However, the inherent uniqueness of the discrete Fourier coefficient (DFC) estimation problem is not fully harnessed to reduce the computational burden involved.
In this work, we first present four fast RLS (FRLS) algorithms that exploit the unique characteristics of the estimation problem. Extensive simulations conducted for various scenarios reveal that the proposed algorithms present approximately the same convergence rate and steady-state properties as the RLS, while their computational burden is considerably reduced. Many types of FRLS algorithms have been proposed in the context of adaptive FIR and IIR filtering by properly manipulating the input auto-correlation matrix [30]. In adaptive frequency estimation, the idea of dealing with frequency components one by one using cascaded and/or parallel-form notch filters has been applied [10]. Therefore, it is natural to attempt to estimate the DFCs of the frequency components one by one [14,15]. The RLS algorithm for Fourier analysis may be easily simplified based on the same idea. The first FRLS (FRLS-I) algorithm [26] is a product directly derived from this idea. Unfortunately, no effort, to the best of our knowledge, has been made to develop algorithms that are faster than this one. The insights used to derive the other three fast RLS algorithms are not difficult to figure out, but we have not found in the literature any similar development based on signal decomposition or any other theory that leads to the proposed fast RLS-type algorithms.
Since three of the proposed FRLS algorithms can be treated as algorithms with scalar variable step size parameters, their performance analysis is tractable. The performance of the proposed FRLS algorithms is analyzed in detail. Difference equations governing their dynamics and closed-form expressions for their steady-state mean square errors (MSE) are derived and compared with those of the LMS Fourier analyzer. This analysis enriches our understanding of the behavior of both the proposed FRLS and the conventional RLS algorithms, since they all perform similarly. The analytical results obtained are also useful in predicting the performance of a system in which a certain FRLS algorithm is implemented.
The third issue of this work is the compensation for performance degradation due to frequency mismatch (FM). In all the above-mentioned adaptive algorithms, including the newly proposed FRLS algorithms, the signal frequencies are provided in advance. However, the frequencies of the signal may differ somewhat from the ones given to the analysis algorithms. That is, an FM, large or small, may exist in real applications. The existence of FM was first mentioned by Glover [16]. For example, in automated transcription of electronic piano sounds, the frequencies of each note of a piano may be slightly different from the ones specified by the international standard, due to variation in product quality [8,22]. In dual-tone multiple frequencies (DTMF) signaling, a maximum frequency tolerance or FM of 1.5% is allowed by the related international standards [19–21]. The frequency drift and magnitude variations of harmonics in power systems need to be estimated and/or compensated in real time for monitoring and maintaining the power quality [1,9]. In narrowband active noise control systems, signal frequencies derived from the speed sensor, i.e., tachometer, may be slightly different from the true ones of the primary signal due to sensor error [27,28]. In order to enhance the applicability of adaptive Fourier analysis algorithms, we have to take care of the FM in order to compensate for the performance degradation. For the LMS-based Fourier analyzer, we have developed a scheme to fix the FM problem [25], but nothing has been done for the RLS algorithm. In the third part of this work, we modify the original and the proposed FRLS algorithms by
combining them with a new scheme that is capable of accommodating the FM. A complex-valued RLS analyzer similar to our modified FRLS-I (MFRLS-I) was proposed in [17], which can be directly converted into a real-valued form. Extensive simulations have revealed that the performance of this analyzer is, on the whole, similar to or slightly better than that of our MFRLS-I, but it requires approximately twice as much computation as our MFRLS-I.

The rest of the paper is organized as follows. Section 2 introduces the conventional RLS algorithm. Four fast RLS algorithms are given in Section 3. Performance analysis of these proposed algorithms is given in Section 4. In Section 5, a modification to the FRLS is introduced to get rid of the influence of the FM. Extensive simulation results are provided wherever needed. Section 6 concludes the paper.
2. The conventional RLS algorithm
The discrete sinusoidal signal to be analyzed is given by

d(n) = Σ_{i=1}^{q_0} [a_i cos(ω_{0,i} n) + b_i sin(ω_{0,i} n)] + v(n),   (1)

where q_0 is the number of frequency components the signal possesses, ω_{0,i} is the frequency of the ith component (assumed known in advance), and v(n) is an additive zero-mean white noise with variance σ_v².
The purpose of an adaptive Fourier analyzer is to estimate the DFCs of each frequency component in real time. Fig. 1 depicts the conventional adaptive linear combiner (LC) for Fourier analysis. The conventional RLS algorithm for the LC is given by [24]

e(n) = d(n) − Ĥᵀ(n−1) U(n),   (2)

F(n) = (1/λ) [F(n−1) − F(n−1) U(n) Uᵀ(n) F(n−1) / (λ + Uᵀ(n) F(n−1) U(n))],   (3)

Ĥ(n) = Ĥ(n−1) + F(n) U(n) e(n),   (4)

where

Ĥ(n) = [â₁(n) b̂₁(n) ⋯ â_q(n) b̂_q(n)]ᵀ,   (5)

U(n) = [cos(ω₁ n) sin(ω₁ n) ⋯ cos(ω_q n) sin(ω_q n)]ᵀ.   (6)
Here ω_i = ω_{0,i} is assumed unless otherwise specified. λ ∈ (0, 1) is the forgetting factor, whose value is usually set close to unity. q indicates the number of frequency components of interest, which may differ from the number of frequency components (q_0) contained in the signal d(n). In general, it is natural to assume that q_0 is unknown, but a piece of information about it can be obtained in advance in many real applications. In power system monitoring, a reasonable value for q_0 may be 3 or 4, as 2 or 3 harmonics are often considered. In DTMF signaling, a maximum value for q_0 is 8, since eight frequencies are involved in the DTMF matrix. q_0 is determined by the rotational speed if d(n) is generated by a factory cutting machine. Therefore, an arbitrary q could be selected by the user. However, setting q to q_0 is a general and practical choice. In this work, we assume q = q_0.
This RLS algorithm provides excellent estimation performance and outperforms the LMS-type algorithms such as the LMS [4–9,22], the p-power algorithm [23], the filter-bank based sliding algorithms [14,15] and so on. However, the number of multiplications required by this RLS algorithm for one iteration is approximately 10q² + 7q + 2, which increases very fast as q gets larger and may become prohibitive for hardware implementation. Therefore, it makes sense to pursue algorithms that retain the performance merit of the RLS algorithm while possessing improved computational efficiency. The next section is motivated by this point of view.
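To make the recursions concrete, here is a minimal NumPy sketch of the conventional RLS Fourier analyzer of Eqs. (2)–(6), run on a noiseless two-tone toy signal; the tone parameters, the initialization F(0) = δI, and the run length are illustrative choices, not values from the paper.

```python
import numpy as np

def rls_fourier(d, omegas, lam=0.995, delta=10.0):
    """Conventional RLS Fourier analyzer, Eqs. (2)-(4).

    d      : samples of the observed signal d(n)
    omegas : presumed frequencies [omega_1, ..., omega_q]
    Returns the final DFC estimate H_hat = [a1 b1 ... aq bq]^T.
    """
    omegas = np.asarray(omegas)
    q = len(omegas)
    H = np.zeros(2 * q)                # DFC estimates, Eq. (5)
    F = delta * np.eye(2 * q)          # gain matrix, large initial value
    for n, dn in enumerate(d):
        U = np.empty(2 * q)            # regressor of Eq. (6)
        U[0::2] = np.cos(omegas * n)
        U[1::2] = np.sin(omegas * n)
        e = dn - H @ U                                       # Eq. (2)
        FU = F @ U
        F = (F - np.outer(FU, FU) / (lam + U @ FU)) / lam    # Eq. (3)
        H = H + F @ U * e                                    # Eq. (4)
    return H

# Noiseless toy signal with q = 2 known frequencies
n = np.arange(2000)
d = 2.0 * np.cos(0.1 * np.pi * n) + 1.0 * np.sin(0.1 * np.pi * n) \
    + 0.5 * np.cos(0.3 * np.pi * n)
H = rls_fourier(d, [0.1 * np.pi, 0.3 * np.pi])
# H approaches [2.0, 1.0, 0.5, 0.0]
```

With noise added, the estimates fluctuate around the true DFCs with a spread governed by the forgetting factor, as quantified in Section 4.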
Fig. 1. Block diagram for the adaptive Fourier analyzer.
3. Fast RLS algorithms
Now, let us rewrite the observed signal d(n) as

d(n) = [a_i cos(ω_{0,i} n) + b_i sin(ω_{0,i} n)]  (≜ β₁(n))
     + Σ_{j=1, j≠i}^{q} [a_j cos(ω_{0,j} n) + b_j sin(ω_{0,j} n)]  (≜ β₂(n))
     + v(n).   (7)

Obviously, β₁(n) and β₂(n) contain different frequency components. If one regards sine and cosine waves as pseudo-random signals, as in [29, pp. 22–24], [30, pp. 106–107], and [27, pp. 117–122], one readily obtains E[β₁(n)β₂(n)] = 0. Now one may realize that the DFC estimation can be performed for each frequency component by regarding the other components plus the noise v(n) as an additive noise. Based on a similar idea, it is easy to see that the DFCs can even be estimated one by one. These insights naturally lead to the following two fast RLS algorithms [26].
3.1. Fast RLS algorithm I (FRLS-I)
e(n) = d(n) − Σ_{i=1}^{q} Ĥᵢᵀ(n−1) Uᵢ(n),   (8)

F_j(n) = (1/λ_j) [F_j(n−1) − F_j(n−1) U_j(n) U_jᵀ(n) F_j(n−1) / (λ_j + U_jᵀ(n) F_j(n−1) U_j(n))],   (9)

Ĥ_j(n) = Ĥ_j(n−1) + F_j(n) U_j(n) e(n),   (10)

Ĥ_j(n) = [â_j(n) b̂_j(n)]ᵀ,   (11)

U_j(n) = [cos(ω_j n) sin(ω_j n)]ᵀ,  j = 1, 2, …, q.   (12)
Here λ_j is a forgetting factor corresponding to the jth frequency. The number of multiplications needed for one iteration of the above algorithm is approximately 19q, which is proportional to q (the number of frequencies) rather than to q². This simplification is straightforward and not new, because the idea of dealing with frequency components one by one has been successfully applied in frequency estimation based on adaptive FIR or IIR notch filtering. The reduction in multiplications is due to the downsizing of the auto-correlation matrix, which shrinks from 2q × 2q to 2 × 2. The four elements of the correlation or gain matrix F_j(n) for different signal frequencies (different j) are given in Fig. 2. Clearly, the nondiagonal elements are insignificant and may be neglected, resulting in the following fast algorithm. This insight is not difficult to reach, but unfortunately has not been uncovered before. In the Appendix, it is proved that the nondiagonal elements converge to zero in the mean sense, and that all the diagonal ones approach the same constant characterized only by the forgetting factor.
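A corresponding sketch of the FRLS-I recursions (8)–(10), with one 2 × 2 gain matrix per frequency, might look as follows; again the test signal and initialization are illustrative, not taken from the paper.

```python
import numpy as np

def frls1(d, omegas, lams, delta=10.0):
    """FRLS-I, Eqs. (8)-(10): one 2x2 gain matrix F_j(n) per frequency."""
    q = len(omegas)
    H = [np.zeros(2) for _ in range(q)]          # [a_j, b_j] estimates
    F = [delta * np.eye(2) for _ in range(q)]    # per-frequency gain matrices
    for n, dn in enumerate(d):
        U = [np.array([np.cos(w * n), np.sin(w * n)]) for w in omegas]
        e = dn - sum(H[j] @ U[j] for j in range(q))          # Eq. (8)
        for j in range(q):
            FU = F[j] @ U[j]
            F[j] = (F[j] - np.outer(FU, FU)
                    / (lams[j] + U[j] @ FU)) / lams[j]       # Eq. (9)
            H[j] = H[j] + F[j] @ U[j] * e                    # Eq. (10)
    return H

n = np.arange(2000)
d = 2.0 * np.cos(0.1 * np.pi * n) + 0.5 * np.sin(0.3 * np.pi * n)
H = frls1(d, [0.1 * np.pi, 0.3 * np.pi], [0.995, 0.995])
# H[0] approaches [2.0, 0.0] and H[1] approaches [0.0, 0.5]
```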
3.2. Fast RLS algorithm II (FRLS-II)
e(n) = d(n) − Σ_{i=1}^{q} [â_i(n−1) cos(ω_i n) + b̂_i(n−1) sin(ω_i n)],   (13)

â_j(n) = â_j(n−1) + F_{c,j}(n) cos(ω_j n) e(n),   (14)

b̂_j(n) = b̂_j(n−1) + F_{s,j}(n) sin(ω_j n) e(n),   (15)

F_{c,j}(n) = (1/λ_j) [F_{c,j}(n−1) − F²_{c,j}(n−1) cos²(ω_j n) / (λ_j + F_{c,j}(n−1) cos²(ω_j n))],   (16)

F_{s,j}(n) = (1/λ_j) [F_{s,j}(n−1) − F²_{s,j}(n−1) sin²(ω_j n) / (λ_j + F_{s,j}(n−1) sin²(ω_j n))].   (17)
The number of multiplications needed for one iteration is approximately 16q, which is less than that of the FRLS-I. Fig. 3 shows F_{c,j}(n) and F_{s,j}(n) for two different λ (0.975, 0.995) and different signal frequencies. Obviously, F_{c,j}(n) and F_{s,j}(n) are very close to each other even for different signal frequencies, as long as the forgetting factor is the same. This implies that we may calculate F_{c,j}(n) and use it to replace F_{s,j}(n) to reduce the number of multiplications. Further, we may calculate just one F_{c,j}(n) for some frequency ω_k ∈ {ω_j}_{j=1}^{q}, and use it in place of all the F_{c,j}(n) and F_{s,j}(n), when a uniform forgetting factor is used. These implications readily lead to the following two fast algorithms.
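Before moving on, the FRLS-II recursions (13)–(17) can be sketched with purely scalar gains; the initial gain value and the test signal below are illustrative assumptions.

```python
import numpy as np

def frls2(d, omegas, lams, f0=0.2):
    """FRLS-II, Eqs. (13)-(17): scalar gains F_{c,j}(n) and F_{s,j}(n)."""
    omegas = np.asarray(omegas)
    q = len(omegas)
    a = np.zeros(q)
    b = np.zeros(q)
    Fc = np.full(q, f0)                # initial scalar gains (illustrative)
    Fs = np.full(q, f0)
    for n, dn in enumerate(d):
        c = np.cos(omegas * n)
        s = np.sin(omegas * n)
        e = dn - np.sum(a * c + b * s)                        # Eq. (13)
        for j in range(q):
            # scalar gain recursions, Eqs. (16)-(17)
            Fc[j] = (Fc[j] - Fc[j]**2 * c[j]**2
                     / (lams[j] + Fc[j] * c[j]**2)) / lams[j]
            Fs[j] = (Fs[j] - Fs[j]**2 * s[j]**2
                     / (lams[j] + Fs[j] * s[j]**2)) / lams[j]
            a[j] += Fc[j] * c[j] * e                          # Eq. (14)
            b[j] += Fs[j] * s[j] * e                          # Eq. (15)
    return a, b

n = np.arange(3000)
d = 2.0 * np.cos(0.1 * np.pi * n) + 0.5 * np.sin(0.3 * np.pi * n)
a, b = frls2(d, [0.1 * np.pi, 0.3 * np.pi], [0.995, 0.995])
# a approaches [2.0, 0.0] and b approaches [0.0, 0.5]
```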
Fig. 2. Elements of the gain matrices for different signal frequencies (ω_j = 0.05π–0.5π with an interval of 0.05π, λ_j = 0.985 for all j). (a) F_{1,1}(n). (b) F_{1,2}(n) or F_{2,1}(n). (c) F_{2,2}(n).
Fig. 3. F_{c,j}(n) and F_{s,j}(n) for two λ values (0.975, 0.995) and different signal frequencies (ω_j = 0.05π–0.5π with an interval of 0.05π). (a) F_{c,j}(n). (b) F_{s,j}(n).
3.3. Fast RLS algorithm III (FRLS-III)
Letting F_j(n) = F_{c,j}(n) = F_{s,j}(n), we have

â_j(n) = â_j(n−1) + F_j(n) cos(ω_j n) e(n),   (18)

b̂_j(n) = b̂_j(n−1) + F_j(n) sin(ω_j n) e(n),   (19)

F_j(n) = (1/λ_j) [F_j(n−1) − F²_j(n−1) cos²(ω_j n) / (λ_j + F_j(n−1) cos²(ω_j n))].   (20)
3.4. Fast RLS algorithm IV (FRLS-IV)
When a uniform forgetting factor is used, the FRLS-III reduces to

â_j(n) = â_j(n−1) + F(n) cos(ω_j n) e(n),   (21)

b̂_j(n) = b̂_j(n−1) + F(n) sin(ω_j n) e(n),   (22)

F(n) = (1/λ) [F(n−1) − F²(n−1) cos²(ω_k n) / (λ + F(n−1) cos²(ω_k n))].   (23)
It should be noted that, according to our extensive simulations, the selection of ω_k does not significantly affect the performance of the algorithm. This also agrees with the observation obtained from Fig. 3. Taking the average of the signal frequencies as ω_k seems to be a reasonable choice.

Obviously, the computation of the above two algorithms is further decreased. The number of multiplications per iteration for the RLS and the proposed FRLS algorithms is summarized in Table 1. The LMS algorithm is also included in the table for reference. Apparently, the proposed algorithms enjoy absolute computational advantages over the RLS. As q gets larger, the efficiency of the FRLS algorithms becomes more pronounced.
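The counts in Table 1 are simple polynomials in q; a small helper that reproduces them makes the scaling gap explicit.

```python
def multiplications(q):
    """Per-iteration multiplication counts from Table 1."""
    return {
        "RLS":      10 * q * q + 7 * q + 2,
        "FRLS-I":   19 * q,
        "FRLS-II":  16 * q,
        "FRLS-III": 11 * q,
        "FRLS-IV":  6 * q + 5,
        "LMS":      6 * q,
    }

counts = multiplications(10)
# At q = 10 the RLS needs 1072 multiplications per iteration,
# while the FRLS-IV needs only 65.
```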
3.5. Simulation-based comparisons
Extensive simulations have been conducted to
compare the RLS and the proposed fast RLS
algorithms. Some typical results are provided here
to demonstrate the relative performance capabilities
of these algorithms.
In Fig. 4, the RLS is compared with the FRLS-I in terms of the MSEs of the DFCs. On the whole, the FRLS-I exhibits convergence rates and steady-state values very similar to those of the RLS. This implies that the proposed FRLS-I can replace the RLS in real applications because of its comparable performance and computational advantage.
Next, the four proposed FRLS algorithms are compared. Fig. 5 shows the comparisons among them. They present very similar performance, with similar convergence rates and almost identical steady-state values. Therefore, one can conclude that (i) the four FRLS algorithms proposed in this work may all replace the RLS, and (ii) they perform similarly, with the FRLS-IV having the most significant computational merit.
Fig. 4. Comparison between the RLS and FRLS-I (E[(a₁ − â₁(n))²], signal frequencies = {0.10π, 0.20π, 0.30π}, a₁ = 2.0, b₁ = 1.0, a₂ = 1.0, b₂ = 0.5, a₃ = 0.25, b₃ = 0.10; RLS: λ = 0.995; FRLS-I: λ₁ = λ₂ = λ₃ = 0.995; σ_v = 0.32, 100 runs).
Table 1
Number of multiplications for the RLS-type and LMS algorithms

Algorithm        RLS             FRLS-I   FRLS-II   FRLS-III   FRLS-IV   LMS
Multiplications  10q² + 7q + 2   19q      16q       11q        6q + 5    6q
(q = 1)          19              19       16        11         11        6
(q = 3)          113             57       48        33         23        18
(q = 10)         1072            190      160       110        65        60
3.6. Application to real noise signals
The proposed FRLS, the LMS and the RLS algorithms are applied to real noise signals generated by a large-scale factory cutting machine. The signal frequencies are determined in advance based on the following estimated amplitude:
Â(ω) = { [ (2/N₀) Σ_{n=0}^{N₀−1} d(n) cos(ωn) ]² + [ (2/N₀) Σ_{n=0}^{N₀−1} d(n) sin(ωn) ]² }^{1/2},   (24)
where ω is increased from 0 to π with an arbitrarily small interval such that sufficient frequency resolution is obtained, and N₀ is a properly selected integer for the analysis. Each ω at which Â(ω) is spiky is picked as a signal frequency to be included in the adaptive Fourier analysis that follows. Spikes that are very closely spaced are unified into a single frequency by simple averaging.
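A direct transcription of the amplitude estimate (24) is straightforward; the single-tone check below uses an illustrative frequency and N₀, not the real machine data.

```python
import numpy as np

def amplitude_estimate(d, omega):
    """Estimated amplitude A_hat(omega) of Eq. (24)."""
    N0 = len(d)
    n = np.arange(N0)
    c = (2.0 / N0) * np.sum(d * np.cos(omega * n))
    s = (2.0 / N0) * np.sum(d * np.sin(omega * n))
    return np.hypot(c, s)

# A unit-amplitude tone at 0.2*pi gives A_hat very close to 1
n = np.arange(5000)
d = np.cos(0.2 * np.pi * n)
A = amplitude_estimate(d, 0.2 * np.pi)
```

In practice one sweeps ω over a fine grid in (0, π) and keeps the locations of the spiky values, as described above.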
Fig. 6 shows a typical real noise signal and its frequency analysis results, obtained at a rotational speed of 1600 rpm. As a result, 10 (q = 10) frequencies were detected and fed to the adaptive Fourier analyzers. Fig. 7 presents the error signals produced by these algorithms, with the step size parameters of the LMS selected as μ_i = 2(1 − λ_i) (i = 1, …, q) such that all the error signals have approximately the same power at steady state (see the next section for the relationship between the forgetting factors and the step size parameters). It is seen that the FRLS-IV and the RLS have similar performance, and both converge much faster than the LMS does.
4. Performance analysis of fast RLS algorithms
In this section, we take the FRLS-II as an
example and analyze its performance in detail. The
other proposed FRLS algorithms, the FRLS-III
Fig. 5. Comparisons among the four proposed FRLS algorithms (E[(b₁ − b̂₁(n))²], signal frequencies = {0.10π, 0.20π, 0.30π}, a₁ = 2.0, b₁ = 1.0, a₂ = 1.0, b₂ = 0.5, a₃ = 0.25, b₃ = 0.10, λ₁ = λ₂ = λ₃ = 0.985, σ_v = 0.32, 100 runs).
Fig. 6. A typical real noise signal and the estimated amplitudes of its frequency components (N₀ = 5000). (a) A real noise signal. (b) Estimated amplitude.
and FRLS-IV, are special forms of the FRLS-II. The estimation errors of the DFCs are defined as

e_{a_i}(n) = a_i − â_i(n),   (25)

e_{b_i}(n) = b_i − b̂_i(n).   (26)

Substituting these definitions into (13), one gets

e(n) = Σ_{i=1}^{q} [(a_i − â_i(n−1)) cos(ω_i n) + (b_i − b̂_i(n−1)) sin(ω_i n)] + v(n)
     = Σ_{i=1}^{q} [e_{a_i}(n−1) cos(ω_i n) + e_{b_i}(n−1) sin(ω_i n)] + v(n).   (27)
4.1. Convergence in the mean sense
Using the DFC estimation error definitions and the above error signal expression in (14) yields

E[e_{a_i}(n)] = E[e_{a_i}(n−1)] − F_{c,i}(n) E[e(n) cos(ω_i n)]
             = [1 − ½ F_{c,i}(n)] E[e_{a_i}(n−1)],   (28)

where E[·] indicates ensemble averaging. In the calculation of the above difference equation, the cosine and sine waves are treated as pseudo-random signals that have zero means and variances of 0.5; see [29, pp. 22–24], [30, pp. 106–107], and [27, pp. 117–122]. In the same way, the difference equation
Fig. 7. Estimation error signals (λ₁ = ⋯ = λ₁₀ = 0.996, μ_i = 2(1 − λ_i), i = 1, 2, …, 10). (a) Error signal by the LMS. (b) Error signal by the RLS. (c) Error signal by the FRLS-IV.
for the sine DFC may be obtained as

E[e_{b_i}(n)] = E[e_{b_i}(n−1)] − F_{s,i}(n) E[e(n) sin(ω_i n)]
             = [1 − ½ F_{s,i}(n)] E[e_{b_i}(n−1)].   (29)

From these difference equations, one may conclude:

(C1-1) The convergences of the estimation errors are independent of each other in the mean sense.

(C1-2) As long as 0 < F_{c,i}(n) < 4 and 0 < F_{s,i}(n) < 4, the convergence of the algorithm in the mean is guaranteed, and the estimation errors vanish at steady state. Therefore, the initial values of F_{c,j}(n) and F_{s,j}(n) need, at least, to be set below 4.

(C1-3) If F_{c,i}(n) = F_{s,i}(n) = μ_i, the difference equations reduce to those derived for the LMS Fourier analyzer [22].
4.2. Convergence in the mean square sense
Squaring both sides of (14) and taking the ensemble average, one has

E[e²_{a_i}(n)] = E[e²_{a_i}(n−1)] − 2 F_{c,i}(n) I_{i,1}(n) + F²_{c,i}(n) I_{i,2}(n),   (30)

where

I_{i,1}(n) = E[e_{a_i}(n−1) e(n) cos(ω_i n)] = ½ E[e²_{a_i}(n−1)]   (31)
and

I_{i,2}(n) = E[e²(n) cos²(ω_i n)]
  = ½ σ²_v + (3/8) E[e²_{a_i}(n−1)] + ¼ Σ_{j=1, j≠i}^{q} E[e²_{a_j}(n−1)]
  + (1/8) E[e²_{b_i}(n−1)] + ¼ Σ_{j=1, j≠i}^{q} E[e²_{b_j}(n−1)]
  + (1/8) Σ_{j₁=1}^{q} Σ_{j₂=1, j₂≠j₁}^{q} E[e_{a_{j₁}}(n−1)] E[e_{a_{j₂}}(n−1)] {δ(ω_{j₁} + ω_{j₂} − 2ω_i) + δ(|ω_{j₁} − ω_{j₂}| − 2ω_i)}
  + (1/8) Σ_{j₁=1}^{q} Σ_{j₂=1, j₂≠j₁}^{q} E[e_{b_{j₁}}(n−1)] E[e_{b_{j₂}}(n−1)] {δ(|ω_{j₁} − ω_{j₂}| − 2ω_i) − δ(ω_{j₁} + ω_{j₂} − 2ω_i)}   (32)

can be obtained after a lengthy process of calculation. δ(·) is a Dirac delta function. Using the above results in (30) yields the following difference equation for the cosine DFC MSE:
E[e²_{a_i}(n)] = [1 − F_{c,i}(n) + (3/8) F²_{c,i}(n)] E[e²_{a_i}(n−1)]
  + (1/8) F²_{c,i}(n) E[e²_{b_i}(n−1)]
  + ¼ F²_{c,i}(n) Σ_{j=1, j≠i}^{q} {E[e²_{a_j}(n−1)] + E[e²_{b_j}(n−1)]}
  + (1/8) F²_{c,i}(n) Σ_{j₁=1}^{q} Σ_{j₂=1, j₂≠j₁}^{q} E[e_{a_{j₁}}(n−1)] E[e_{a_{j₂}}(n−1)] {δ(ω_{j₁} + ω_{j₂} − 2ω_i) + δ(|ω_{j₁} − ω_{j₂}| − 2ω_i)}
  + (1/8) F²_{c,i}(n) Σ_{j₁=1}^{q} Σ_{j₂=1, j₂≠j₁}^{q} E[e_{b_{j₁}}(n−1)] E[e_{b_{j₂}}(n−1)] {δ(|ω_{j₁} − ω_{j₂}| − 2ω_i) − δ(ω_{j₁} + ω_{j₂} − 2ω_i)}
  + ½ F²_{c,i}(n) σ²_v.   (33)
In a similar way, a difference equation for the sine DFC MSE can be derived as follows:

E[e²_{b_i}(n)] = [1 − F_{s,i}(n) + (3/8) F²_{s,i}(n)] E[e²_{b_i}(n−1)]
  + (1/8) F²_{s,i}(n) E[e²_{a_i}(n−1)]
  + ¼ F²_{s,i}(n) Σ_{j=1, j≠i}^{q} {E[e²_{a_j}(n−1)] + E[e²_{b_j}(n−1)]}
  + (1/8) F²_{s,i}(n) Σ_{j₁=1}^{q} Σ_{j₂=1, j₂≠j₁}^{q} E[e_{b_{j₁}}(n−1)] E[e_{b_{j₂}}(n−1)] {δ(ω_{j₁} + ω_{j₂} − 2ω_i) − δ(|ω_{j₁} − ω_{j₂}| − 2ω_i)}
  − (1/8) F²_{s,i}(n) Σ_{j₁=1}^{q} Σ_{j₂=1, j₂≠j₁}^{q} E[e_{a_{j₁}}(n−1)] E[e_{a_{j₂}}(n−1)] {δ(ω_{j₁} + ω_{j₂} − 2ω_i) + δ(|ω_{j₁} − ω_{j₂}| − 2ω_i)}
  + ½ F²_{s,i}(n) σ²_v.   (34)
Now, we make the following comments regarding the difference equations for the convergence in the mean square sense:

(C2-1) Unlike the convergence in the mean sense, the MSEs of the DFCs of a certain frequency component are related not only to the MSEs of all the other frequency components but also to all second-order cross-terms of the DFC estimation errors.

(C2-2) According to our extensive simulations, the cross-terms are usually much smaller than the MSE terms. Therefore, the difference equations are approximately linear in nature.

(C2-3) If F_{c,i}(n) = F_{s,i}(n) = μ_i and the cross-terms are ignored, the difference equations reduce to those derived in [22].

(C2-4) Numerically solving the difference equations derived for the convergence in the mean and mean square senses simultaneously reveals the dynamics and steady-state properties of the algorithm.
4.3. Steady-state properties
At the steady state of the algorithm, as seen in Fig. 3, the gain factors F_{c,i}(n) and F_{s,i}(n) converge to a small positive constant that is determined by the forgetting factor λ_i, as long as λ_i is close to unity. Let

F_{c,i}(n)|_{n→∞} = F_{c,i}(n−1)|_{n→∞} = F_{c,i}(∞),
F_{s,i}(n)|_{n→∞} = F_{s,i}(n−1)|_{n→∞} = F_{s,i}(∞).

Using the above in (16) and taking the ensemble average, one readily obtains

(λ_i − 1) F_{c,i}(∞) = −F²_{c,i}(∞) [ (1/π) ∫₀^π cos² x / (λ_i + F_{c,i}(∞) cos² x) dx ],

which has the meaningful solution

F_{c,i}(∞) = (1 − λ²_i)/λ_i ≈ 2(1 − λ_i).   (35)

Similarly, from (17) one may reach

F_{s,i}(∞) = F_{c,i}(∞).   (36)
Now, using the above relation together with

E[e_{a_i}(n)]|_{n→∞} = E[e_{a_i}(n−1)]|_{n→∞} = 0,
E[e_{b_i}(n)]|_{n→∞} = E[e_{b_i}(n−1)]|_{n→∞} = 0,
E[e²_{a_i}(n)]|_{n→∞} = E[e²_{a_i}(n−1)]|_{n→∞} = E[e²_{a_i}(∞)],
E[e²_{b_i}(n)]|_{n→∞} = E[e²_{b_i}(n−1)]|_{n→∞} = E[e²_{b_i}(∞)],

(33) and (34) reduce to
[F_{c,i}(∞) − (3/8) F²_{c,i}(∞)] E[e²_{a_i}(∞)]
  = (1/8) F²_{c,i}(∞) E[e²_{b_i}(∞)] + ½ F²_{c,i}(∞) σ²_v
  + ¼ F²_{c,i}(∞) Σ_{j=1, j≠i}^{q} (E[e²_{a_j}(∞)] + E[e²_{b_j}(∞)]),   (37)

[F_{c,i}(∞) − (3/8) F²_{c,i}(∞)] E[e²_{b_i}(∞)]
  = (1/8) F²_{c,i}(∞) E[e²_{a_i}(∞)] + ½ F²_{c,i}(∞) σ²_v
  + ¼ F²_{c,i}(∞) Σ_{j=1, j≠i}^{q} (E[e²_{a_j}(∞)] + E[e²_{b_j}(∞)]).   (38)
Subtracting both sides of (38) from (37) leads to

E[e²_{a_i}(∞)] = E[e²_{b_i}(∞)].   (39)

Putting this relation back into (38), one gets

E[e²_{a_i}(∞)] = ½ F_{c,i}(∞) Σ_{j=1}^{q} E[e²_{a_j}(∞)] + ½ F_{c,i}(∞) σ²_v.   (40)
Summing the above equations over all i yields

Σ_{i=1}^{q} E[e²_{a_i}(∞)] = σ²_v Σ_{i=1}^{q} F_{c,i}(∞) / [2 − Σ_{i=1}^{q} F_{c,i}(∞)].   (41)
Using the above result in (40), one ultimately obtains the steady-state MSE as

E[e²_{a_i}(∞)] = F_{c,i}(∞) σ²_v / [2 − Σ_{j=1}^{q} F_{c,j}(∞)]
  = (1 − λ²_i) σ²_v / {λ_i [2 − Σ_{j=1}^{q} (1 − λ²_j)/λ_j]}
  ≈ (1 − λ_i) σ²_v / [λ_i − Σ_{j=1, j≠i}^{q} (1 − λ_j)].   (42)
Clearly, we see that:

(C3-1) The larger the forgetting factor, the smaller the steady-state MSE tends to be, but the slower the convergence becomes.

(C3-2) The MSE of the ith DFC has little to do with the forgetting factors of the other DFCs, as long as the forgetting factors are all set close to unity.

(C3-3) The MSE is proportional to the noise variance, just as for the LMS algorithm.

(C3-4) The selection of the forgetting factors must satisfy

λ_i > 0,  1 − λ²_i > 0,  Σ_{j=1}^{q} (1 − λ²_j)/λ_j < 2,

or, equivalently,

1 − λ_i > 0,  λ_i > Σ_{j=1, j≠i}^{q} (1 − λ_j).

These relations serve as a coarse stability bound for the algorithm. If a uniform forgetting factor λ is used, the stability condition reduces to (q − 1)/q < λ < 1. This condition implies that λ needs to be set closer to unity for the sake of stability when the number of frequency components becomes larger.
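The closed-form steady-state MSE (42) is easy to evaluate numerically; the sketch below uses λ = 0.985 and σ_v = 0.32 only as illustrative values (they match the simulation settings reported in the figures), and also enforces the coarse stability bound of (C3-4).

```python
import numpy as np

def steady_state_mse(lams, noise_var):
    """Steady-state DFC MSEs of Eq. (42)."""
    lams = np.asarray(lams, dtype=float)
    F = (1.0 - lams**2) / lams          # steady-state gains, Eq. (35)
    assert np.sum(F) < 2.0              # coarse stability bound, (C3-4)
    return F * noise_var / (2.0 - np.sum(F))

mse = steady_state_mse([0.985, 0.985, 0.985], 0.32**2)
mse_db = 10.0 * np.log10(mse)
# about -27.9 dB per DFC for these illustrative values
```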
4.4. Comparison with the LMS algorithm
Let us reproduce the MSE of the LMS algorithm [22]:

E[e²_{a_i}(∞)] = μ_i σ²_v / (2 − Σ_{j=1}^{q} μ_j).   (43)

Comparing this expression with (42), we see that if

μ_i = F_{c,i}(∞) = (1 − λ²_i)/λ_i ≈ 2(1 − λ_i),   (44)

the FRLS and the LMS algorithms will produce exactly the same MSE at steady state. If this relation is satisfied, both algorithms may be compared fairly in terms of their convergence speeds.
4.5. Simulations
Extensive simulations were conducted to demonstrate the validity of the analytical results. First, the dynamics of the FRLS-II and the LMS are compared analytically and by simulations. Some typical results are given in Fig. 8, where the convergence of the FRLS-II and the LMS in the mean and mean square senses is compared. Obviously, the difference equations derived provide very good agreement with the simulated dynamics, and the FRLS-II converges much faster than the LMS. Second, the steady-state MSE expression is compared with simulations in Figs. 9 and 10 with respect to the forgetting factor and the SNR, respectively. A very close fit between theory and simulations is observed.
5. Modified fast RLS algorithms in the presence of frequency mismatch

In the conventional RLS and the proposed FRLS algorithms, the signal frequencies are provided in advance. However, the actual signal frequencies may differ somewhat from their given values. That is, an FM, large or small, usually exists in real-life applications. For the LMS-based Fourier analyzer, we have developed a scheme to fix the FM problem
Fig. 8. Comparisons between theory and simulations for the FRLS-II and LMS algorithms (signal frequencies = {0.10π, 0.20π, 0.30π}, a₁ = 2.0, b₁ = 1.0, a₂ = 1.0, b₂ = 0.5, a₃ = 0.25, b₃ = 0.10; RLS: λ = 0.985; FRLS-II: λ₁ = λ₂ = λ₃ = 0.985; LMS: μ_i = 2(1 − λ_i), i = 1, 2, 3; σ_v = 0.32, 100 runs). (a) E[e_{a₁}(n)]. (b) E[e²_{a₁}(n)].
[25]. Here, we extend this scheme to the RLS
and the FRLS algorithms to make them robust to
the FM.
5.1. Modified FRLS algorithms

In Fig. 11, the error signals produced by the FRLS-II algorithm with and without FM are presented. Obviously, an FM (defined by Δω_i = ω_{0,i} − ω_i) of two percent (|Δω_i|/ω_i × 100 = 2%) completely spoils the performance of the algorithm. The same is true for the RLS and the other fast RLS algorithms. Therefore, if an FM exists and is not tiny, the performance of the RLS and the proposed FRLS algorithms degrades significantly. This calls for an adaptive scheme to accommodate the FM. We propose the structure given in Fig. 12 to deal with the FM.
Based on the fact that a sinusoid may be modeled as a second-order AR process without random input [25], the input elements of the linear combiner, x_{a_i}(n) = cos(ω_i n) and x_{b_i}(n) = sin(ω_i n), can be expressed as

x_{a_i}(n) = c_i(n − 1) x_{a_i}(n − 1) − x_{a_i}(n − 2),  n ≥ 2;  x_{a_i}(0) = 1,  x_{a_i}(1) = cos ω_i,   (45)

x_{b_i}(n) = c_i(n − 1) x_{b_i}(n − 1) − x_{b_i}(n − 2),  n ≥ 2;  x_{b_i}(0) = 0,  x_{b_i}(1) = sin ω_i,   (46)

c_i(n) = 2 cos(ω_i(n));  ω_i(0) = ω_i(1) = ω_i.   (47)
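As a quick sanity check on (45)–(47), the following Python sketch (ours, not from the paper; the frequency value is an arbitrary assumption) iterates the second-order AR recursions with a constant frequency, i.e., c_i(n) = 2 cos ω_i, and confirms that they reproduce cos(ω_i n) and sin(ω_i n) exactly.

```python
import math

# Second-order AR recursions (45)-(46) with a constant frequency:
# with c = 2*cos(w), the recursions reproduce cos(w*n) and sin(w*n).
def ar_sinusoid(w, n_samples):
    c = 2.0 * math.cos(w)
    xa = [1.0, math.cos(w)]          # x_a(0) = 1, x_a(1) = cos(w)
    xb = [0.0, math.sin(w)]          # x_b(0) = 0, x_b(1) = sin(w)
    for n in range(2, n_samples):
        xa.append(c * xa[n - 1] - xa[n - 2])
        xb.append(c * xb[n - 1] - xb[n - 2])
    return xa, xb

w = 0.1 * math.pi                    # assumed test frequency
xa, xb = ar_sinusoid(w, 200)
# The AR outputs coincide with the true cosine/sine samples.
assert all(abs(xa[n] - math.cos(w * n)) < 1e-9 for n in range(200))
assert all(abs(xb[n] - math.sin(w * n)) < 1e-9 for n in range(200))
```

The point of the AR form is that the frequency enters only through the single coefficient c_i(n), which is what makes it adaptable in Section 5.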
There are two ways to update the frequency-related coefficients {c_i(n)}_{i=1}^q. One is to update all of them together; the other is to update them one by one, i.e., sequentially. Simulations show that the former converges slightly faster, but the latter requires much less computation. Here, we choose the sequential way to update {c_i(n)}_{i=1}^q. The RLS-based recursion is as follows:

c_i(n) = c_i(n − 1) + G_i(n) α_i(n) e(n),   (48)
Fig. 9. Comparisons between theory and simulations for the FRLS-II algorithm (E[e²_{a_1}(∞)] versus forgetting factor; signal frequencies = {0.10π, 0.20π, 0.30π}; a_1 = 2.0, b_1 = 1.0, a_2 = 1.0, b_2 = 0.5, a_3 = 0.25, b_3 = 0.10; λ_1 = λ_2 = λ_3 = λ; σ_v = 0.32; 40 runs).
Fig. 10. Comparisons between theory and simulations for the FRLS-II algorithm (E[e²_{a_1}(∞)] versus SNR; signal frequencies = {0.10π, 0.20π, 0.30π}; a_1 = 2.0, b_1 = 1.0, a_2 = 1.0, b_2 = 0.5, a_3 = 0.25, b_3 = 0.10; λ_1 = 0.97, λ_2 = 0.98, λ_3 = 0.99; SNR = 10 log_10(1/σ_v²); 40 runs).
Fig. 11. Error signals produced by the FRLS-II with and without FM (true signal frequencies = {0.102π, 0.194π, 0.306π}; user-specified frequencies = {0.1π, 0.2π, 0.3π}; λ_1 = λ_2 = λ_3 = 0.99; σ_v = 0.32; other conditions the same as in Fig. 10; 100 runs).
where

G_i(n) = (1/λ_{c,i}) [ G_i(n − 1) − G_i²(n − 1) α_i²(n) / (λ_{c,i} + G_i(n − 1) α_i²(n)) ],   (49)

α_i(n) = â_i(n) x_{a_i}(n − 1) + b̂_i(n) x_{b_i}(n − 1),   (50)
and the gradient-based recursion is given by

c_i(n) = c_i(n − 1) + μ_{c,i} α_i(n) e(n),  μ_{c,i} > 0,   (51)

where λ_{c,i} is a forgetting factor and μ_{c,i} is a step size parameter. Simulations have shown that both recursions work well. The RLS-based recursion normally converges slightly faster than the gradient-based one, but its parameter adjustment is quite delicate; the latter is therefore recommended for real-world applications.
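The scalar RLS gain recursion (49) can be checked numerically: it is algebraically equivalent to accumulating the exponentially weighted power of α_i(n) directly, i.e., 1/G_i(n) = λ_{c,i}/G_i(n − 1) + α_i²(n). The Python sketch below is our illustration; the stand-in regressor, forgetting factor, and initial gain are arbitrary assumptions.

```python
import math

# Verify that the gain recursion (49) matches the direct accumulation
# 1/G(n) = lam/G(n-1) + alpha(n)^2 at every step.
lam = 0.985                    # assumed forgetting factor lambda_{c,i}
G = 10.0                       # G_i(0): a large initial gain
inv_direct = 1.0 / G           # running value of 1/G(n) accumulated directly
for n in range(1, 500):
    alpha = math.cos(0.1 * math.pi * n)   # stand-in regressor alpha_i(n)
    # Recursion (49):
    G = (1.0 / lam) * (G - G * G * alpha**2 / (lam + G * alpha**2))
    # Direct exponentially weighted accumulation:
    inv_direct = lam * inv_direct + alpha**2
    assert abs(G - 1.0 / inv_direct) < 1e-9
```

This equivalence is what makes (49) a one-division-per-step scalar update rather than a matrix inversion.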
In the sequel, replacing cos(ω_i n) and sin(ω_i n) in the RLS and FRLS algorithms by x_{a_i}(n) and x_{b_i}(n), respectively, and incorporating one of the above recursions, one obtains a modified RLS (MRLS) and four modified FRLS (MFRLS) algorithms [26]. Simulations have shown that all the modified RLS-type analyzers compensate very well for the performance degradation due to the FM. Fig. 13 shows the error powers produced by the FRLS-II without FM and the MFRLS-II with FM. The MFRLS-II removes the influence of FM very effectively.
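To illustrate how the gradient-based recursion (51) accommodates an FM, the following toy Python sketch runs a single-sinusoid analyzer with a 1% mismatch. It is our illustration, not the paper's MFRLS: for brevity the DFC combiner uses plain LMS rather than RLS/FRLS, and all step sizes and signal parameters are assumptions. The point is only that adapting c(n) via (50)–(51) pulls the generated frequency toward the true one and shrinks the residual error relative to a frozen-c analyzer.

```python
import math

# Toy FM-accommodation sketch: one noiseless sinusoid, true frequency
# 1% above the user-specified one.  LMS combiner (a simplification);
# c(n) follows the gradient recursion (51) with alpha(n) of (50).
w_true, w_user = 0.202 * math.pi, 0.2 * math.pi
a_true, b_true = 2.0, 1.0            # assumed true DFCs
mu_ab, mu_c = 0.02, 0.0002           # assumed step sizes

def run(adapt_c):
    c = 2.0 * math.cos(w_user)       # c(0) from the given frequency
    xa1, xa0 = math.cos(w_user), 1.0 # x_a(1), x_a(0)
    xb1, xb0 = math.sin(w_user), 0.0 # x_b(1), x_b(0)
    ah = bh = 0.0                    # DFC estimates a_hat, b_hat
    tail = []                        # squared errors over the last stretch
    for n in range(2, 30000):
        xa = c * xa1 - xa0           # AR recursions (45)-(46)
        xb = c * xb1 - xb0
        d = a_true * math.cos(w_true * n) + b_true * math.sin(w_true * n)
        e = d - (ah * xa + bh * xb)
        ah += mu_ab * e * xa         # LMS combiner update
        bh += mu_ab * e * xb
        if adapt_c:
            alpha = ah * xa1 + bh * xb1   # Eq. (50)
            c += mu_c * alpha * e         # Eq. (51)
        xa0, xa1 = xa1, xa
        xb0, xb1 = xb1, xb
        if n >= 28000:
            tail.append(e * e)
    return c, sum(tail) / len(tail)

c_ad, mse_ad = run(True)             # with FM accommodation
c_fr, mse_fr = run(False)            # frozen c(n): FM uncompensated
```

With these assumed settings the adapted c(n) implies a recovered frequency arccos(c/2) closer to the true value than the user-specified one, and a clearly smaller residual error than the frozen-c analyzer.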
Comparisons with a recently proposed LMS-based analyzer [25] have also been carried out. Simulations reveal that the modified algorithms are all effective in mitigating the influence of FM, and provide performance that is similar to or even better than that of the LMS-based analyzer. Fig. 14 presents a comparison between the modified FRLS-II (MFRLS-II) and the LMS-based analyzer, where the RLS-based recursion for the frequency-related coefficients is used in the MFRLS-II. Clearly, the MFRLS-II outperforms the LMS-based analyzer. It should be noted that the RLS and FRLS algorithms always provide better DFC estimates than the LMS analyzer when there is no FM, but this performance advantage of the RLS-type algorithms over the LMS is not fully retained when a significant FM exists. Simulations have revealed that the MFRLS algorithms may sometimes produce analysis errors similar to those generated by the LMS analyzer. When the FM is very large, other techniques based on FIR or IIR notch filtering have to be considered.
5.2. Comparison with a similar algorithm [17]
The MFRLS-I was proposed at ISCAS 2004 (May 2004) [26]. A similar complex-valued RLS algorithm was proposed in the same year at an IFAC workshop (September 2004) [17], and it can easily be converted into a real-valued version. We show only the equations relevant to this comparison. The recursions for the cosine and sine waves
Fig. 13. Comparison between the FRLS-II without FM and the MFRLS-II with FM (λ_{c,1} = λ_{c,2} = λ_{c,3} = 0.9995; other conditions the same as in Fig. 11).
Fig. 12. Adaptive Fourier analyzer with FM accommodation function.
are given by

x_{a_i}(n) = cos(ω̂_i(n)) x_{a_i}(n − 1) − sin(ω̂_i(n)) x_{b_i}(n − 1),   (52)

x_{b_i}(n) = sin(ω̂_i(n)) x_{a_i}(n − 1) + cos(ω̂_i(n)) x_{b_i}(n − 1),   (53)

where ω̂_i(n) is a frequency estimate for ω_i at time instant n, which is updated in an LMS-like way:

ω̂_i(n + 1) = ω̂_i(n) − μ_{ω,i} β_i(n) e(n),  μ_{ω,i} > 0,   (54)

where μ_{ω,i} is a step size parameter and

β_i(n) = {â_i(n) sin(ω̂_i(n)) − b̂_i(n) cos(ω̂_i(n))} x_{a_i}(n − 1) + {â_i(n) cos(ω̂_i(n)) + b̂_i(n) sin(ω̂_i(n))} x_{b_i}(n − 1).   (55)
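One structural difference between (52)–(53) and the AR recursions (45)–(46) is worth noting: the rotation form updates the pair (x_{a_i}, x_{b_i}) with an orthogonal matrix, so the amplitude x²_{a_i}(n) + x²_{b_i}(n) is preserved exactly even while ω̂_i(n) varies. A minimal Python sketch of this invariant (ours; the wandering frequency trajectory is an arbitrary assumption):

```python
import math

# Rotation-form recursions (52)-(53): each step rotates (x_a, x_b) by
# w_hat(n), so x_a^2 + x_b^2 stays exactly 1 even for time-varying w_hat.
xa, xb = 1.0, 0.0
for n in range(1, 1000):
    w_hat = 0.2 * math.pi + 0.001 * math.sin(0.01 * n)  # wandering estimate
    xa, xb = (math.cos(w_hat) * xa - math.sin(w_hat) * xb,
              math.sin(w_hat) * xa + math.cos(w_hat) * xb)
    assert abs(xa * xa + xb * xb - 1.0) < 1e-9
```

The AR form, by contrast, only preserves the amplitude for a constant c_i; this is one reason the two schemes behave slightly differently during frequency adaptation.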
Extensive simulations have been conducted to
compare both algorithms in terms of their capabil-
ities of removing the FM, i.e., frequency estimation
MSEs. The signal frequencies, SNRs, and user
parameters such as the forgetting factors and LMS
Fig. 14. Comparisons between the MFRLS-II and the LMS-based algorithms (true signal frequencies = {0.11π, 0.18π, 0.33π}; user-specified frequencies = {0.1π, 0.2π, 0.3π}; MFRLS-II: λ_1 = λ_2 = λ_3 = 0.975, λ_{c,1} = 0.99995, λ_{c,2} = 0.99995, λ_{c,3} = 0.99965; LMS: μ_i = 2(1 − λ_i), μ_{c,1} = 0.0001, μ_{c,2} = 0.0005, μ_{c,3} = 0.01; σ_v = 0.1; other conditions the same as in Fig. 10; 40 runs). (a) MSE of c_1(n). (b) MSE of c_2(n). (c) MSE of c_3(n).
step sizes (μ_{ω,i}, μ_{c,i}) were changed systematically to achieve complete and fair comparisons between them. On the whole, it has been found that the converted algorithm performs similarly to, or slightly better than, the MFRLS-I. A typical comparison for a single frequency is given in Fig. 15, where both algorithms exhibit similar dynamics and steady-state MSEs.
It should be noted that the LMS update of the frequency-related coefficients requires 6q multiplications in the MFRLS algorithms, while the LMS update of the frequencies in the above converted algorithm requires 10q multiplications plus the calculations of sin(ω̂_i(n)) and cos(ω̂_i(n)) for i = 1, 2, …, q. As a result, the MFRLS algorithms cost approximately only half of what the converted algorithm does for FM removal.
6. Conclusions
In this paper, four fast RLS (FRLS) algorithms based on the inherent characteristics of the DFC estimation problem have been presented. These FRLS algorithms perform approximately the same as the RLS does, but require considerably less computation. A detailed performance analysis of these FRLS algorithms has been provided: difference equations governing their dynamics as well as closed-form expressions for their steady-state MSEs have been derived and compared with those of the LMS Fourier analyzer. The RLS and FRLS algorithms have also been modified to alleviate the influence of the FM. Extensive simulations as well as application to real noise signals have been provided to demonstrate the relative performance capabilities of the four FRLS and the RLS algorithms, the validity of the analytical results, and the ability of the modified RLS-type algorithms to mitigate the influence of FM. Analysis of the MFRLS algorithms is a topic for further research.
Appendix A. Steady-state gain matrix F_j(n)
As seen in Fig. 2, the four elements of F_j(n) in (9) converge to small constants that appear to be determined by the forgetting factor λ_j, as long as λ_j is close to unity. It is difficult to solve (9) analytically due to its nonlinearity. However, we may obtain some insight from (9) by examining its steady-state properties. Let the four elements of F_j(n) at steady state be

F_{1,1}(n)|_{n→∞} = F_{1,1}(n − 1)|_{n→∞} = F_{1,1},
F_{1,2}(n)|_{n→∞} = F_{1,2}(n − 1)|_{n→∞} = F_{1,2},
F_{2,1}(n)|_{n→∞} = F_{2,1}(n − 1)|_{n→∞} = F_{2,1} = F_{1,2},
F_{2,2}(n)|_{n→∞} = F_{2,2}(n − 1)|_{n→∞} = F_{2,2},

where the subscript j is omitted for notational simplicity. Now, let us assume that F_{1,1} = F_{2,2} and F_{1,2} = 0. If the solutions of (9) derived under these assumptions pose no contradiction, the solutions so obtained are the true ones. Taking the ensemble average of both sides of (9) and applying the assumptions only to the right-hand side of (9), for each element of F_j(n) we have
(λ − 1)F_{1,1} = −F²_{1,1} [ (1/π) ∫₀^π cos²x / (λ + F_{1,1}) dx ],   (56)

(λ − 1)F_{1,2} = −F_{1,1} F_{2,2} [ (1/π) ∫₀^π cos x sin x / (λ + F_{1,1}) dx ],   (57)

(λ − 1)F_{2,2} = −F²_{2,2} [ (1/π) ∫₀^π sin²x / (λ + F_{1,1}) dx ].   (58)

Clearly, from (56) and (58), one has

F_{1,1} = F_{2,2} = 2λ(1 − λ)/(2λ − 1) ≈ 2(1 − λ).   (59)

Using the above result in (57), F_{1,2} = 0 is reached.
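A quick numerical check of (59) is possible (our sketch, not from the paper; the regressor frequency and forgetting factor are assumed values): iterating the scalar gain recursion with a sinusoidal regressor and time-averaging at steady state lands close to F_{1,1} = 2λ(1 − λ)/(2λ − 1) ≈ 2(1 − λ).

```python
import math

# Iterate the scalar RLS gain recursion with a cosine regressor and
# compare its steady-state time average with the closed form (59).
lam = 0.985                 # assumed forgetting factor
G = 1.0                     # initial gain
vals = []
for n in range(1, 20000):
    x = math.cos(0.1 * math.pi * n)
    G = (1.0 / lam) * (G - G * G * x * x / (lam + G * x * x))
    if n > 10000:           # discard the transient
        vals.append(G)
G_avg = sum(vals) / len(vals)
F11 = 2 * lam * (1 - lam) / (2 * lam - 1)   # closed form (59)
```

Under these assumptions G_avg agrees with F11 to within a few percent, and F11 itself differs from the approximation 2(1 − λ) only in the third decimal place.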
In the sequel, the solutions obtained are in complete agreement with the assumptions introduced. This implies that the nondiagonal elements of the correlation matrix in (9) make no contribution
Fig. 15. Comparison between the MFRLS-I with LMS FM compensation and an FRLS-I-like algorithm derived from [17] (true signal frequency ω_1 = 0.3π; user-specified frequency = 0.305π; λ = 0.975; μ_c = 0.00075; μ_ω = 0.0005; σ_v = 0.32; 100 runs).
to the RLS update at the steady state of the algorithm, which supports in part the ideas used to develop the FRLS-II, III and IV.
References
[1] International Conference on Harmonics in Power Systems, Institute of Science and Technology, University of Manchester (Power System Engineering Series), Manchester, UK, 1981.
[2] G.H. Hostetter, Recursive discrete Fourier transform, IEEE Trans. Acoust., Speech, Signal Process. ASSP-28 (4) (April 1980) 183–190.
[3] G. Peceli, A common structure for recursive discrete transforms, IEEE Trans. CAS 33 (10) (October 1986) 1035–1036.
[4] P. Baudrenghien, The Adaptive Spectrum Analyzer, Stanford University Press, Stanford, CA, 1984.
[5] B. Widrow, P. Baudrenghien, M. Vetterli, P. Titchener, Fundamental relations between the LMS algorithm and the DFT, IEEE Trans. CAS 34 (7) (July 1987) 814–820.
[6] C.A. Vaz, N.V. Thakor, Adaptive Fourier estimation of time-varying evoked potentials, IEEE Trans. Biomed. Eng. 36 (1989) 448–455.
[7] H.C. So, Adaptive algorithm for sinusoidal interference cancellation, Electron. Lett. 33 (5) (1997) 356–357.
[8] T. Umemoto, N. Aoshima, The adaptive spectrum analysis for transcription, Trans. Soc. Instrum. Contr. Engineers (SICE) 28 (5) (1992) 619–625 (in Japanese).
[9] S. Osowski, Neural network for estimation of harmonic components in a power system, Proc. Inst. Elect. Eng. part C 139 (2) (March 1992) 129–135.
[10] S. Pei, C. Tseng, Real time cascade adaptive notch filter scheme for sinusoidal parameter estimation, Signal Process. 39 (1994) 117–130.
[11] R.R. Bitmead, A.C. Tsoi, P.J. Parker, A Kalman filtering approach to short-time Fourier analysis, IEEE Trans. Acoust., Speech, Signal Process. ASSP-34 (12) (December 1986) 1493–1501.
[12] P. Gruber, J. Todtli, Estimation of quasiperiodic signal parameters by means of dynamic signal models, IEEE Trans. Signal Process. 42 (3) (March 1994) 552–562.
[13] S.H. Park, W.H. Kwon, O.K. Kwon, M.J. Kim, Short-time Fourier analysis via optimal harmonic FIR filters, IEEE Trans. Signal Process. 45 (6) (June 1997) 1535–1542.
[14] Y. Tadokoro, K. Abe, Notch Fourier transform, IEEE Trans. Acoust., Speech, Signal Process. ASSP-35 (9) (September 1987) 1282–1288.
[15] M.T. Kilani, J.F. Chicharo, A constrained notch Fourier transform, IEEE Trans. Signal Process. 43 (9) (September 1995) 2058–2067.
[16] J.R. Glover Jr., Adaptive noise canceling applied to sinusoidal interferences, IEEE Trans. Acoust., Speech, Signal Process. ASSP-25 (6) (December 1977) 484–491.
[17] M. Niedzwiecki, P. Kaczmarek, Adaptive notch filters based on combined parametric and nonparametric approach, in: IFAC Workshop on Adaptation and Learning in Control and Signal Processing, September 2004, pp. 439–444.
[18] F. Nagy, Measurement of signal parameters using nonlinear observers, IEEE Trans. Instrum. Meas. 41 (February 1992) 152–155.
[19] CCITT Blue Book, Recommendation Q.23: Technical Features of Push-Button Telephone Sets, Geneva, 1989.
[20] CCITT Blue Book, Recommendation Q.24: Multi-Frequency Push-Button Signal Reception, Geneva, 1989.
[21] G. Arslan, B.L. Evans, F.A. Sakarya, J.L. Pino, Performance evaluation and real-time implementation of subspace, adaptive, and DFT algorithms for multi-tone detection, in: Proceedings of the IEEE International Conference on Telecommunications, Turkey, 1996, pp. 884–887.
[22] Y. Xiao, Y. Tadokoro, K. Iwamoto, Real-valued LMS Fourier analyzer for sinusoidal signals in noise, Signal Process. 69 (2) (1998) 131–147.
[23] Y. Xiao, Y. Tadokoro, K. Shida, Adaptive algorithm based on least mean p-power error criterion for Fourier analysis in additive noise, IEEE Trans. Signal Process. 47 (4) (1999) 1172–1181.
[24] N.K. Msirdi, et al., An RML algorithm for retrieval of sinusoids with cascaded notch filters, Proc. ICASSP (1988) 2484–2487.
[25] Y. Xiao, R. Ward, L. Ma, A. Ikuta, A new LMS-based Fourier analyzer in the presence of frequency mismatch and applications, IEEE Trans. CAS-I 52 (1) (January 2005) 230–245.
[26] Y. Xiao, L. Ma, R. Ward, L. Xu, Fast RLS Fourier analyzers in the presence of frequency mismatch, Proc. of ISCAS VI (May 2004) 73–76.
[27] S.M. Kuo, D.R. Morgan, Active Noise Control Systems, Algorithms and DSP Implementations, Wiley, New York, 1996.
[28] Y. Xiao, L. Ma, K. Khorasani, A. Ikuta, A robust narrowband active noise control system in the presence of frequency mismatch, IEEE Trans. Audio, Speech Lang. Process. 14 (6) (November 2006) 2189–2200.
[29] B. Widrow, S.D. Stearns, Adaptive Signal Processing, Prentice-Hall, Upper Saddle River, NJ, 1985.
[30] S. Haykin, Adaptive Filter Theory, third ed., Prentice-Hall, Upper Saddle River, NJ, 1996.