
Minimum Mean Squared Error (MMSE) Equalization Using Priors

Michael Tüchler, Andrew Singer, Ralf Koetter

July 3, 2000

Abstract

For data transmission over channels with intersymbol interference (ISI), a soft-in soft-out (SISO) equalizer is introduced that corrects for the ISI distortion using linear filtering. In addition to the corrupted symbols from the channel, it is assumed that the SISO equalizer has prior information available about the occurrence probability of the value of each symbol. Algorithms are derived to incorporate this prior knowledge in an efficient way. Exact and approximate implementations of the algorithm are derived, whose complexity grows quadratically and linearly, respectively, with the equalizer filter length and the channel impulse response length.

1 Introduction

In many practical communication scenarios, digital data is transmitted over analog channels with an impulse response that is significantly longer than the symbol period. The resulting distortion is referred to as inter-symbol interference (ISI). Such channels occur, e.g., in wireless communication, magnetic recording, or underwater data transmission. The received signal is further corrupted by noise introduced during transmission over the channel or in the receiver front end. Figure 1 depicts an uncoded data transmission system in its discrete-time baseband representation. The data is mapped to, in general, complex-valued symbols chosen from a given signaling constellation. The channel is assumed to have a finite-length impulse response (FIR) with time-invariant coefficients h_k, k = −M1, −M1+1, ..., M2, and overall response length M = M1 + M2 + 1. The h_k are assumed to be known or at least available as estimates ĥ_k. The noise is modeled as additive.

A common approach in many practical systems for recovering the transmitted data in the presence of ISI and noise is minimum mean squared error (MMSE) equalization, as shown in Figure 1. We consider linear equalizers (LE) and decision feedback equalizers (DFE). Both obtain estimates of the transmitted symbols by linear filtering of the received symbols, while the DFE forms a symbol estimate by also filtering the past decisions. The functional elements of such equalizers are linear filters, whose parameters are determined based on the channel response and a cost criterion, such as the zero-forcing (ZF) or MMSE criterion.

[Figure 1 shows the block diagram: data → Signal Mapper → ISI Channel (with additive noise) → Equalizer (with prior information as a second input) → Inverse Signal Mapper → data estimate.]

Figure 1: Outline of an uncoded, discrete-time, baseband data transmission system.


The optimum receiver with respect to minimization of the symbol or bit error rate (BER) is a maximum a-posteriori probability (MAP) detector [1, 2]. This detector does not reduce ISI, but tries to find the most likely channel input symbol given the output symbols disrupted by noise, which is equivalent to minimizing the BER. Also very popular are detectors performing maximum likelihood sequence estimation [3], where the most likely symbol sequence is obtained to minimize the sequence error rate. With respect to the MAP criterion, LE and DFE approaches are inherently suboptimal.

In our data transmission scenario, the equalizer also has available prior information about the value of each possible transmitted symbol at any point in time. This prior information may stem from the original data source. It is assumed to vary on a symbol-by-symbol basis, for example, when the priors arise from the output of a convolutional decoder as in a linear "turbo equalization" receiver [4]. The main objective of this paper is to provide a framework for linear equalization techniques in the context of iterative algorithms. The algorithms proposed in this paper have been implemented in receivers performing iterative equalization [4, 5].

In this paper, we derive equalization algorithms implementing an LE and a DFE based on the MMSE criterion (MMSE-LE and MMSE-DFE), which have the ability to use the given prior information to improve performance. The equalizer can be viewed as a soft-in soft-out (SISO) device, processing the received symbols and the prior information to output soft posterior information about the data. For both approaches, efficient implementations are developed. We show the BER performance of the proposed algorithms given specific input prior constellations and compare the results to a BER-optimal receiver algorithm (the MAP detector).

2 Basics

For the derivations of the algorithms, we define a specific data transmission system, depicted in Figure 2, and note that extensions to more elaborate systems, i.e., multiple channels or higher-order signal constellations, are straightforward. For simplicity, we assume binary phase shift keying (BPSK), yielding symbols x_n taken from the alphabet {−1, +1} and assumed to be independent at the transmitter. We also assume additive white Gaussian noise (AWGN), i.e., the noise samples w_n are independent and identically distributed (i.i.d.) with normal probability density function (pdf)

    f_w(w) = N(0, σ_w²),  where  N(μ, σ²) ≜ ( 1 / (√(2π) σ) ) · exp( −(w − μ)² / (2σ²) ).    (1)

[Figure 2 shows the system used here: x_n → ISI Channel h[n] → (+ noise w_n) → z_n → Soft-In Soft-Out Equalizer, consisting of an MMSE estimator producing x̂_n and a mapper producing L_e^MMSE(x_n) and L^MMSE(x_n | z), with prior input L(x_n).]

Figure 2: Particular data transmission system used for the derivations.
The channel output symbols z_n are given as

    z_n = Σ_{k=−M1}^{M2} h_k x_{n−k} + w_n.    (2)
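In code, Eq. (2) is a finite convolution plus noise. A minimal NumPy sketch (channel taps, block length, and noise level are illustrative choices, not values from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

# Channel h_k, k = -M1..M2; here M1 = M2 = 1 (taps chosen for illustration)
h = np.array([0.3, 0.9, 0.3])   # [h_{-M1}, ..., h_{M2}]
sigma_w = 0.1

# BPSK symbols x_n from {-1, +1}
x = rng.choice([-1.0, 1.0], size=20)

# z_n = sum_k h_k x_{n-k} + w_n  (Eq. (2))
z = np.convolve(x, h, mode="same") + sigma_w * rng.standard_normal(x.size)
```

The mode="same" alignment keeps z_n indexed like x_n for a symmetric support M1 = M2.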

The sequence z = [ ⋯ z_{n−1}  z_n  z_{n+1} ⋯ ]^T is the input of the SISO equalizer, which furthermore has available the prior information L(x_n) for each x_n, defined as the log-likelihood ratio (LLR)

    L(x_n) ≜ ln( Pr{x_n = +1} / Pr{x_n = −1} ).    (3)

The posterior information of a MAP-based detector would be the LLR

    L^MAP(x_n) ≜ ln( Pr{x_n = +1 | z} / Pr{x_n = −1 | z} ).

The quantity L^MAP(x_n) can be broken up using Bayes' rule into the sum of the prior information L(x_n) and the "extrinsic" information L_e^MAP(x_n), which is the knowledge about the value of x_n in addition to L(x_n):

    L^MAP(x_n) = ln( Pr{z | x_n = +1} / Pr{z | x_n = −1} ) + ln( Pr{x_n = +1} / Pr{x_n = −1} )
               = ln( Σ_{x: x_n=+1} p(z | x) Π_{j≠n} Pr{x_j} / Σ_{x: x_n=−1} p(z | x) Π_{j≠n} Pr{x_j} ) + ln( Pr{x_n = +1} / Pr{x_n = −1} )
               ≜ L_e^MAP(x_n) + L(x_n),    (4)

where the summation required to compute L_e^MAP(x_n) includes all possible sequences x = [ ⋯ x_{n−1}  x_n  x_{n+1} ⋯ ]^T. Rather than computing Eq. (4), which may be computationally expensive, the proposed SISO equalizer first computes the symbol estimate x̂_n in terms of z and L(x_n) based on the criterion E{ |x_n − x̂_n|² }. Here, the operator E{·} is the expectation over the pdf of the transmitted symbols x_n and that of the noise w_n. The output of the SISO equalizer is the quantity

    L^MMSE(x_n) ≜ ln( Pr{x_n = +1 | x̂_n} / Pr{x_n = −1 | x̂_n} )
                = ln( p(x̂_n | x_n = +1) / p(x̂_n | x_n = −1) ) + ln( Pr{x_n = +1} / Pr{x_n = −1} )
                ≜ L_e^MMSE(x_n) + L(x_n),    (5)
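The decomposition in Eq. (4) can be verified numerically by brute force on a toy example; the sketch below (2-tap channel, short block, all values illustrative) enumerates every sequence x to form L^MAP(x_n) and the extrinsic term with the prior on x_n removed:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
h = np.array([0.8, 0.6])            # toy 2-tap channel (illustrative)
sigma_w, K, n = 0.5, 5, 2           # noise std, block length, tested position

L_prior = rng.uniform(-2, 2, K)     # priors L(x_i)
p1 = 1 / (1 + np.exp(-L_prior))     # Pr{x_i = +1}
x_true = np.where(rng.random(K) < p1, 1.0, -1.0)
z = np.convolve(x_true, h) + sigma_w * rng.standard_normal(K + 1)

def likelihood(x):
    """p(z | x) up to a constant (Gaussian noise)."""
    return np.exp(-np.sum((z - np.convolve(x, h)) ** 2) / (2 * sigma_w**2))

num = den = num_e = den_e = 0.0
for bits in product([-1.0, 1.0], repeat=K):
    x = np.array(bits)
    lk = likelihood(x)
    pr = np.prod(np.where(x > 0, p1, 1 - p1))        # Pr{x}
    pr_e = pr / (p1[n] if x[n] > 0 else 1 - p1[n])   # prior on x_n removed
    if x[n] > 0:
        num += lk * pr
        num_e += lk * pr_e
    else:
        den += lk * pr
        den_e += lk * pr_e

L_map = np.log(num / den)       # posterior LLR L^MAP(x_n)
L_e = np.log(num_e / den_e)     # extrinsic LLR L_e^MAP(x_n)
```

By construction the prior of x_n factors out of the sums, so L_map equals L_e + L_prior[n] up to floating-point error, as claimed in Eq. (4).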

[Figure 3 shows the MMSE-LE structure: z → Linear Filter c[n], with the mean m_n^z subtracted and the mean m_n^x added to form x̂_n → Mapper → L_e^MMSE(x_n) and L^MMSE(x_n | z), with prior input L(x_n).]

Figure 3: Structure of the MMSE linear equalizer.


whi h an be viewed as an approximation of LMAP (xn ). LMAP
(xn ) is not a fun tion of the
e
MAP
LLR L(xn ), whi h is added separately to yield L
(xn ). A similar onstraint is applied to
the SISO equalizer produ ing LMMSE
(
x
).
To
obtain
LMMSE (xn ), we add LMMSE
(xn ) to
n
e
e
MMSE
L(xn ) and require Le
(xn ) (and thus already x^n ) to not be a fun tion of L(xn ). We de ne
an estimator ful lling the onstraint of not being dependent on prior information about the
quantity to be estimated, to be well-behaved.
For the sequel, some frequently used notation is introdu ed. Ve tors are written in
bold letters and onsidered to be olumn ve tors. Matri es are spe i ed by bold apital
letters. Time-varying quantities are augmented with a time index, e.g. n, as subs ript. The
ij matrix 1ij ontains all ones, 1i is a length i olumn ve tor. The quantities 0ij and 0i
ontain all zeros. The matrix Ii is an ii identity matrix. The ovarian e operator Cov (x; y)
equals E fx yH g E fxgE fyH g, where H is the Hermitian operator. For the de nition of
def
variables we use the notation , instead of = . The omputational omplexity is des ribed
with the O() notation taken from [6.

3 SISO Equalizer using Linear MMSE Estimation

3.1 Linear equalizer as estimator

The SISO equalizer with a linear equalizer as estimator to compute x̂_n (MMSE-LE) is depicted in Figure 3. Part of the estimator is a linear filter with time-varying coefficients c_{k,n}, k = −N1, −N1+1, ..., N2, of length N = N1 + N2 + 1. Using the vectors

, [ xn+M1+N1 xn+M1+N1 1    xn M2 N2 T ;
T
wn , [ wn+N1 wn+N1 1    wn N2 ;
T
zn , [ zn+N1 zn+N1 1    zn N2 ;
z
z
z
z
T
mn , [ mn+N1 mn+N1 1    mn N2 ;


T

n , [ N1 ;n N1 +1;n    N2 ;n ;
xn

(6)

and the N × (N + M − 1) matrix

    H ≜ [ h_{−M1}  h_{−M1+1} ⋯ h_{M2}  0  ⋯  0 ;
          0  h_{−M1}  h_{−M1+1} ⋯ h_{M2}  ⋯  0 ;
          ⋮        ⋱                       ⋮ ;
          0  ⋯  0  h_{−M1}  h_{−M1+1} ⋯ h_{M2} ],    (7)

i.e., the banded Toeplitz convolution matrix whose i-th row contains the channel response shifted by i − 1 columns,
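The matrix of Eq. (7) is simply convolution written row-wise; a small sketch (illustrative sizes) that builds H and checks H x against NumPy's correlation:

```python
import numpy as np

def conv_matrix(h, N):
    """N x (N+M-1) matrix H of Eq. (7): row i holds h shifted by i columns."""
    M = h.size
    H = np.zeros((N, N + M - 1))
    for i in range(N):
        H[i, i:i + M] = h
    return H

h = np.array([0.3, 0.9, 0.4, 0.2])    # h_{-M1}..h_{M2} (illustrative)
N = 5
H = conv_matrix(h, N)
x = np.arange(1.0, N + h.size)        # a length-(N+M-1) symbol window

# Each row of H is an inner product with a shifted copy of h, which is
# exactly the 'valid' part of the sliding correlation of x with h.
ref = np.correlate(x, h, mode="valid")
```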

the estimator output x̂_n according to Figure 3 can be expressed in matrix-vector form:

    z_n = H x_n + w_n,
    x̂_n = c_n^H ( z_n − m_n^z ) + m_n^x.    (8)

The quantity x̂_n is the linear MMSE estimate, defined in Appendix A, of the symbol x_n given the observation z_n. From Eqs. (43) and (44) we can identify expressions for the parameters c_n, m_n^z, and m_n^x:

    c_n = Cov(z_n, z_n)^{−1} Cov(z_n, x_n) = ( σ_w² I_N + H Cov(x_n, x_n) H^H )^{−1} H Cov(x_n, x_n),
    m_n^z = E{z_n} = H E{x_n},
    m_n^x = E{x_n},    (9)

given the noise statistics E{w_n} = 0_N and Cov(w_n, w_n) = σ_w² I_N. With the standard assumption that the x_n are independent and take on all alphabet symbols equally likely, we have E{x_n} = 0 and Cov(x_n, x_m) = δ[n − m], given the symbol alphabet {+1, −1}. This yields the well-known time-invariant MMSE linear equalizer solution [7]:

    c_UP ≜ c_n = ( σ_w² I_N + H H^H )^{−1} H u,   m_n^z = 0_N,   m_n^x = 0,
    x̂_n = c_UP^H z_n,    (10)

where u is an (N + M − 1) × 1 unit vector defined as

    u ≜ [ 0_{1×(N1+M1)}  1  0_{1×(N2+M2)} ]^T.
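Under uniform priors the filter of Eq. (10) is fixed; a sketch (illustrative 3-tap channel, filter lengths, and noise level) that forms c_UP and equalizes a BPSK burst:

```python
import numpy as np

rng = np.random.default_rng(2)
h = np.array([0.3, 0.9, 0.3])    # h_{-M1}..h_{M2}, M1 = M2 = 1 (illustrative)
M1 = M2 = 1
N1 = N2 = 4
N, M = N1 + N2 + 1, h.size
sigma_w = 0.2

# Convolution matrix H (Eq. (7)) and unit vector u picking out x_n
H = np.zeros((N, N + M - 1))
for i in range(N):
    H[i, i:i + M] = h
u = np.zeros(N + M - 1)
u[N1 + M1] = 1.0

# UP solution, Eq. (10): c_UP = (sigma_w^2 I + H H^H)^{-1} H u
c_up = np.linalg.solve(sigma_w**2 * np.eye(N) + H @ H.T, H @ u)

# Apply x_hat_n = c_UP^H z_n, with z_n = [z_{n+N1} ... z_{n-N2}]^T
x = rng.choice([-1.0, 1.0], size=400)
z = np.convolve(x, h, mode="same") + sigma_w * rng.standard_normal(x.size)
idx = np.arange(N2, x.size - N1)
x_hat = np.array([c_up @ z[n - N2:n + N1 + 1][::-1] for n in idx])
ber = np.mean(np.sign(x_hat) != x[idx])
```

The reversed slice implements the descending window ordering of Eq. (6).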

The resulting specific parameter set c_UP is called the uniform prior (UP) solution, since the assumption of x_n being equally likely +1 or −1 is equivalent to prior information L(x_n) given as

    L(x_n) = 0,  ∀n.    (11)

In general we have L(x_n) ∈ ℝ. In this case the statistics of x_n vary with n, resulting in time-varying parameters (c_n, m_n^z, m_n^x). Taking the prior information L(x_n) into account and assuming independence of the x_n, the first- and second-order statistics of x_n are as follows:

    E{x_n} = Pr{x_n = +1} · (+1) + Pr{x_n = −1} · (−1)
           = exp(L(x_n)) / (1 + exp(L(x_n))) − 1 / (1 + exp(L(x_n)))
           = tanh( L(x_n)/2 ),

    Cov(x_n, x_m) = 1 − tanh²( L(x_n)/2 )  if n = m,  and 0 otherwise.    (12)
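Eq. (12) is a one-line mapping from LLRs to symbol statistics; a quick sketch (illustrative LLR values) checking tanh(L/2) against the probability form:

```python
import numpy as np

L = np.array([-3.0, -0.5, 0.0, 1.2, 4.0])   # illustrative LLRs L(x_n)

p1 = np.exp(L) / (1 + np.exp(L))            # Pr{x_n = +1}
mean_from_prob = p1 * (+1) + (1 - p1) * (-1)
mean = np.tanh(L / 2)                       # E{x_n}, Eq. (12)
var = 1 - mean**2                           # Cov(x_n, x_n), Eq. (12)
```

L(x_n) = 0 gives mean 0 and variance 1, recovering the uniform-prior case of Eq. (11).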

[Figure 4 shows the MMSE-DFE structure: as in Figure 3, z passes through the forward Linear Filter c[n] with means m_n^z and m_n^x; additionally, past estimates x̂_{n−1} are sliced, delayed, filtered by the backward Linear Filter b[n] (with mean m_{n−1}^d), and fed back, with prior input L(x_n).]

Figure 4: Structure of the MMSE decision feedback equalizer.


As mentioned in Section 2, x̂_n should not depend on L(x_n) in order to satisfy the well-behavedness constraint. This is achieved by setting L(x_n) to 0 while computing x̂_n. Taking this modification into account, the estimate x̂_n can be obtained from Eqs. (9), (12), and some straightforward manipulation as follows:

    m_n^x = tanh( L(x_n)/2 ),
    m_n^x ≜ [ m_{n+M1+N1}^x  m_{n+M1+N1−1}^x ⋯ m_{n−M2−N2}^x ]^T  (the vector of these means),
    D_n ≜ diag([ (1 − (m_{n+M1+N1}^x)²)  (1 − (m_{n+M1+N1−1}^x)²) ⋯ (1 − (m_{n−M2−N2}^x)²) ]),
    s ≜ H u,
    c_n = ( σ_w² I_N + H D_n H^H + (m_n^x)² s s^H )^{−1} s,
    x̂_n = c_n^H ( z_n − H m_n^x + m_n^x s ).    (13)

The equation set (13) describes the computation that has to be performed for every symbol in the received sequence.
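One time step of Eq. (13) can be written out directly; the sketch below (illustrative channel, sizes, and priors) uses a plain matrix inverse, which is exactly what Section 5 later replaces with low-complexity recursions:

```python
import numpy as np

rng = np.random.default_rng(3)
h = np.array([0.3, 0.9, 0.3])          # illustrative channel, M1 = M2 = 1
M1 = M2 = 1
N1 = N2 = 3
N, M = N1 + N2 + 1, h.size
sigma_w = 0.2

H = np.zeros((N, N + M - 1))           # Eq. (7)
for i in range(N):
    H[i, i:i + M] = h
u = np.zeros(N + M - 1)
u[N1 + M1] = 1.0                       # position of x_n in the window
s = H @ u

K = N + M - 1                          # symbols affecting this window
L_prior = rng.uniform(-4, 4, K)        # illustrative priors for the window
m = np.tanh(L_prior / 2)               # symbol means, Eq. (12)
x_win = np.where(rng.random(K) < (1 + m) / 2, 1.0, -1.0)
z_win = H @ x_win + sigma_w * rng.standard_normal(N)

m_n = m[N1 + M1]                       # scalar mean of the symbol x_n
D = np.diag(1 - m**2)                  # Eq. (13)
R = sigma_w**2 * np.eye(N) + H @ D @ H.T + m_n**2 * np.outer(s, s)
c = np.linalg.solve(R, s)              # filter c_n
x_hat = c @ (z_win - H @ m + m_n * s)  # estimate of x_n, Eq. (13)
mu = c @ s                             # c_n^H s, used again in Section 4
```

The rank-one term (m_n^x)² s s^H restores the variance of x_n to 1, implementing the well-behavedness constraint (L(x_n) set to 0 while computing x̂_n).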
3.2 Decision feedback equalizer as estimator

The SISO equalizer with a decision feedback equalizer as estimator (MMSE-DFE) is depicted in Figure 4. One component of the estimator is the forward filter with time-varying coefficients c_{k,n}, k = −N1, −N1+1, ..., N2, of length N = N1 + N2 + 1; the other is the strictly causal backward filter with time-varying coefficients b_{k,n}, k = 1, 2, ..., Nb, of length Nb.

The backward path contains previous estimates x̂_{n−i}, i > 0, which are discretized to x̂_n^d ∈ {+1, −1} (BPSK) by an appropriate slicer h(·) prior to feedback, e.g.,

    x̂_n^d = h(x̂_n) = +1 if x̂_n ≥ 0,  and  −1 if x̂_n < 0.

Without forward filtering, the maximum useful length of the backward filter, Nb, is M2, since only past decided estimates, x̂_{n−i}^d, i > 0, are available to remove ISI. If the forward filter adds a further delay of about N2 symbols, the ISI caused by N2 + M2 symbols can be processed in the backward filter. Hence, we restrict the choice of Nb to

    Nb ≤ N2 + M2,    (14)

where M − 1 is the maximal value of Nb. Using the quantities already defined for the MMSE-LE and the vectors

    x̂_n^d ≜ [ x̂_{n−1}^d  x̂_{n−2}^d ⋯ x̂_{n−Nb}^d ]^T,
    m_n^d ≜ [ m_{n−1}^d  m_{n−2}^d ⋯ m_{n−Nb}^d ]^T,
    b_n ≜ [ b_{1,n}  b_{2,n} ⋯ b_{Nb,n} ]^T,    (15)

the estimator output x̂_n is expressed in matrix-vector form according to Figure 4 as follows:

    z_n = w_n + H x_n,
    x̂_n = c_n^H ( z_n − m_n^z ) − b_n^H ( x̂_n^d − m_n^d ) + m_n^x.    (16)

The quantity x̂_n is now the linear MMSE estimate of x_n given the observations z_n and x̂_n^d. From Eqs. (43) and (44) we can identify expressions for the parameters c_n, b_n, m_n^z, m_n^d, and m_n^x:

    [ c_n ; −b_n ] = [ σ_w² I_N + H Cov(x_n, x_n) H^H ,  H Cov(x_n, x̂_n^d) ;
                       Cov(x̂_n^d, x_n) H^H ,  Cov(x̂_n^d, x̂_n^d) ]^{−1} [ H Cov(x_n, x_n) ;  Cov(x̂_n^d, x_n) ],
    m_n^z = E{z_n} = H E{x_n},
    m_n^d = E{x̂_n^d},
    m_n^x = E{x_n}.    (17)

The statistics of x̂_n^d are identical to the statistics of x_n, since we assume that the MMSE-DFE is error-free:

    x̂_n^d = x_n  ∀n   ⟹   E{x̂_n^d} = E{x_n},   Cov(x̂_n^d, x̂_m^d) = Cov(x_n, x_m).    (18)

With the particular choice of the MMSE-DFE backward filter length Nb in Eq. (14), we have

    Cov(x_n, x̂_n^d) = [ 0_{(M1+N1+1)×Nb} ;  Cov(x̂_n^d, x̂_n^d) ;  0_{(M2+N2−Nb)×Nb} ].

By introducing

    J ≜ Cov(x̂_n^d, x̂_n^d)^{−1} Cov(x̂_n^d, x_n) = [ 0_{Nb×(M1+N1+1)}  I_{Nb}  0_{Nb×(M2+N2−Nb)} ],

the MMSE-DFE filter parameters can be written as follows:

    c_n = ( σ_w² I_N + H ( Cov(x_n, x_n) − Cov(x_n, x̂_n^d) J ) H^H )^{−1} H Cov(x_n, x_n),
    b_n = J H^H c_n,

which can be verified from Eq. (17). We see that the DFE output x̂_n is thus only a function of the forward filter parameters. Rewriting Eq. (16) with this finding yields

    m_n ≜ [ m_{n+M1+N1}^x ⋯ m_n^x  x̂_{n−1}^d ⋯ x̂_{n−Nb}^d  m_{n−Nb−1}^x ⋯ m_{n−M2−N2}^x ]^T,
    x̂_n = c_n^H ( z_n − m_n^z ) − c_n^H H J^H ( x̂_n^d − m_n^d ) + m_n^x = c_n^H ( z_n − H m_n ) + m_n^x.    (19)

Also for the MMSE-DFE, we present at first the common time-invariant (UP) solution, where the symbols x_n are assumed to take on the values +1 and −1 with equal probability. Using Eq. (12) with prior information L(x_n) as in Eq. (11) yields

    c_UP ≜ c_n = ( σ_w² I_N + H diag([ 1_{1×(M1+N1+1)}  0_{1×Nb}  1_{1×(M2+N2−Nb)} ]) H^H )^{−1} H u,
    m_n^z = 0_N,   m_n^d = 0_{Nb},   m_n^x = 0,
    m_n = [ 0_{1×(M1+N1+1)}  x̂_{n−1}^d ⋯ x̂_{n−Nb}^d  0_{1×(M2+N2−Nb)} ]^T,
    x̂_n = c_UP^H ( z_n − H m_n ).    (20)

For general priors L(x_n) ∈ ℝ we compute the statistics of x_n and x̂_n^d according to Eq. (12). The algorithm to compute x̂_n with an MMSE-DFE including the well-behaved constraint is now as follows:

    N2 = M1,   Nb ≤ M − 1,   N1 = N − M1 − 1,
    m_n^x = tanh( L(x_n)/2 ),
    m_n = [ m_{n+M1+N1}^x ⋯ m_n^x  x̂_{n−1}^d ⋯ x̂_{n−Nb}^d  m_{n−Nb−1}^x ⋯ m_{n−M2−N2}^x ]^T,
    D_n ≜ diag([ (1 − (m_{n+M1+N1}^x)²) ⋯ (1 − (m_{n+1}^x)²)  1  0_{1×Nb}  (1 − (m_{n−Nb−1}^x)²) ⋯ (1 − (m_{n−M2−N2}^x)²) ]),
    s = H u,
    c_n = ( σ_w² I_N + H D_n H^H )^{−1} s,
    x̂_n = c_n^H ( z_n − H m_n + m_n^x s ),
    x̂_n^d = h( x̂_n ).    (21)

4 Computing the SISO equalizer output

The estimation step yields the estimates x̂_n of the channel input symbols x_n. From these, a mapping derived in this section provides the SISO equalizer output L_e^MMSE(x_n), which requires the likelihood functions p(x̂_n | x_n = k), k ∈ {−1, +1}. We will use the conditional pdfs f_{x̂_n | x_n=k}(x) to derive them. The prior information L(x_n) is treated as a given parameter set for every estimate x̂_n to be computed, which specifies the occurrence probabilities of all values of the symbols x_n affecting x̂_n. With this approach we have time-varying pdfs f_{x̂_n | x_n=k}(x) as a function of the prior information L(x_i), i = n−M2−N2, n−M2−N2+1, ..., n+M1+N1. An exact computation of f_{x̂_n | x_n=k}(x) requires infeasible complexity for each received symbol [4]. We follow the approach of Wang and Poor [8] in deriving an approximate expression for f_{x̂_n | x_n=k}(x) and L_e^MMSE(x_n):

    f_{x̂_n | x_n=+1}(x) ≈ N( μ_n^{(+1)}, σ_n² ),
    f_{x̂_n | x_n=−1}(x) ≈ N( μ_n^{(−1)}, σ_n² ).

The time-varying first- and second-order statistics μ_n^{(+1)}, μ_n^{(−1)}, and σ_n² are obtained using Eqs. (13) (MMSE-LE) or (21) (MMSE-DFE), respectively. The result for the MMSE-LE is

    μ_n^{(+1)} = E{x̂_n | x_n = +1} = c_n^H ( E{w_n} + H E{x_n | x_n = +1} − H m_n^x + m_n^x s ) = c_n^H s,
    μ_n^{(−1)} = E{x̂_n | x_n = −1} = −c_n^H s,
    σ_n² = E{x̂_n x̂_n* | x_n = +1} − |μ_n^{(+1)}|² = E{x̂_n x̂_n* | x_n = −1} − |μ_n^{(−1)}|²
         = c_n^H ( σ_w² I_N + H D_n H^H + (m_n^x)² s s^H ) c_n − |μ_n^{(+1)}|² = c_n^H s − |μ_n^{(+1)}|²,    (22)

using Eqs. (13) and (12). Similarly, the result for the MMSE-DFE is

    μ_n^{(+1)} = c_n^H ( H E{x_n | x_n = +1} − H E{m_n | x_n = +1} ) = c_n^H s,
    μ_n^{(−1)} = −c_n^H s,
    σ_n² = c_n^H ( σ_w² I_N + H D_n H^H − s s^H ) c_n = c_n^H s − |μ_n^{(+1)}|²,    (23)

using Eqs. (21), (12), and (18). The SISO equalizer output within the chosen approximation is finally

    L_e^MMSE(x_n) = ln( f_{x̂_n | x_n=+1}(x̂_n) / f_{x̂_n | x_n=−1}(x̂_n) ) = 2 x̂_n μ_n^{(+1)} / σ_n² = 2 x̂_n / ( 1 − s^H c_n ),    (24)
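With μ_n^{(+1)} = c_n^H s and σ_n² = c_n^H s − |μ_n^{(+1)}|², the mapping of Eq. (24) collapses to 2x̂_n/(1 − s^H c_n); a numeric sketch (illustrative values) checking that the Gaussian log-ratio and the two closed forms agree:

```python
import numpy as np

def npdf(x, mean, var):
    """Gaussian pdf N(mean, var) evaluated at x."""
    return np.exp(-(x - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

mu = 0.73                    # plays the role of mu_n^{(+1)} = c_n^H s, in (0, 1)
var = mu - mu**2             # sigma_n^2 = c_n^H s - |mu_n^{(+1)}|^2, Eq. (22)
x_hat = 0.4                  # some symbol estimate

L_e_ratio = np.log(npdf(x_hat, mu, var) / npdf(x_hat, -mu, var))  # pdf ratio
L_e_gauss = 2 * x_hat * mu / var                                  # 2 x_hat mu / sigma^2
L_e_short = 2 * x_hat / (1 - mu)                                  # simplified Eq. (24)
```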

using Eq. (5). For the MMSE-LE, it is shown in [8] that for any given prior information the signal-to-noise ratio (SNR), given by

    E{ |x̂_n|² | x_n = +1 } / Var{ x̂_n | x_n = +1 } = E{ |x̂_n|² | x_n = −1 } / Var{ x̂_n | x_n = −1 } = s^H c_n / ( 1 − s^H c_n ),    (25)

is greater than the SNR with uniform priors as in Eq. (11), with a maximum if the prior information resembles the transmitted symbols x_n, i.e., L(x_n) = ±∞.

5 Implementation

In this section we examine the computational complexity of the algorithms for the SISO equalizer in the presence of general input priors L(x_n) ∈ ℝ. The most expensive calculation in computing L_e^MMSE(x_n) is the inversion of an N×N matrix (see Eqs. (13) and (21)) for each received symbol z_n. A direct implementation of the inversion requires an order of complexity that is cubic in the matrix dimension. An exact low-complexity implementation requiring quadratic order and an approximate implementation requiring linear order are derived in this section.

5.1 Exact Low-Complexity Implementation

5.1.1 SISO equalizer with MMSE-LE

Given Eq. (13) to compute the estimate x̂_n with an MMSE-LE, the time-varying matrix to be inverted is

    R_n ≜ σ_w² I_N + H D_n H^H + (m_n^x)² s s^H.    (26)

A low-complexity approach for computing L_e^MMSE(x_n) can be obtained by exploiting the inherent structural properties of the quantities in Eq. (13). The main idea is to express the parameter vector c_{n+1} at time step n+1, which is computed as R_{n+1}^{−1} s, as a function of c_n and the inverse R_n^{−1}. To do this, the following partitioning scheme is introduced:

    c_n ≜ [ c_O ; γ_O ],   c_{n+1} ≜ [ γ_N ; c_N ],
    R_n ≜ [ R_O  r_O ; r_O^H  ρ_O ],   R_{n+1} ≜ [ ρ_N  r_N^H ; r_N  R_N ],
    s ≜ [ s_O ; σ_O ] ≜ [ σ_N ; s_N ],    (27)

    R_n = [ R̃_O  r̃_O ; r̃_O^H  ρ̃_O ] + (m_n^x)² [ s_O ; σ_O ] [ s_O ; σ_O ]^H,
    R_{n+1} = [ ρ̃_N  r̃_N^H ; r̃_N  R̃_N ] + (m_{n+1}^x)² [ σ_N ; s_N ] [ σ_N ; s_N ]^H,    (28)

where R_i, R̃_i, i ∈ {O, N}, are (N−1) × (N−1) matrices, c_i, s_i, r_i, r̃_i are length-(N−1) column vectors, and γ_i, σ_i, ρ_i, ρ̃_i are scalars. The subscript O denotes quantities at the "old" or present time step n and N the "next" time step n+1, respectively. The derivation of an efficient algorithm arises from noting that the submatrices R̃_O and R̃_N are identical. A similar partitioning scheme for the inverses of R_n and R_{n+1} is introduced:

    R_n^{−1} ≜ A_n ≜ [ A_O  a_O ; a_O^H  α_O ],   R_{n+1}^{−1} ≜ A_{n+1} ≜ [ α_N  a_N^H ; a_N  A_N ],    (29)

where we use the fact that the inverse of a Hermitian matrix is also Hermitian. The inverse A_n is required to compute c_n and is therefore stored and updated at every time step. Next we derive a scheme to compute R̃_O^{−1} from A_n and then A_{n+1} from R̃_N^{−1}, using the identity R̃_O = R̃_N.
The inverse of the submatrix R_O of R_n is expressed in terms of components of A_n by solving R_n A_n = I_N using Eqs. (28) and (29):

    R_O A_O + r_O a_O^H = I_{N−1},   R_O a_O + r_O α_O = 0_{N−1}
    ⟹   R_O^{−1} = A_O − a_O a_O^H / α_O.    (30)

The matrix R_O is the sum of R̃_O and an outer vector product as shown in Eq. (28). Applying the matrix inversion lemma [9] gives

    R̃_O^{−1} = ( R_O − (m_n^x)² s_O s_O^H )^{−1} = R_O^{−1} + ( R_O^{−1} s_O s_O^H R_O^{−1} ) / ( (m_n^x)^{−2} − s_O^H R_O^{−1} s_O ).    (31)
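Eq. (31) is the Sherman–Morrison form of the matrix inversion lemma; a quick check on a random positive definite test matrix (sizes and the factor playing the role of (m_n^x)² are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
N = 5
B = rng.standard_normal((N, N))
R_O = B @ B.T + N * np.eye(N)          # positive definite test matrix
s_O = rng.standard_normal(N)
m2 = 0.09                              # plays the role of (m_n^x)^2

# Rank-one "downdate" of R_O and its inverse via Eq. (31)
R_tilde = R_O - m2 * np.outer(s_O, s_O)
R_O_inv = np.linalg.inv(R_O)
v = R_O_inv @ s_O
R_tilde_inv = R_O_inv + np.outer(v, v) / (1.0 / m2 - s_O @ v)
```

Only one matrix-vector product and one outer product are needed once R_O^{−1} is known, which is where the quadratic (rather than cubic) per-symbol complexity comes from.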

The expression yielding c_n in Eq. (13) can be written using Eqs. (27) and (29) as

    [ c_O ; γ_O ] = [ A_O  a_O ; a_O^H  α_O ] [ s_O ; σ_O ].    (32)

Using Eq. (30), we write R_O^{−1} s_O in Eq. (31) in terms of c_O, γ_O, a_O, and α_O and obtain an efficient scheme to compute R̃_O^{−1}:

    v_O ≜ R_O^{−1} s_O = A_O s_O − ( a_O a_O^H / α_O ) s_O = c_O − ( γ_O / α_O ) a_O,
    R̃_O^{−1} = R_O^{−1} + ( (m_n^x)² v_O v_O^H ) / ( 1 − (m_n^x)² s_O^H v_O ),    (33)

where the vector v_O was introduced to simplify notation. The partitioning for R_{n+1} in Eq. (28) gives an expression to compute R_N^{−1} similar to Eq. (31):

    v_N ≜ R̃_N^{−1} s_N,
    R_N^{−1} = ( R̃_N + (m_{n+1}^x)² s_N s_N^H )^{−1} = R̃_N^{−1} − ( (m_{n+1}^x)² v_N v_N^H ) / ( 1 + (m_{n+1}^x)² s_N^H v_N ),    (34)
where the vector v_N was introduced. Using the partitioning for R_{n+1} and A_{n+1} in Eq. (29), we express A_N, a_N, and α_N in terms of R_N, r_N, and ρ_N:

    r′_N ≜ R_N^{−1} r_N,
    α_N = ( ρ_N − r_N^H r′_N )^{−1},
    a_N = −α_N r′_N,
    A_N = R_N^{−1} + α_N r′_N r′_N^H,    (35)

where we ordered the equations to optimize the computation by using the quantity r′_N and already computed components of A_{n+1}. Similar to Eq. (32), we express the parameter vector c_{n+1} using Eq. (27) and Eq. (29) as
    [ γ_N ; c_N ] = [ α_N  a_N^H ; a_N  A_N ] [ σ_N ; s_N ].    (36)

The matrix-vector product A_N s_N is already available using the vector v_N. From Eqs. (34), (35), and (36) we obtain

    γ_N = α_N σ_N + a_N^H s_N = α_N ( σ_N − r′_N^H s_N ),
    c_N = a_N σ_N + A_N s_N = v_N − ( (m_{n+1}^x)² ( s_N^H v_N ) / ( 1 + (m_{n+1}^x)² s_N^H v_N ) ) v_N + α_N ( r′_N^H s_N − σ_N ) r′_N.    (37)

In Eq. (35) we need to incorporate the quantities ρ_N and r_N. They are derived using Eq. (28) and Eq. (13):

    [ ρ_N ; r_N ] = ( σ_w² I_N + H D_{n+1} H^H ) [ 1 ; 0_{N−1} ] + (m_{n+1}^x)² σ_N s.    (38)

The update algorithm is finished by computing the estimate x̂_{n+1} using Eq. (13) and finally L_e^MMSE(x_{n+1}) using Eq. (24). To bootstrap the time-recursive update algorithm, an initialization is required, e.g., by computing A_1 and c_1 using Eq. (13) at the starting time step n = 1.

The matrix-vector multiplication H m_{n+1}^x in Eq. (13) is part of the convolution of the sequence {m_n^x} with the channel response h[n]. Hence, it can be implemented recursively by shifting the vector H m_n^x, yielding O(M) complexity for this operation:

    m_n^z = Σ_{k=−M1}^{M2} h_k m_{n−k}^x,
    H m_n^x = [ m_{n+N1}^z  m_{n+N1−1}^z ⋯ m_{n−N2}^z ]^T,
    H m_{n+1}^x = [ m_{n+1+N1}^z  m_{n+N1}^z ⋯ m_{n+1−N2}^z ]^T,

since only one new mean value m_n^z has to be computed per received symbol. The quantity H D_{n+1} H^H [ 1  0_{1×(N−1)} ]^T in Eq. (38) is obtained by first computing D_{n+1} H^H [ 1  0_{1×(N−1)} ]^T and then multiplying with H, which is an O(M²) operation. Overall we have a dominant O(N² + M²) complexity per received symbol z_n, given the estimator filter length N and the channel length M.

The update step of the recursion algorithm is summarized in Figure 5.
5.1.2 SISO equalizer with MMSE-DFE

The derivation of the recursive update algorithm for the MMSE-DFE is closely related to the MMSE-LE approach. We restrict ourselves to the setup

    Nb = N2 + M2 = M − 1,    (39)

which is a natural choice, since the complete ISI caused by the M − 1 interfering symbols is included. However, the following update algorithm requires this choice.

Given Eqs. (21) and (39), the time-varying matrix to be inverted now is

    R_n ≜ σ_w² I_N + H D_n H^H.

With the partitioning scheme

    c_n ≜ [ c_O ; γ_O ],   c_{n+1} ≜ [ γ_N ; c_N ],
    R_n ≜ [ R_O  r_O ; r_O^H  ρ_O ] ≜ [ R̃_O  r̃_O ; r̃_O^H  ρ̃_O ] + [ s_O ; σ_O ] [ s_O ; σ_O ]^H,
    R_{n+1} ≜ [ ρ_N  r_N^H ; r_N  R_N ] ≜ [ ρ̃_N  r̃_N^H ; r̃_N  R̃_N ] + (m_{n+1}^x)² [ σ_N ; s_N ] [ σ_N ; s_N ]^H,
    s ≜ [ s_O ; σ_O ] ≜ [ σ_N ; s_N ],    (40)
Input:
- received symbols z_n, prior information L(x_n),
- channel and receiver characteristics h[n], M1, M2, σ_w²,
- equalizer parameters N1, N2,
- equalizer filter parameters c_{n−1} and inverse of the covariance matrix R_{n−1}^{−1} at time step n−1.

Initialization:
- compute M, N, H, s, s_O, σ_N, s_N, m_{n−1}^x, m_n^x, the mean vector m_n^x, and D_n,
- define working variables c = c_{n−1}, A = R_{n−1}^{−1}, Ā = 0_{(N−1)×(N−1)}, a = v = r = r′ = 0_{N−1}, α = γ = ρ = 0.

Recursion step (Eqs. (30)–(38)):
- Partition: [ A_O  a_O ; a_O^H  α_O ] ← A,  [ c_O ; γ_O ] ← c,
- v ← c_O − (γ_O/α_O) a_O,   Ā ← A_O − a_O a_O^H / α_O,
- Ā ← Ā + (m_{n−1}^x)² v v^H / ( 1 − (m_{n−1}^x)² s_O^H v ),
- v ← Ā s_N,   Ā ← Ā − (m_n^x)² v v^H / ( 1 + (m_n^x)² s_N^H v ),
- [ ρ ; r ] ← ( σ_w² I_N + H D_n H^H ) [ 1 ; 0_{N−1} ] + (m_n^x)² σ_N s,
- r′ ← Ā r,   α ← ( ρ − r^H r′ )^{−1},   a ← −α r′,   Ā ← Ā + α r′ r′^H,
- γ ← α σ_N + a^H s_N,   c_N ← a σ_N + Ā s_N,
- Assemble: A ← [ α  a^H ; a  Ā ],   c ← [ γ ; c_N ].

Output:
- extrinsic information L_e^MMSE(x_n) ← ( 2 / ( 1 − s^H c ) ) · c^H ( z_n − H m_n^x + m_n^x s ),
- equalizer filter parameters c_n ← c and inverse of the covariance matrix R_n^{−1} ← A at time step n.

Figure 5: O(N² + M²) recursive update step of the exact SISO equalizer implementation (MMSE-LE)

where R_i, R̃_i, i ∈ {O, N}, are (N−1) × (N−1) matrices, c_i, s_i, r_i, r̃_i are length-(N−1) column vectors, and γ_i, σ_i, ρ_i, ρ̃_i are scalars, the MMSE-LE solution to update c_n can be applied directly with slight changes in Eqs. (31) and (33) (the factor (m_n^x)² is replaced by 1). The quantities ρ_N and r_N required in Eq. (35) are as follows for the MMSE-DFE:

    [ ρ_N ; r_N ] = ( σ_w² I_N + H D_{n+1} H^H ) [ 1 ; 0_{N−1} ].    (41)

Once c_{n+1} is obtained, all that is left to compute is L_e^MMSE(x_{n+1}) using Eqs. (21) and (24). The matrix-vector multiplication H m_{n+1} in Eq. (21) is computed recursively from H m_n similar to the MMSE-LE approach. Including the constraint on Nb in Eq. (39), the update algorithm is as follows:

    p_n ≜ [ p_{1,n}  p_{2,n} ⋯ p_{N,n} ]^T ≜ H m_n = H [ m_{n+N−1}^x ⋯ m_n^x  x̂_{n−1}^d ⋯ x̂_{n−Nb}^d ]^T,
    p_{0,n} ≜ [ h_{−M1} ⋯ h_{M2}  0_{1×(N−1)} ] [ m_{n+N}^x ⋯ m_{n+1}^x  x̂_n^d ⋯ x̂_{n+1−Nb}^d ]^T,
    H m_{n+1} = H [ m_{n+N}^x ⋯ m_{n+1}^x  x̂_n^d ⋯ x̂_{n+1−Nb}^d ]^T
              = [ p_{0,n}  p_{1,n} ⋯ p_{N−1,n} ]^T + H [ 0_{1×N}  ( x̂_n^d − m_n^x )  0_{1×(M−2)} ]^T.

Per received symbol, only the quantities p_{0,n} and H [ 0_{1×N}  ( x̂_n^d − m_n^x )  0_{1×(M−2)} ]^T have to be computed, which are O(M) operations. The quantity H D_{n+1} H^H [ 1  0_{1×(N−1)} ]^T in Eq. (41) is computed from right to left similar to the MMSE-LE approach, which is an O(M²) operation. For the entire update algorithm we have O(N² + M²) complexity per received symbol z_n.

The update step of the recursion algorithm is summarized in Figure 6.
5.2 Approximate low-complexity implementation

We now outline a scheme to compute x̂_n in Eqs. (13) and (21) approximately. The main idea of this approach is to not compute the filter parameter vector c_n for each received symbol z_n as a function of the input priors L(x_n). Instead, we use the time-invariant UP parameter vector c_UP, given in Eqs. (10) (MMSE-LE) and (20) (MMSE-DFE), which has to be computed only once. The effect of the prior information on the filter parameters is thus simply neglected. However, the prior information L(x_n) is still employed to compute the time-varying statistics E{x_n} and Cov(x_n, x_m) as in Eqs. (13) and (21). The estimate x̂_n with this linear estimator, called appMMSE-LE, is obtained as follows:

    c_UP = ( σ_w² I_N + H H^H )^{−1} s,
    x̂_n = c_UP^H ( z_n − H m_n^x + m_n^x s ),

which is an adaptation of the MMSE-LE solution in Eq. (13). The MMSE-DFE solution in Eq. (21) is modified to (appMMSE-DFE)

    m_n = [ m_{n+M1+N1}^x ⋯ m_n^x  x̂_{n−1}^d ⋯ x̂_{n−Nb}^d  m_{n−Nb−1}^x ⋯ m_{n−M2−N2}^x ]^T,
    c_UP = ( σ_w² I_N + H diag([ 1_{1×(M1+N1+1)}  0_{1×Nb}  1_{1×(M2+N2−Nb)} ]) H^H )^{−1} s,
    x̂_n = c_UP^H ( z_n − H m_n + m_n^x s ),
    x̂_n^d = h( x̂_n ).

Input:
- received symbols z_n, prior information L(x_n), previous estimates x̂_i, i < n,
- channel and receiver characteristics h[n], M1, M2, σ_w²,
- equalizer parameter N,
- equalizer forward filter parameters c_{n−1} and inverse of the covariance matrix R_{n−1}^{−1} at time step n−1.

Initialization:
- compute M, Nb ← M − 1, N1 ← N − M1 − 1, N2 ← M1, H, s, s_O, σ_N, s_N, m_n^x, the vector m_n, and D_n,
- define working variables c = c_{n−1}, A = R_{n−1}^{−1}, Ā = 0_{(N−1)×(N−1)}, a = v = r = r′ = 0_{N−1}, α = γ = ρ = 0.

Recursion step (Eqs. (30)–(37) with Eq. (41)):
- Partition: [ A_O  a_O ; a_O^H  α_O ] ← A,  [ c_O ; γ_O ] ← c,
- v ← c_O − (γ_O/α_O) a_O,   Ā ← A_O − a_O a_O^H / α_O,
- Ā ← Ā + v v^H / ( 1 − s_O^H v ),
- v ← Ā s_N,   Ā ← Ā − (m_n^x)² v v^H / ( 1 + (m_n^x)² s_N^H v ),
- [ ρ ; r ] ← ( σ_w² I_N + H D_n H^H ) [ 1 ; 0_{N−1} ],
- r′ ← Ā r,   α ← ( ρ − r^H r′ )^{−1},   a ← −α r′,   Ā ← Ā + α r′ r′^H,
- γ ← α σ_N + a^H s_N,   c_N ← a σ_N + Ā s_N,
- Assemble: A ← [ α  a^H ; a  Ā ],   c ← [ γ ; c_N ].

Output:
- extrinsic information L_e^MMSE(x_n) ← ( 2 / ( 1 − s^H c ) ) · c^H ( z_n − H m_n + m_n^x s ),
- equalizer forward filter parameters c_n ← c and inverse of the covariance matrix R_n^{−1} ← A at time step n.

Figure 6: O(N² + M²) recursive update step of the exact SISO equalizer implementation (MMSE-DFE)

As derived in Sections 5.1.1 and 5.1.2, computing the term H m_n^x is an O(M) operation. The calculation of x̂_n and finally L_e^MMSE(x_n) is of order O(N + M) per received symbol z_n, besides the initial computational load to compute the parameter vector c_UP.
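The appMMSE-LE then amounts to one fixed filter plus per-symbol mean handling; a compact sketch (illustrative channel, sizes, and priors) following the equations above:

```python
import numpy as np

rng = np.random.default_rng(5)
h = np.array([0.3, 0.9, 0.3])           # illustrative channel, M1 = M2 = 1
M1 = M2 = 1
N1 = N2 = 3
N, M = N1 + N2 + 1, h.size
sigma_w, n_sym = 0.2, 100               # noise std, block length

H = np.zeros((N, N + M - 1))            # Eq. (7)
for i in range(N):
    H[i, i:i + M] = h
u = np.zeros(N + M - 1)
u[N1 + M1] = 1.0
s = H @ u

# One-time UP filter: c_UP = (sigma_w^2 I + H H^H)^{-1} s
c = np.linalg.solve(sigma_w**2 * np.eye(N) + H @ H.T, s)

# Data, priors, received samples (padded so every window is defined)
pad = N + M
L_pr = rng.uniform(-4, 4, n_sym + 2 * pad)
m = np.tanh(L_pr / 2)                   # symbol means from the priors
x = np.where(rng.random(n_sym + 2 * pad) < (1 + m) / 2, 1.0, -1.0)
z = np.convolve(x, h, mode="same") + sigma_w * rng.standard_normal(x.size)

x_hat = np.empty(n_sym)
for t in range(n_sym):
    n = pad + t
    zw = z[n - N2:n + N1 + 1][::-1]                 # [z_{n+N1} ... z_{n-N2}]^T
    mw = m[n - M2 - N2:n + M1 + N1 + 1][::-1]       # matching mean window
    x_hat[t] = c @ (zw - H @ mw + m[n] * s)         # appMMSE-LE estimate
```

Only the sliding mean window changes per symbol; the filter c is never recomputed, which is the source of the O(N + M) per-symbol cost.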
For the mapping from x̂_n to L_e^MMSE(x_n), we again approximate the pdfs f_{x̂_n | x_n=+1}(x) and f_{x̂_n | x_n=−1}(x) with single Gaussian distributions. The parameters μ_n^{(+1)}, μ_n^{(−1)}, and σ_n² are derived in a manner similar to Eqs. (22) and (23). For the appMMSE-LE and appMMSE-DFE we obtain:

    μ_n^{(+1)} = E{x̂_n | x_n = +1} = c_UP^H s = −μ_n^{(−1)},
    σ_n² = E{x̂_n x̂_n* | x_n = +1} − |μ_n^{(+1)}|² = c_UP^H ( σ_w² I_N + H D_n H^H ) c_UP − |μ_n^{(+1)}|².

The mean values μ_n^{(+1)} and μ_n^{(−1)} are constant, whereas the variance σ_n² is time-varying and, compared to the effort to compute x̂_n, difficult to obtain. We can upper-bound σ_n² with σ_Max² = c_UP^H s − |μ_n^{(+1)}|², which can be shown, e.g., for the MMSE-LE using Eq. (13):

    c_UP^H ( σ_w² I_N + H D_n H^H ) c_UP − |μ_n^{(+1)}|² ≤ c_UP^H s − |μ_n^{(+1)}|²
    ⇔   c_UP^H ( σ_w² I_N + H D_n H^H ) c_UP ≤ c_UP^H ( σ_w² I_N + H H^H ) c_UP = c_UP^H s,

which holds since D_n ≼ I_N, where D_n ≼ I_N means that I_N − D_n is a positive semidefinite matrix. The proof for the MMSE-DFE is similar.
We can also estimate σ_n², given a length-L sequence of symbol estimates [ x̂_1 ⋯ x̂_L ]^T and prior information [ L(x_1) ⋯ L(x_L) ]^T, with the mean σ̂² = E_L{σ_n²} over the distribution of the prior information. To compute σ̂², we can use the time average to approximate E_L{σ_n²}:

    σ̂² ≈ (1/L) ( Σ_{n=1, x̂_n ≥ 0}^{L} | μ_n^{(+1)} − x̂_n |² + Σ_{n=1, x̂_n < 0}^{L} | μ_n^{(−1)} − x̂_n |² ),

where we assumed that x_n = +1 was sent whenever x̂_n ≥ 0, and vice versa. The SISO equalizer output is obtained as follows:

    L_e^MMSE(x_n) = 2 x̂_n μ_n^{(+1)} / σ̂²   or   L_e^MMSE(x_n) = 2 x̂_n μ_n^{(+1)} / σ_Max².    (42)

The SISO equalization algorithm (appMMSE-LE or appMMSE-DFE) given a length-L sequence of data x_n is summarized in Figures 7 and 8.

Input:
- received symbols z_n, prior information L(x_n),
- channel and receiver characteristics h[n], M1, M2, σ_w²,
- equalizer parameters N1, N2.

Initialization:
- compute M, N, H, s, and m_n^x ← tanh( L(x_n)/2 ) for (1 − M2 − N2) ≤ n ≤ (L + M1 + N1),
- compute the UP filter parameter vector and the mean parameter:
  c ← ( σ_w² I_N + H H^H )^{−1} s,   μ ← c^H s.

Compute estimates:
FOR n = 1 TO L DO
  m_n^x ← [ m_{n+M1+N1}^x ⋯ m_{n−M2−N2}^x ]^T,
  x̂_n ← c^H ( z_n − H m_n^x + m_n^x s ),
END.

Compute output:
FOR n = 1 TO L DO  L_e^MMSE(x_n) ← 2 x̂_n μ / σ̂².

Figure 7: O(N + M) approximate implementation (appMMSE-LE) of the SISO equalizer

Input:
- received symbols z_n, prior information L(x_n),
- channel and receiver characteristics h[n], M1, M2, σ_w²,
- equalizer parameters N, Nb ≤ M − 1.

Initialization:
- compute N1 ← N − M1 − 1, N2 ← M1, H, s,
- compute m_n^x ← tanh( L(x_n)/2 ) for (1 − M2 − N2) ≤ n ≤ (L + M1 + N1),
- compute the UP filter parameter vector and the mean parameter:
  c ← ( σ_w² I_N + H diag([ 1_{1×(M1+N1+1)}  0_{1×Nb}  1_{1×(M2+N2−Nb)} ]) H^H )^{−1} s,   μ ← c^H s.

Compute estimates:
FOR n = 1 TO L DO
  m_n ← [ m_{n+M1+N1}^x ⋯ m_n^x  x̂_{n−1}^d ⋯ x̂_{n−Nb}^d  m_{n−Nb−1}^x ⋯ m_{n−M2−N2}^x ]^T,
  x̂_n ← c^H ( z_n − H m_n + m_n^x s ),
  x̂_n^d ← h( x̂_n ),
END.

Estimate variance:
  σ̂² ← (1/L) ( Σ_{n=1, x̂_n ≥ 0}^{L} | μ − x̂_n |² + Σ_{n=1, x̂_n < 0}^{L} | μ + x̂_n |² ).

Compute output:
FOR n = 1 TO L DO  L_e^MMSE(x_n) ← 2 x̂_n μ / σ̂².

Figure 8: O(N + M) approximate implementation (appMMSE-DFE) of the SISO equalizer

6 Results

To obtain BER performance results, we simulated data transmission (10⁷ bits) of i.i.d. BPSK-modulated data symbols x_n over the length-11 channel

    H(z) = 0.04 z⁵ − 0.05 z⁴ + 0.07 z³ − 0.21 z² − 0.50 z + 0.72 + 0.36 z⁻¹ + 0.21 z⁻³ + 0.03 z⁻⁴ + 0.07 z⁻⁵.

The SISO equalizer filter parameters were set to (N1 = 7, N2 = 7) for the linear equalizer and (N1 = 9, N2 = 5, Nb = 10) for the decision feedback equalizer.

Several cases were examined. In the first case, the receiver has no prior information (L(x_n) = 0, ∀n) available. In the second case, prior information L(x_n) was generated according to the distribution

    f_L(l) = 1/20  for −10 ≤ l ≤ 10,  and 0 else,

and made available to the receiver.


For comparison, the four SISO equalizer implementations (MMSE-LE, MMSE-DFE, appMMSE-LE, appMMSE-DFE) compete with the performance of a BER-optimal MAP detector, which computes the posterior information L^MAP(x_n). The noise variance σ_w² is calculated according to the SNR (Es/N0)

10 log10( E{|z_n|²} / N0 ) = 10 log10( E{|x_n|²} Σ_{k=−M1}^{M2} |h_k|² / N0 ) = 10 log10( Σ_{k=−M1}^{M2} |h_k|² / (2 σ_w²) ) dB.
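This relation can be inverted to set σ_w² for a target Es/N0. A small sketch using the taps of the channel polynomial quoted above (for BPSK, E{|x_n|²} = 1; note the tap energy is almost exactly one):

```python
# channel taps h_k, k = -5..5, from the polynomial H(z) above (h_2 = 0)
h = [0.04, -0.05, 0.07, -0.21, -0.50, 0.72, 0.36, 0.0, 0.21, 0.03, 0.07]
energy = sum(hk ** 2 for hk in h)        # sum_k |h_k|^2, close to 1

def noise_variance(es_n0_db):
    # invert Es/N0 = 10 log10( sum_k |h_k|^2 / (2 sigma_w^2) )
    return energy / (2.0 * 10.0 ** (es_n0_db / 10.0))
```

Since the tap energy is 1.001, Es/N0 here is essentially 10 log10(1/(2σ_w²)).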

The results are shown in Figure 9.


For the standard application (no use of prior information), the BER-optimal MAP detector gives the best results, followed by the MMSE-DFE and the MMSE-LE, which is the usual order.
Using prior information, the performance order does not change. Of particular interest is the moderate performance degradation if the appMMSE-LE or the appMMSE-DFE is used. After all, the MAP detector provides a 1.0 dB gain (no use of prior information) and a 0.3 dB gain (using prior information) compared to the best SISO equalizer performance (MMSE-DFE) at 10^(−4) BER. However, for this length-11 channel the proposed algorithms require a computational load that is much smaller than that of the MAP detector.
The gain due to the use of the prior information, which strongly depends on the distribution f_L(l), is in the simulated case 2.4 dB (MMSE-LE), 1.4 dB (MMSE-DFE), and 0.9 dB (MAP detector) at 10^(−4) BER. Even more promising results were obtained when the SISO equalizer was used in an iterative approach for joint equalization and decoding of coded data transmission [4] or for transmission over dual channels [5], for which the algorithms described here are particularly well-suited.
For the case that the prior information L(x_n) is used alone to decide on the x̂_n as follows:

x̂_n = +1 if L(x_n) ≥ 0, and x̂_n = −1 if L(x_n) < 0,
Figure 9: BER versus Es/N0 performance of equalization without (first plot) and with (second plot) use of prior information. References: (—) MAP detector, (x—) prior information only. Proposed algorithms: (···) MMSE-LE, (o···) appMMSE-LE, (- -) MMSE-DFE, (o- -) appMMSE-DFE.

the BER can be obtained analytically from f_L(l) by

Pr{x̂_n ≠ x_n} = Pr{x̂_n = −1 | x_n = 1} Pr{x_n = 1} + Pr{x̂_n = 1 | x_n = −1} Pr{x_n = −1}
  = ∫_{−∞}^{∞} Pr{x̂_n = −1 | L(x_n) = l} Pr{x_n = 1 | L(x_n) = l} f_L(l) dl
  + ∫_{−∞}^{∞} Pr{x̂_n = 1 | L(x_n) = l} Pr{x_n = −1 | L(x_n) = l} f_L(l) dl
  = ∫_{−∞}^{0} exp(l)/(1 + exp(l)) f_L(l) dl + ∫_{0}^{∞} 1/(1 + exp(l)) f_L(l) dl,

using Bayes' rule and Eq. (3). The second plot in Figure 9 includes this BER, which equals 0.0693 for the given f_L(l). Using additionally the knowledge L_e^MMSE(x_n) or L_e^MAP(x_n), respectively, about x_n from the channel, the BER should decrease.
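The quoted value 0.0693 can be reproduced by numerically integrating the expression above for the uniform f_L(l) on [−10, 10] (a pure-Python midpoint-rule sketch):

```python
import math

def prior_only_ber(a=10.0, steps=200000):
    # Midpoint-rule evaluation of
    #   int_{-a}^{0} e^l/(1+e^l) f_L(l) dl + int_{0}^{a} 1/(1+e^l) f_L(l) dl
    # with f_L uniform on [-a, a], i.e. f_L(l) = 1/(2a).
    f_l = 1.0 / (2.0 * a)
    dl = a / steps
    total = 0.0
    for i in range(steps):
        l_neg = -a + (i + 0.5) * dl          # midpoint on [-a, 0]
        total += math.exp(l_neg) / (1.0 + math.exp(l_neg)) * f_l * dl
        l_pos = (i + 0.5) * dl               # midpoint on [0, a]
        total += f_l * dl / (1.0 + math.exp(l_pos))
    return total
```

The integral also has the closed form (ln 2 − ln(1 + e^(−10)))/10 ≈ 0.06931, which rounds to the 0.0693 quoted above.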

7 Conclusion and Discussion

The proposed algorithms were shown to yield a significant BER performance gain for MMSE equalization if prior information about the transmitted data is incorporated in the computation of the posterior information about the data.
Several implementations yielding O(N+M) or O(N²+M²) complexity per received symbol were developed. In simulations testing all algorithms, the performance improved if prior information was used. The BER-optimal techniques (MAP detector) perform significantly better. More likely is thus the application of the introduced algorithms as part of more complex receivers, e.g. for coded data transmission [4], where the performance using MAP-based receiver components could be met together with tremendous computational savings in the presence of long ISI channels.
An important extension of the presented algorithms is the adaptation to signaling schemes other than BPSK for the symbols x_n. To do this, the mapping from the estimator output x̂_n to the SISO equalizer output L_e^MMSE(x_n), and the mapping from the input priors L(x_n) to the statistics of x_n in Eq. (12), need to be rederived.
The set of possible implementations can be augmented with intermediate approaches between the proposed exact and approximate solutions, e.g. by using a finite precomputed set of estimator filter parameters c_n and quantized prior information to select a suitable parameter vector.
Further research includes a BER performance analysis of the SISO equalizer using prior information. Without such prior information, results for the MMSE-LE and the MMSE-DFE are already available [7]. Another area of current and future work is the development of algorithms and analysis results in the case of an unknown ISI channel with a possibly time-varying impulse response.


X_n → [ p(Z_n | X_n) ] → Z_n → [ Estimator ] → X̂_n

Figure 10: Estimation of the random variable X_n from the observed random variables Z_n.

8 Appendix A: Linear MMSE Estimation of a Random Variable

We give the derivation of linear MMSE equalization [10] applied to data transmission over an ISI channel as depicted in Figure 10. Given a sequence {X_n}_{n=−∞}^{∞} of random variables (r.v.) X_n, we observe the sequence {Z_n}_{n=−∞}^{∞} of r.v. Z_n, where X_n, Z_n ∈ ℂ. To estimate the value of a particular X_n, we look at the ordered subset Z_n = [Z_{n+N1} Z_{n+N1−1} ⋯ Z_{n−N2}]^T, N1, N2 ∈ ℤ, of the observed r.v., written as a length N = N1 + N2 + 1 vector.
The estimator computes the quantity X̂_n by minimizing the second moment of the magnitude of the error variable D_n = D_n^r + D_n^i ı = X_n − X̂_n, where ı = √(−1) and D_n^r and D_n^i are the real and imaginary part, respectively. To do this, the output X̂_n is obtained by the following linear model:

X̂_n = c_n Z_n + d_n,   (c_n, d_n) = argmin_{c ∈ ℂ^N, d ∈ ℂ} E{|D_n|²},   (43)

where c_n = c_n^r + c_n^i ı is a 1 × N parameter vector and d_n = d_n^r + d_n^i ı a scalar parameter. We can solve this optimization problem for c_n^r, c_n^i, d_n^r, and d_n^i using partial differentiation:

∂E{|D_n|²}/∂c_n^r = ∂E{(D_n^r)² + (D_n^i)²}/∂c_n^r = −2 E{D_n^r Re{Z_n} + D_n^i Im{Z_n}} = 0_N,
∂E{|D_n|²}/∂c_n^i = 2 E{D_n^r Im{Z_n} − D_n^i Re{Z_n}} = 0_N,
∂E{|D_n|²}/∂d_n^r = −2 E{D_n^r} = 0,
∂E{|D_n|²}/∂d_n^i = −2 E{D_n^i} = 0.

The first two and the last two lines can be written more compactly as

−2 E{D_n Z_n^*} = 0_N   and   −2 E{D_n} = 0.

Solving this equation system gives expressions for the parameters c_n and d_n:

c_n = Cov(X_n, Z_n) Cov(Z_n, Z_n)^(−1),
d_n = E{X_n} − c_n E{Z_n}.   (44)
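Eq. (44) can be sanity-checked numerically. The joint model below is our own synthetic choice (any jointly distributed X_n, Z_n would do), with sample moments standing in for the true covariances; the parameters of Eq. (44) should then attain a mean squared error no larger than that of nearby perturbed parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic joint model (hypothetical): scalar X with nonzero mean,
# two observations Z = X * a + noise.
Ls = 50000
X = rng.normal(2.0, 1.5, size=Ls)
a = np.array([0.8, -0.5])
Z = np.outer(X, a) + 0.3 * rng.normal(size=(Ls, 2))

# Eq. (44) with sample moments in place of the true covariances
mX, mZ = X.mean(), Z.mean(axis=0)
Cxz = ((X - mX)[:, None] * (Z - mZ)).mean(axis=0)     # Cov(X, Z), 1 x 2
Czz = (Z - mZ).T @ (Z - mZ) / Ls                      # Cov(Z, Z), 2 x 2
c = Cxz @ np.linalg.inv(Czz)
d = mX - c @ mZ

def mse(cv, dv):
    # empirical E{|X - (c Z + d)|^2} for candidate parameters (cv, dv)
    return float(np.mean((X - (Z @ cv + dv)) ** 2))

opt = mse(c, d)
```

Because Eq. (44) solves the normal equations of the affine fit, (c, d) is exactly the minimizer of the empirical MSE, so any perturbation of c or d can only increase it.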

References

[1] R. Chang and J. Hancock, "On receiver structures for channels having memory," IEEE Transactions on Information Theory, vol. IT-12, pp. 463-468, October 1966.
[2] L. Bahl et al., "Optimal decoding of linear codes for minimizing symbol error rate," IEEE Transactions on Information Theory, vol. 20, pp. 284-287, March 1974.
[3] G. D. Forney, "Maximum-likelihood sequence estimation of digital sequences in the presence of intersymbol interference," IEEE Transactions on Information Theory, vol. 18, pp. 363-378, May 1972.
[4] M. Tüchler, "Iterative equalization using priors," Master's thesis, University of Illinois at Urbana-Champaign, U.S.A., 2000.
[5] J. Nelson, A. Singer, and R. Koetter, "Linear iterative turbo-equalization (LITE) for dual channels," Proc. of the Thirty-Third Asilomar Conf. on Signals, Systems, and Computers, 1999.
[6] R. Graham, D. Knuth, and O. Patashnik, Concrete Mathematics, 2nd Edition. Reading, Massachusetts: Addison-Wesley, 1994.
[7] J. Smee and N. Beaulieu, "New methods for evaluating equalizer error rate performance," Proc. IEEE 45th Vehicular Technology Conference, vol. 1, pp. 87-91, 1995.
[8] X. Wang and H. Poor, "Turbo multiuser detection and equalization for coded CDMA in multipath channels," IEEE International Conference on Universal Personal Communications, vol. 2, pp. 1123-1127, 1998.
[9] S. Haykin, Adaptive Filter Theory, 3rd Edition. Upper Saddle River, New Jersey: Prentice Hall, 1996.
[10] H. Poor, An Introduction to Signal Detection and Estimation, 2nd Edition. New York: Springer Verlag, 1994.