IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 44, NO. 5, MAY 1996
I. INTRODUCTION
WIRELESS communication systems are witnessing rapid
advances in volume and range of services. A major
challenge for these systems today is the limited radio frequency spectrum available. Approaches that increase spectrum
efficiency are therefore of great interest. One promising approach is to use antenna arrays at cell sites. Array processing
techniques can then be used to receive and transmit multiple
signals that are separated in space. Hence, multiple co-channel
users can be supported per cell to increase capacity. In this paper, we study the problem of separating multiple synchronous digital signals received at an antenna array [1]. The goal is
to reliably demodulate each signal in the presence of other
co-channel signals and noise. The complementary problem of
transmitting to multiple receivers with minimum interference
at each receiver has been studied in [2]-[4].
Several algorithms have been proposed in the array processing literature for separating co-channel signals based on
availability of prior spatial or temporal information. The traditional spatial algorithms combine high-resolution direction-finding techniques such as MUSIC and ESPRIT [5], [6] with
optimum beamforming to estimate the signal waveforms [7],
[8]. However, these algorithms require that the number of
Manuscript received December 7, 1994; revised October 16, 1995. The
work of S. Talwar was supported by the Computational Science Graduate
Fellowship Program of the Office of Scientific Computing, U.S. Department
of Energy. The associate editor coordinating the review of this paper and
approving it for publication was Prof. Michael D. Zoltowski.
S. Talwar is with the Scientific Computing and Computational Mathematics Program, Stanford University, Stanford, CA 94305 USA.
M. Viberg is with the Department of Applied Electronics, Chalmers University of Technology, S-41296 Gothenburg, Sweden.
A. Paulraj is with the Information Systems Laboratory, Stanford University, Stanford, CA 94305 USA.
Publisher Item Identifier S 1053-587X(96)03072-3.
II. PROBLEM FORMULATION
Consider d narrowband signals impinging at an array of m sensors with arbitrary characteristics. The signal waveform
received at each sensor is demodulated with respect to the carrier frequency (assuming perfect carrier phase lock recovery).
The m x 1 vector of sensor outputs, x(t), in the absence of multipath, is given by

x(t) = Σ_{k=1}^{d} p_k a(θ_k) s_k(t) + v(t)   (1)

s_k(t) = Σ_{l=1}^{N} b_k(l) g(t − lT)   (2)

Sampling the matched-filter output at the symbol rate yields

x(n) = Σ_{k=1}^{d} p_k a_k b_k(n) + v(n)   (3)

x(n) = As(n) + v(n)   (4)

where x(n) is the filtered data, s(n) = [b_1(n) ... b_d(n)]^T, v(n) is additive white noise, and A is an m x d matrix of array responses scaled by the signal amplitudes, A = [p_1 a_1 ... p_d a_d].
Assuming that the channel is constant over the available N
symbol periods, we obtain the following block formulation of
the data
X(N) = AS(N) + V(N)   (5)

where X(N) = [x(1) ... x(N)], S(N) = [s(1) ... s(N)], and V(N) = [v(1) ... v(N)]. The matrix A represents the spatial structure of the data, and the matrix S represents its temporal structure. The problem addressed in this paper is the combined estimation of the array response matrix A and the symbol matrix S(N), given the array output X(N). We assume that the number of signals is known or has been estimated [19]. For
notational convenience, we denote X ≡ X(N) and S ≡ S(N)
from here on.
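For concreteness, the block model (5) is easy to simulate. The sketch below is our own illustration, not part of the original development: the half-wavelength uniform linear array, the arrival angles, the noise level, and the BPSK alphabet are all assumed values (the formulation itself allows arbitrary sensor characteristics).

```python
import numpy as np

rng = np.random.default_rng(0)
m, d, N = 4, 3, 100                      # sensors, co-channel signals, symbols

# Array response matrix A (m x d). A half-wavelength uniform linear array is
# assumed here purely for illustration.
angles = np.deg2rad([10.0, 16.0, 25.0])  # arrival angles from broadside
A = np.exp(1j * np.pi * np.outer(np.arange(m), np.sin(angles)))

# BPSK symbol matrix S(N) with entries in {+1, -1}, and white noise V(N).
S = rng.choice([-1.0, 1.0], size=(d, N))
sigma = 0.1
V = sigma * (rng.standard_normal((m, N))
             + 1j * rng.standard_normal((m, N))) / np.sqrt(2)

X = A @ S + V                            # block formulation (5)
```

The estimation problem of this paper is then: recover both A and S from X alone.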
Here, N is the number of symbols in a data batch (burst), {b_k(·)} is the symbol sequence of the kth user, T is the symbol period, and g(·) is the symbol waveform. We let g(·) be a square-root raised cosine waveform so that matched filtering yields a pulse that satisfies the Nyquist criterion.

III. IDENTIFIABILITY
ÂŜ = AT^{-1}TS = AS
t_d − (t_{d−1} + ... + t_2 + t_1) = ±1
t_d − (t_{d−1} + ... + t_2 − t_1) = ±1
        ...
t_d + (t_{d−1} + ... + t_2 + t_1) = ±1
t_d + (t_{d−1} + ... + t_2 − t_1) = ±1.
The entries of Ŝ are ±1's. Thus, from (13), we see that the problem is reduced to one of identifiability of binary signals. For this problem, we have shown previously that t is a trivial solution, and consequently, T is an ATM.
Finally, the generalization to complex signals is straightforward. In this case, we see that the signals can be identified uniquely up to a factor {+1, −1, +j, −j}. As before, we have the system of equations Ŝ = TS, where S and Ŝ are now complex matrices with elements in Ω = {±1, ±3, ..., ±(L−1)} ⊕ {±j, ±j3, ..., ±j(L−1)}. We are interested in finding a condition on the columns of S such that T is an ATM. For complex signals, we extend the definition of an ATM to a nonsingular matrix with one nonzero element {±1, ±j} in each row and column.
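The extended ATM definition can be verified numerically. The following toy example is ours (L = 4 is an assumed value): a matrix T with exactly one entry from {±1, ±j} in each row and column maps any matrix over Ω back into Ω, which is precisely the residual ambiguity left by the model.

```python
import numpy as np

rng = np.random.default_rng(1)

# A complex ATM: nonsingular, one entry from {+1, -1, +j, -j} per row/column.
T = np.array([[0, 1j, 0],
              [-1, 0, 0],
              [0, 0, -1j]])

# Alphabet for L = 4: real levels {±1, ±3} plus imaginary levels {±j, ±j3}.
levels = np.array([-3, -1, 1, 3])
alphabet = np.array([a + 1j * b for a in levels for b in levels])

S = rng.choice(levels, size=(3, 8)) + 1j * rng.choice(levels, size=(3, 8))
S_tilde = T @ S

# Every entry of T S is again an alphabet symbol, so (A T^{-1}, T S) fits the
# data exactly as well as (A, S).
in_alphabet = all(np.min(np.abs(v - alphabet)) < 1e-12 for v in S_tilde.ravel())
print(in_alphabet)  # True
```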
We begin by noting that multiplication of complex matrices
is isomorphic to multiplication of real matrices with twice the
dimensions [21]. In particular, we have
[ Re{Ŝ}  −Im{Ŝ} ]   [ Re{T}  −Im{T} ] [ Re{S}  −Im{S} ]
[ Im{Ŝ}   Re{Ŝ} ] = [ Im{T}   Re{T} ] [ Im{S}   Re{S} ]
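The isomorphism can be checked directly. In this small numerical sketch (dimensions chosen arbitrarily by us), the 2x-sized real representation of a product equals the product of the real representations:

```python
import numpy as np

def real_rep(M):
    """Map a complex matrix to its real representation of twice the size."""
    return np.block([[M.real, -M.imag],
                     [M.imag,  M.real]])

rng = np.random.default_rng(2)
T = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
S = rng.standard_normal((3, 5)) + 1j * rng.standard_normal((3, 5))

# Multiplication commutes with the representation: rep(T S) = rep(T) rep(S).
lhs = real_rep(T @ S)
rhs = real_rep(T) @ real_rep(S)
print(np.allclose(lhs, rhs))  # True
```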
(16)
Proof: See Appendix A.2.  □
We see from (16) that, for a fixed value of d, as N increases, p approaches 1. The probability q = 1 − p that one of the L^d/2 distinct vectors is not picked can then be bounded by
IV. MAXIMUM-LIKELIHOOD ESTIMATION
In this section, we consider the problem of estimating
the digital signals in the presence of noise. From Section
II, we see that the signals can be modeled as unknown deterministic sequences corrupted by white Gaussian noise: x(n) = As(n) + v(n), where E[v(n)v(k)*] = σ^2 I δ_{nk}.
The maximum-likelihood (ML) estimator yields the following separable least-squares minimization problem:

min_{A, S(N)} ||X(N) − AS(N)||_F^2   (17)

in the variables A and S(N), which are, respectively, continuous and discrete. We assume N is large enough to ensure unique signal and array response estimates.
It is proved in [22] that the minimization can be carried out
in two steps. First, we minimize (17) with respect to A since
it is unconstrained
Â = X(N)S(N)† = X(N)S(N)*(S(N)S(N)*)^{-1}
Then, substituting A back into (17), we obtain a new criterion,
which is a function of S ( N ) only as follows:
V. BLOCK ALGORITHMS
The block algorithms ILSP and ILSE were introduced in
[1]. These algorithms take advantage of the ML estimator
in (17) being separable in the variables A and S. The ML
criterion is minimized with respect to the two variables using
an alternating minimization procedure. The idea is to visit the
received data iteratively until a best fit with the channel (array
response) and signal model is obtained. The ILSP algorithm
performs well with no prior estimate of the array responses,
and can be used to initialize ILSE. For a sufficiently good
initialization, the ILSE algorithm converges rapidly to the ML
estimate of the array responses and signal symbol sequences.
Apart from their efficiency, the algorithms are naturally parallelizable, and can be easily extended for recursive estimation
(see Section VI).
A. ILSP Algorithm
1) Given an initial estimate A_0; set k = 0.
2) k = k + 1:
   S_k = (A_{k−1}*A_{k−1})^{-1}A_{k−1}*X
   S_k ← proj[S_k]
   A_k = XS_k*(S_kS_k*)^{-1}
3) Repeat 2) until (A_k, S_k) = (A_{k−1}, S_{k−1}).
In the above description of the algorithm, proj[.] implies
projection onto a discrete alphabet. However, this definition
may be extended to projection onto a constant modulus or
any other signal characteristic. Moreover, in scenarios where
the array manifold structure is applicable, A k can be projected
onto the manifold at each iteration. Hence, the ILSP algorithm
outlines a general approach for imposing known structure on
variables A and S in a minimization criterion of the form
||X − AS||_F^2. The main advantage of the algorithm is its low computational complexity. At each iteration, two least-squares problems are solved, each requiring O(Nmd) flops for N >> m. In particular, Nmd + 2d^2(m + d/3) + md^2 flops are required to solve for A, and Nmd + 2d^2(m − d/3) + Nd^2 flops to solve for S. Thus, the algorithm's complexity is polynomial in N and d.
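The two least-squares solves and the projection step can be sketched as follows. This is our own minimal implementation for a BPSK alphabet assuming the data model (5); the function name, the convergence test, and the synthetic scenario are illustrative choices, not the authors' code.

```python
import numpy as np

def ilsp(X, A0, max_iter=50):
    """Iterative least squares with projection (ILSP), BPSK alphabet {+1, -1}.

    Alternates the two least-squares solves:
      S_k = (A*A)^{-1} A* X, projected entrywise onto the alphabet,
      A_k = X S* (S S*)^{-1},
    until the projected symbol matrix stops changing.
    """
    A, S_prev = A0, None
    for _ in range(max_iter):
        S = np.linalg.pinv(A) @ X             # unconstrained LS for S
        S = np.where(S.real >= 0, 1.0, -1.0)  # proj[.] onto {+1, -1}
        A = X @ np.linalg.pinv(S)             # LS for A with S fixed
        if S_prev is not None and np.array_equal(S, S_prev):
            break
        S_prev = S
    return A, S

# Synthetic check: well-conditioned A, light noise, perturbed initialization.
rng = np.random.default_rng(2)
m, d, N = 4, 3, 100
A_true = rng.standard_normal((m, d)) + 1j * rng.standard_normal((m, d))
S_true = rng.choice([-1.0, 1.0], size=(d, N))
X = A_true @ S_true + 0.01 * rng.standard_normal((m, N))
A_hat, S_hat = ilsp(X, A_true + 0.1 * rng.standard_normal((m, d)))
print(np.mean(S_hat == S_true))  # close to 1 for a good initialization
```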
Real Signals: If the signals belong to a real alphabet, we
take advantage of this fact by constraining the imaginary part
of S to be zero in each step, and thereby reducing the number
of unknowns by half. Equivalently, we can minimize a slightly modified criterion f(A_R, S; X_R) by noting that

||X − AS||_F^2 = ||[Re{X} − Re{A}S] + j[Im{X} − Im{A}S]||_F^2
             = ||Re{X} − Re{A}S||_F^2 + ||Im{X} − Im{A}S||_F^2
             = ||X_R − A_R S||_F^2

where X_R = [Re{X}^T Im{X}^T]^T and A_R = [Re{A}^T Im{A}^T]^T.
min_{W, S∈Ω} ||W*X − S||_F^2   (19)

S_{k+1} = proj[(XS_k†)†X]   (20)

S_{k+1} = proj[S_kX†X].   (21)
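For a real alphabet, the norm equivalence behind this reformulation is easy to confirm numerically. The sketch below is ours (arbitrary dimensions; X_R and A_R stack real parts over imaginary parts) and checks that the two criteria agree:

```python
import numpy as np

rng = np.random.default_rng(3)
m, d, N = 4, 3, 20
X = rng.standard_normal((m, N)) + 1j * rng.standard_normal((m, N))
A = rng.standard_normal((m, d)) + 1j * rng.standard_normal((m, d))
S = rng.choice([-1.0, 1.0], size=(d, N))   # real (BPSK) symbols

# Stack real and imaginary parts: X_R is 2m x N, A_R is 2m x d.
X_R = np.vstack([X.real, X.imag])
A_R = np.vstack([A.real, A.imag])

lhs = np.linalg.norm(X - A @ S, 'fro') ** 2
rhs = np.linalg.norm(X_R - A_R @ S, 'fro') ** 2
print(np.isclose(lhs, rhs))  # True
```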
B. ILSE Algorithm
A limitation of the ILSP algorithm is that its performance
is limited by that of the ML beamformer. This is easily seen
by considering the case where A is known, and the ML
criterion is to be minimized with respect to the variable S
only. In ILSP, S ∈ Ω is not estimated directly, but in two steps: (i) least squares and (ii) projection. The least-squares step causes noise enhancement if the array response vectors are not well separated in angle, i.e., A is ill conditioned. The optimal approach is to enumerate over all possible S matrices with elements in Ω, and choose the S that minimizes ||X − AS||_F^2. However, this is computationally demanding, since L^{dN} matrices need to be considered. Fortunately, the search can be reduced to enumerating L^d vectors in Ω (N times) by exploiting the following property of the Frobenius norm:

min_{S∈Ω} ||X − AS||_F^2 = min_{s(1)∈Ω} ||x(1) − As(1)||_2^2 + ... + min_{s(N)∈Ω} ||x(N) − As(N)||_2^2.   (22)
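The decoupled search in (22) can be sketched as follows; this is our own illustrative implementation for a known A and a BPSK alphabet (the function and variable names are ours):

```python
import itertools
import numpy as np

def enum_ml_symbols(X, A, alphabet=(-1.0, 1.0)):
    """Enumerate the L^d candidate vectors once and, for each symbol period n,
    pick the s(n) minimizing ||x(n) - A s(n)||_2, per the decoupling in (22)."""
    d = A.shape[1]
    cands = np.array(list(itertools.product(alphabet, repeat=d))).T  # d x L^d
    # residuals[n, c] = ||x(n) - A s_c||_2
    residuals = np.linalg.norm(X[:, :, None] - (A @ cands)[:, None, :], axis=0)
    return cands[:, residuals.argmin(axis=1)]

rng = np.random.default_rng(3)
m, d, N = 4, 3, 50
A = rng.standard_normal((m, d)) + 1j * rng.standard_normal((m, d))
S = rng.choice([-1.0, 1.0], size=(d, N))
X = A @ S + 0.01 * rng.standard_normal((m, N))
print(np.array_equal(enum_ml_symbols(X, A), S))  # True at this SNR
```

The candidate list is built once (L^d vectors) and reused for all N symbol periods, which is the entire point of (22).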
1191
Note that in rows 1, 16, 22, 27, etc., there are multiple gray points. These correspond to cases where A(i) is singular; thus, multiple S(j)'s yield the same residual. We have shown a few of the paths that may be taken by the algorithm, depending on the initial pair (A_0, S_0). Paths to the global minima are indicated by solid lines, and paths to other fixed points by dashed lines. We note that for this particular scenario, 56 of the 64 possible paths lead to global solutions, and eight paths lead to other fixed points.
VI. RECURSIVE ALGORITHMS
In this section, we consider two classes of recursive algorithms for estimating the received signals. In recursive estimation, we are interested in solving the following minimization problem at symbol period n:

min_{A, s(n)∈Ω} ||[X(n) − AS(n)]B(n)^{1/2}||_F^2
where X(n) = [X(n−1) x(n)], S(n) = [S(n−1) s(n)], and B(n) = diag(α^{n−1}, α^{n−2}, ..., 1) is a diagonal weighting matrix for some 0 < α < 1. Our objective is to compute A(n) and s(n), assuming that a good estimate of S(n−1) (or equivalently A(n−1)) is available. This estimate may be obtained blindly by using the block algorithms of the previous section or by a short training set. The exponential weighting is used to de-emphasize old data in a time-varying environment. The fading memory least-squares solution for A(n) is given by
X*(n)X(n) = [ X*(n−1)X(n−1)   X*(n−1)x(n)
              x*(n)X(n−1)     x*(n)x(n)  ]          (42)

Multiplying (40) and (42), and using properties of the trace, we can rewrite (39) as

max_{s(n)∈Ω} tr[ α^2 H(n−1)P(n)H*(n−1) + αH(n−1)P(n)s(n)x*(n)
               + αP(n)H*(n−1)x(n)s*(n) + P(n)s(n)x*(n)x(n)s*(n) ]   (43)

where

H(n) = αH(n−1) + x(n)s(n)*          (44)

and P_{B(n)^{1/2}S(n)*} is the d-dimensional projection matrix defined by the rows of S(n)B(n)^{1/2} as follows:

P_{B(n)^{1/2}S(n)*} = B(n)^{1/2}S(n)* (S(n)B(n)S(n)*)^{-1} S(n)B(n)^{1/2}   (37)

with P⊥_{B(n)^{1/2}S(n)*} = I_n − P_{B(n)^{1/2}S(n)*} and P(n) = (S(n)B(n)S(n)*)^{-1}. The maximization in (39) over s(n) ∈ Ω involves

B(n)^{1/2} P_{B(n)^{1/2}S(n)*} B(n)^{1/2} = B(n)S(n)* P(n) S(n)B(n)

which can be partitioned as in (40) at the bottom of the page. The above follows by noting that the weighting matrix can be expressed as

B(n) = [ αB(n−1)   0
         0         1 ].          (41)
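The rank-one recursion (44) is the computational core here: the exponentially weighted cross statistic never needs to be recomputed from scratch. In the sketch below (our own illustration; α, the dimensions, and the batch cross-check against X(n)B(n)S(n)* are assumed choices), the recursive and batch computations agree.

```python
import numpy as np

rng = np.random.default_rng(5)
m, d, n_max, alpha = 4, 3, 200, 0.95

H = np.zeros((m, d), dtype=complex)
X_hist, S_hist = [], []
for n in range(n_max):
    s = rng.choice([-1.0, 1.0], size=d)                 # new symbol vector
    x = rng.standard_normal(m) + 1j * rng.standard_normal(m)  # new snapshot
    H = alpha * H + np.outer(x, s.conj())               # recursion (44)
    X_hist.append(x)
    S_hist.append(s)

# Batch check: H(n) equals X(n) B(n) S(n)* with B(n) = diag(a^{n-1}, ..., 1).
X = np.array(X_hist).T
S = np.array(S_hist).T
B = np.diag(alpha ** np.arange(n_max - 1, -1, -1))
print(np.allclose(H, X @ B @ S.conj().T))  # True
```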
VII. SIMULATION RESULTS
We present the results of three different sets of simulations
in this section. For simplicity, we assume a uniform linear
array of m = 4 sensors. In the first set of simulations, we
study the performance of the block algorithms for a block size
of N = 100. We consider d = 3 digitally modulated BPSK
signals arriving from [10, 16, 25]° relative to array broadside.
We assume all three signals have equal powers. Starting with
[ α^2 B(n−1)S*(n−1)P(n)S(n−1)B(n−1)    αB(n−1)S*(n−1)P(n)s(n)
  αs*(n)P(n)S(n−1)B(n−1)               s*(n)P(n)s(n)         ]          (40)
Fig. 2. Bit error rate for ILSP.
Fig. 3. [Plot vs. SNR (dB).]
Fig. 4. Number of iterations for ILSP and ILSE.
Fig. 5. Comparison of training-based RLSP (*) and RLSE (o) algorithms with FA-based ILSP (dashed-dotted line) and ILSE (solid line) algorithms.
Fig. 6.
a_{1,1} = ±t_1   (46)
a_{1,2} = ±t_1   (47)

a_{2,1} = t_2 + a_{1,1} = ±1
a_{2,2} = t_2 − a_{1,1} = ±1

and, in general,

a_{k+1,1} = t_{k+1} + a_{k,1} = ±1   (48)
a_{k+1,2} = t_{k+1} − a_{k,1} = ±1   (49)
        ...
a_{k+1,n} = t_{k+1} + a_{k,n−1} = ±1.   (52)

Adding and subtracting pairs of these equations yields t_k + a_{k−1,1} = 0 and t_k − a_{k−1,1} = 0.
P(k) = P(∪_{i=1}^{L^d/2} A_i) ≤ Σ_{i=1}^{L^d/2} P(A_i)   (54)

since the events A_i are not disjoint. Now, the probability that +x_i and −x_i are not picked in the kth trial is one minus the probability that +x_i or −x_i is picked, i.e., (1 − 2/L^d). Since the trials are independent, P(A_i) is (1 − 2/L^d)^k. Then from (54), we get

q ≤ (L^d/2)(1 − 2/L^d)^k.
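The union bound from (54) is easy to evaluate; the sketch below (our own helper function, with L and d chosen purely for illustration) shows the geometric decay of the miss probability with the number of trials.

```python
def miss_probability_bound(L, d, n_trials):
    """Union bound from (54): probability that at least one of the L**d / 2
    sign-distinct symbol vectors never appears among n_trials i.i.d. uniform
    draws, each specific +/- pair being drawn w.p. 2 / L**d per trial."""
    return (L ** d / 2) * (1 - 2 / L ** d) ** n_trials

# BPSK (L = 2) with d = 3 users: the bound is 4 * 0.75**n_trials.
for n in (50, 100, 200):
    print(n, miss_probability_bound(2, 3, n))
```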
t^T = −[(3 − d)s_1 + s_2 + ... + s_d,  s_1 − s_2,  ...,  s_1 − s_d].   (60)
Since s_d is a vector with ±1 entries, there are a finite number of possibilities for t. The question then becomes whether there exists another vector of ±1's in S such that only a trivial t is possible. The answer to this question is the vector
We see from (57) that t must satisfy t^T s_{d+1} = s_{d+1}, which yields the relation

(2 − d)s_1 + s_2 + ... + s_d = s_{d+1}   (61)

where

s_d = [ +1  +1  ...  +1
        −1  +1  ...  +1
        +1  −1  ...  +1
         ...
        +1  +1  ...  −1 ].

If s_1 = +1 and k of the entries s_2, ..., s_d equal −1, this becomes

(2 − d)(+1) + k(−1) + (d − 1 − k)(+1) = s_{d+1}.   (62)
[1] S. Talwar, M. Viberg, and A. Paulraj, "Blind estimation of multiple co-channel digital signals using an antenna array," IEEE Signal Processing Lett., vol. 1, no. 2, pp. 29-31, Feb. 1994.
[2] D. Gerlach and A. Paulraj, "Adaptive transmitting antenna arrays with feedback," IEEE Signal Processing Lett., vol. 1, no. 10, pp. 150-152, Oct. 1994.
[3] G. Raleigh, S. Diggavi, V. Jones, and A. Paulraj, "A blind adaptive transmit antenna algorithm for wireless communication," in Proc. IEEE ICC, 1995.
[29] W. I. Zangwill, Nonlinear Programming: A Unified Approach. Englewood Cliffs, NJ: Prentice-Hall, 1969.
[30] S. S. Haykin, Introduction to Adaptive Filters. New York: Macmillan, 1984.
[31] S. Talwar and A. Paulraj, "Performance analysis of blind digital signal copy algorithms," in Proc. MILCOM, 1994, vol. 1, pp. 123-128.
[32] S. Talwar, A. Paulraj, and M. Viberg, "Blind separation of synchronous co-channel digital signals using an antenna array. Part II: Performance analysis," IEEE Trans. Signal Processing, submitted for publication.
[33] A. J. van der Veen, S. Talwar, and A. Paulraj, "Blind estimation of multiple digital signals transmitted over FIR channels," IEEE Signal Processing Lett., vol. 2, pp. 99-102, May 1995.
[34] G. H. Golub and C. F. Van Loan, Matrix Computations. Baltimore, MD: Johns Hopkins Univ. Press, 1984.