
Direction-of-Arrival Estimation

Abstract

Direction-of-Arrival (DOA) estimation deals with estimating the direction from which electromagnetic or acoustic signals arrive. Sensor or antenna arrays are used for this purpose. DOA estimation is used for locating and tracking signal sources in both civilian and military applications such as radar, radio astronomy, wireless communication, navigation, and the localization and tracking of various objects. Direction-of-arrival estimates can also be used to adapt the directivity of antenna arrays. In this case, the array receives signals from a specific direction only and rejects signals from other directions.

There are many methods and techniques of Direction-of-Arrival estimation, including classical methods, practical direction-finding (DF) methods, higher-order statistical methods and many others. The goal of this project is to investigate various direction-finding algorithms and improve their accuracy, with special coverage of methods based on higher-order statistics.


Introduction

Propagating fields are often sensed by an array of sensors or transducers arranged in a particular configuration. These sensors convert the received signal into electrical signals; in the case of acoustic waves, the sensors are microphones. Different antenna array configurations are used, among which the more popular are uniform linear arrays, uniform planar arrays and circular arrays.

Figure 1.1: A uniform linear array receiving two signals

Figure 1.2: A uniform planar array receiving two signals

1.1 Propagation Delay in Uniform Linear Arrays


Consider a linear array as shown in the following figure. The distance d between adjacent sensors in this array should be at most half the wavelength λ of the signal, that is

d ≤ λ/2

This prevents spatial ambiguities (spatial aliasing). Lowering the array spacing below this upper limit provides redundant information but, on the other hand, reduces the aperture for a fixed number of sensors.

Figure 1.3: Propagation delay in uniform linear array

Using basic trigonometry, the delay of arrival between adjacent sensors can be computed as

τ = d sin(θ) / v

where v is the velocity of the signal and θ is the angle of arrival.

Thus, the Direction-of-Arrival can be estimated if we know the sensor spacing, the velocity of the signal, and the time delay of the signal. In the ideal situation, when there is no noise and no multipath propagation, the time delay can be found experimentally from the data received by the array. Thus, in this case, the angle of arrival of the wave can be taken from

θ = sin^{-1}(vτ / d)
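As a quick numerical check, the relation above can be evaluated directly in MATLAB. The following is a minimal sketch with assumed, purely illustrative values for the spacing, velocity and delay:

% DOA from a measured time delay (assumed illustrative values)
v = 3e8;          % propagation velocity (m/s), electromagnetic wave assumed
d = 0.5;          % sensor spacing (m), chosen so that d <= lambda/2
tau = 1.2e-9;     % measured delay between adjacent sensors (s), hypothetical
theta = asin(v*tau/d);    % angle of arrival in radians
theta_deg = theta*180/pi  % approximately 46 degrees for these values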

Before going into the details of signal modeling and advanced direction-of-arrival techniques, it is necessary to have an idea of the covariance matrix and of its decomposition into signal and noise subspaces.


1.2 Outline of the report


The Linear Algebra chapter gives brief details of the concepts of eigenvalue decomposition and singular value decomposition and of the properties necessary to understand subspace-based approaches.
The Frequency Estimation chapter gives the details and simulations of some of the subspace-based frequency estimation techniques.
The Future Work chapter briefly describes the goals to be achieved in the future.


Linear Algebra
2.1 Vector Spaces
A vector space is a nonempty set V of vectors on which addition and multiplication by scalars (real numbers) are defined, subject to the following axioms:

1. The sum of u and v (i.e. u + v) is in V.
2. u + v = v + u
3. (u + v) + w = u + (v + w)
4. There is a null vector 0 in V such that u + 0 = u.
5. For each u in V, there is a vector -u in V such that their sum gives the null vector, i.e. u + (-u) = 0.
6. The scalar multiple of u by a scalar c (i.e. cu) is also in V.
7. c(u + v) = cu + cv
8. (c + d)u = cu + du, where c and d are scalars.
9. c(du) = (cd)u
10. 1u = u

where u, v and w are vectors in vector space V.

Figure 2.1: Vector space demonstration (v, w, v+w, 2w and v+2w, all belong to vector space V)

2.1.1 Subspaces
A subspace is a subset of a larger vector space that is itself a vector space, i.e. it is closed under vector addition and scalar multiplication.


2.1.2 Column space and row space


The column space of a matrix is the set of all possible linear combinations of its column vectors. The column space of a matrix is also called the range of that matrix. In MATLAB, B = colspace(A) returns a basis for the column space of a symbolic matrix A.

The row space of a matrix is the set of all possible linear combinations of its row vectors.

For three matrices A, B and C, if C = AB then the column space of C is a subspace of the column space of A and the row space of C is a subspace of the row space of B.

2.1.3 Null Space


If for a matrix A there is a vector x such that Ax = 0, then x is in the null space of A. In MATLAB, Z = null(A) returns an orthonormal basis for the null space of A; the product AZ is zero.
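The property AZ = 0 is easy to verify numerically; the sketch below uses an arbitrary rank-deficient example matrix:

% verifying the null space property A*Z = 0
A = [1 2 3; 2 4 6];   % rank-1 matrix, hypothetical example
Z = null(A);          % orthonormal basis for the null space of A
disp(norm(A*Z))       % close to zero (machine precision)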

2.2 Rank of a matrix


The row rank of a matrix A is the maximal number of linearly independent rows of A. Similarly, the column rank of A is the maximal number of linearly independent columns of A. The column rank and the row rank of a matrix are always equal:

rank = column rank = row rank

So, the rank of a matrix gives the dimension of its column (or row) space.

For any two matrices A and B that can be multiplied,

rank(AB) ≤ min(rank(A), rank(B))

If matrix A is of order m×n and B is of order n×k, and B is a full-rank matrix whose rank is n, then

rank(AB) = rank(A)

If matrix A is of order m×n and a matrix C is of order l×m, and C is full rank with rank m, then

rank(CA) = rank(A)
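These rank properties can be checked numerically; the sketch below uses random matrices, for which the stated ranks hold with probability one:

% checking the rank properties with random full-rank factors
A = randn(4,3); B = randn(3,5);      % hypothetical example matrices
rank(A*B)                            % equals rank(A) = 3, since B has full rank n = 3
rank(A*B) <= min(rank(A), rank(B))   % returns 1 (true)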

2.3 Eigenvalue Decomposition


A square matrix A can be decomposed into its eigenvalues and eigenvectors in the following form:

A_{n×n} = V_{n×n} D_{n×n} V^{-1}_{n×n}

If A is full rank, then V is the matrix whose columns span the column space of A. The columns of V are called the eigenvectors of A, and the diagonal matrix D contains the eigenvalues of A on its diagonal. The rows of V^{-1} span the row space of A.


If the rank of A is r (< n), then A has only r non-zero eigenvalues. The r eigenvectors in V corresponding to the r non-zero eigenvalues of A span the column space of A. The remaining n-r eigenvectors correspond to zero eigenvalues and come from the null space of A.

If the rank of the matrix A is r (< n), then A can be reconstructed using the non-zero eigenvalues and their corresponding eigenvectors. Mathematically,

A_{n×n} = V_{n×r} D_{r×r} (V^{-1})_{r×n}

where V_{n×r} holds the r eigenvectors with non-zero eigenvalues and (V^{-1})_{r×n} holds the corresponding rows of V^{-1}. Here, the columns of V_{n×r} span the column space of A. In MATLAB, [V,D] = eig(A) produces the eigenvector matrix V and the eigenvalue matrix D of A.

According to the spectral theorem, if a matrix A is Hermitian, then the eigendecomposition of A can be written as

A_{n×n} = V_{n×r} D_{r×r} V^H_{n×r}

where r is the rank of A (r = n if A is full rank). When A is Hermitian, V is a set of orthonormal eigenvectors. For any unitary matrix V, the inverse is V^H, which satisfies the above decomposition.

Since the columns of A are linear combinations of the columns of V, A can be reproduced using V as A = VT, where T is the matrix of coefficients that gives A when multiplied with the eigenvector matrix V. Now, if the order of A is n×n and the rank of A is r (< n), only the r eigenvectors that correspond to the non-zero eigenvalues of A are enough to reconstruct A, as A = V_{n×r} T, where T is the transformation which, when multiplied with V_{n×r}, produces A. If A is Hermitian and we only know the matrix V_{n×r}, then V_{n×r} V^H_{n×r} gives the orthogonal projection onto the column space of A.
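The decomposition and the orthonormality of V are easy to verify in MATLAB; a minimal sketch for a real symmetric (Hermitian) example follows:

% eigendecomposition of a Hermitian matrix and reconstruction
A = randn(4); A = A + A';   % random symmetric matrix, hypothetical
[V,D] = eig(A);             % A = V*D*V' with orthonormal V
norm(A - V*D*V')            % reconstruction error, close to zero
norm(V'*V - eye(4))         % V is orthonormal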

2.4 Singular Value Decomposition


A matrix A can be decomposed by the Singular Value Decomposition as

A_{n×m} = U_{n×n} D_{n×m} V^H_{m×m}

If the rank of the matrix A is r, then the r columns of U corresponding to the r non-zero singular values of A span the column space of A, and the r columns of V corresponding to the r non-zero singular values of A span the row space of A. D is the diagonal matrix of the singular values of A. If the rank of A is r, then A can be reconstructed as

A = U_{n×r} D_{r×r} V^H_{r×m}

If the rank of the matrix A is r, then the columns of V corresponding to the first r non-zero singular values of A span the row space of A, and the remaining columns of V span the null space of A. The columns of U corresponding to the first r non-zero singular values of A span the column space of A, and the rest of the columns of U span the null space of A^T.


Since the columns of A are linear combinations of the columns of U, A can be reproduced using U as A = UT, where T is the matrix of coefficients that gives A when multiplied with U. Now, if the rank of A is r, only the r columns of U that correspond to the non-zero singular values of A are enough to reconstruct A, as A = U_{n×r} T, where T is the transformation which, when multiplied with U_{n×r}, produces A.

Some important applications of the singular value decomposition are as follows.

2.4.1 Pseudoinverse
The pseudoinverse of a matrix A, where A is m×n (m > n) with full column rank, can be calculated using the singular value decomposition as follows.
A† = (A^T A)^{-1} A^T

Substituting A = UDV^T:

A† = ((UDV^T)^T UDV^T)^{-1} (UDV^T)^T
A† = (VD^T U^T UDV^T)^{-1} (UDV^T)^T
A† = (VD^T DV^T)^{-1} (VD^T U^T)
A† = (V(D^T D)^{-1} V^T)(VD^T U^T)
A† = V(D^T D)^{-1} V^T V D^T U^T
A† = V(D^T D)^{-1} D^T U^T
A† = V D† U^T

where D† inverts the non-zero singular values on the diagonal of D.
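The final expression can be verified against MATLAB's built-in pinv; the sketch below assumes a full-column-rank matrix so that every singular value is non-zero:

% pseudoinverse via the SVD, compared with pinv
A = randn(5,3);            % m > n, full column rank assumed
[U,D,V] = svd(A,'econ');   % economy-size SVD: A = U*D*V'
Apinv = V*(D\U');          % V*inv(D)*U', valid since D is square and invertible here
norm(Apinv - pinv(A))      % close to zero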

2.4.2 Least Squares Problem


The least squares problem can also be solved using the singular value decomposition. Let the problem be

Ax ≈ b

with residual

r = Ax - b

Substituting the SVD of A,

r = UDV^T x - b

U^T r = DV^T x - U^T b

We know that the magnitude of r should be approximately equal to zero, therefore


0 = U(DV^T x - U^T b)

0 = DV^T x - U^T b

Writing y = V^T x and b' = U^T b, this becomes 0 = Dy - b', so

y_i = b'_i / σ_i

if the singular value σ_i is not equal to zero; otherwise y_i takes an arbitrary value.

Now solve for x from the following equation:

x = Vy

which holds since V is orthogonal, i.e. V^{-1} = V^T.
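The same steps translate directly into MATLAB; the following minimal sketch assumes an overdetermined full-rank system and compares the SVD solution with the backslash operator:

% least squares solution of A*x = b via the SVD
A = randn(6,3); b = randn(6,1);   % hypothetical overdetermined system
[U,D,V] = svd(A,'econ');
y = (U'*b)./diag(D);              % y_i = b'_i / sigma_i (all sigma_i non-zero here)
x = V*y;
norm(x - A\b)                     % matches MATLAB's least squares solution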

2.4.3 Solution of Linear Systems


Linear equations of the form

Ax = b

can be solved using the singular value decomposition. The result is the same as that of the least squares problem; the only difference is that here Ax - b is taken exactly equal to zero:

Ax - b = 0


Probability and Statistics


Without going into the basic details of random variables, I will explain some important terminology of random variables and statistics.

3.1 Statistical Averages


Statistical averages or moments are evaluated using the mathematical expectation operator. Theoretically, the probability density function is needed to calculate the moments, but in practice we estimate the moments without knowledge of the probability density function.

3.1.1 Mathematical Expectation


The mean or expected value of a random variable x is given by

μ = E[x] = Σ_i x_i P(x_i)  (discrete case),   μ = E[x] = ∫ x f(x) dx  (continuous case)

where P(x_i) is the probability of x_i and f(x) is the probability density function of x. The mean is the “location” or “center of gravity” of x. If f(x) is an even function, the mean is equal to zero.

3.1.2 Moments
For a random variable x, the mth-order moment is given by

m_m = E[x^m] = ∫ x^m f(x) dx

In particular, m_0 = 1 and m_1 = μ. The second-order moment E[x^2] is called the mean-squared value. Moreover, E[x^2] = σ^2 + μ^2, where σ^2 is the variance.

3.1.3 Central Moments


The mth-order moment of (x - μ) is called the mth central moment of x:

μ_m = E[(x - μ)^m]


So, μ_0 = 1 and μ_1 = 0. If the mean is equal to zero, then the moments and the central moments are equal. A central moment of great importance is the variance, σ^2 = μ_2 = E[(x - μ)^2].

3.1.4 Skewness
Skewness is related to the third-order central moment. It describes the degree of asymmetry of a distribution around its mean and is given by the normalized third central moment,

γ_3 = E[(x - μ)^3] / σ^3

Figure 3.1: Skewness

Skewness is zero if the density function is symmetric about its mean. It is positive if the shape of the density function leans towards the right and negative if it leans towards the left.

3.1.5 Kurtosis
Kurtosis is related to the fourth-order central moment. It describes the relative peakedness or flatness of a distribution about its mean. It is given by

γ_4 = E[(x - μ)^4] / σ^4

Figure 3.2: Kurtosis


Kurtosis is a dimensionless quantity.
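Sample skewness and kurtosis can be estimated directly from data by replacing expectations with sample averages; a minimal sketch for Gaussian samples (for which skewness is approximately 0 and kurtosis approximately 3) follows:

% sample skewness and kurtosis from data
x = randn(1,10000);                % zero-mean Gaussian samples, hypothetical
mu = mean(x); sigma = std(x);
skew = mean((x - mu).^3)/sigma^3   % approximately 0 for a Gaussian
kurt = mean((x - mu).^4)/sigma^4   % approximately 3 for a Gaussian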

3.1.6 Characteristic Functions


The characteristic function of a random variable x is given by

Φ(ω) = E[e^{jωx}] = ∫ f(x) e^{jωx} dx

The moment generating function is given by

Φ(s) = E[e^{sx}]

Differentiating Φ(s) m times with respect to s and evaluating at s = 0 gives the mth-order moment, E[x^m] = d^m Φ(s)/ds^m |_{s=0}.

3.1.7 Cumulants
Cumulants provide useful information about the higher-order moments. They are derived by taking the natural logarithm of the moment generating function. The cumulant generating function is

ψ(s) = ln Φ(s)

The mth-order cumulant of a random variable x(ξ) is given by

κ_m = d^m ψ(s)/ds^m |_{s=0}

We can easily see that κ_1 = μ and κ_2 = σ^2. Moreover, κ_3 equals the third central moment μ_3, and κ_4 = μ_4 - 3σ^4.


3.2 Autocorrelation and Autocorrelation Matrix


For an unlimited amount of data, the autocorrelation sequence can theoretically be calculated by the following formula:

r_x(k) = E[x(n) x*(n - k)]

where x(n) is the input signal. Since the given data are limited, the autocorrelation sequence can only be estimated from a finite number of terms, as follows:

r_x(k) = (1/N) Σ_{n=0}^{N-1} x(n) x*(n - k)

This is called the biased autocorrelation estimate. To ensure that values of x(n) outside the interval [0, N-1] are not included in the sum, for computational convenience the above formula is written as

r_x(k) = (1/N) Σ_{n=k}^{N-1} x(n) x*(n - k),   k = 0, 1, ..., N-1

To obtain a better average, the sum can instead be divided by the number of terms actually added rather than by N; this modification gives the unbiased autocorrelation estimate:

r_x(k) = 1/(N - k) Σ_{n=k}^{N-1} x(n) x*(n - k)

In MATLAB, the biased and unbiased autocorrelations are calculated by rx = xcorr(x,'biased') and rx = xcorr(x,'unbiased') respectively, where x is the input sequence and rx is the resulting autocorrelation sequence. MATLAB also computes an autocorrelation sequence in which the sum is not scaled. This is given by rx = xcorr(x) and corresponds to the formula

r_x(k) = Σ_{n=k}^{N-1} x(n) x*(n - k)

Since the Fourier transform of the autocorrelation sequence gives the power spectral density, these estimates of the autocorrelation sequence can be used to estimate the power spectral density. For example, the spectral estimate of the periodogram is given by

P_per(e^{jω}) = (1/N) |Σ_{n=0}^{N-1} x(n) e^{-jωn}|^2

In MATLAB, [Pxx,w] = periodogram(x) computes the power spectral density estimate, which can be plotted with plot(w,Pxx).

Figure 3.3: Power spectral density estimate using the periodogram

Let us consider a random process x(n) and the vector

x(n) = [x(n) x(n-1) ... x(n-M+1)]^T

which is a vector of M values of the process x(n). Now, the outer product

x(n) x^H(n)

is an M×M matrix. The autocorrelation matrix is the expectation of the above matrix,

R_x = E[x(n) x^H(n)]

The biased, unbiased and unscaled estimates of the autocorrelation sequence can be used to estimate the autocorrelation matrix of any order. In MATLAB, [X Rx] = corrmtx(x,M-1) gives the M×M estimated autocorrelation matrix Rx of the input data x.
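A minimal sketch combining these estimators on hypothetical data:

% estimated autocorrelation sequence and matrix
x = randn(1,256);              % hypothetical input data
rx_b = xcorr(x,'biased');      % biased estimate (divide by N)
rx_u = xcorr(x,'unbiased');    % unbiased estimate (divide by N-|k|)
M = 4;
[X Rx] = corrmtx(x,M-1);       % M x M estimated autocorrelation matrix
disp(Rx)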


The autocorrelation matrix is always Toeplitz. In the presence of white noise with power σ_w^2,

R_x = R_s + σ_w^2 I

If the rank of R_s is r, then R_s can be reconstructed as

R_s = V_r D_r V_r^H

where V_r contains the first r eigenvectors, corresponding to the r non-zero eigenvalues. These r eigenvectors belong to the signal subspace; the rest of the eigenvectors of R_s, collected in V_n, belong to the noise subspace. We can write R_x as

R_x = V_r D_r V_r^H + σ_w^2 I

The projections onto the signal and noise subspaces are given respectively by

P_s = V_r V_r^H   and   P_n = V_n V_n^H

Since the two subspaces are orthogonal,

P_s P_n = 0,   P_s + P_n = I

Consider a data matrix X of order N×M such that

X = [x^T(0) x^T(1) x^T(2) ... x^T(N-1)]^T

Then R_x = (1/N) X^H X. Using the above concepts, the SVD of the data matrix,

X_{N×M} = U_{N×N} D_{N×M} V^H_{M×M}

gives the eigendecomposition of the estimated autocorrelation matrix,

R_x = (1/N) X^H X = V_{M×M} (D^H D / N) V^H_{M×M}

where the columns of V associated with the largest and with the remaining singular values correspond to the signal and noise subspaces respectively.
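The subspace split can be illustrated on a short synthetic signal; the sketch below (one complex exponential in noise, hypothetical parameters) checks that the two projections are orthogonal and complete:

% signal and noise subspace projections from an estimated autocorrelation matrix
n = 0:199;
x = exp(1i*2*pi*0.2*n) + 0.1*randn(size(n));   % one exponential in noise
M = 6; P = 1;
[X Rx] = corrmtx(x,M-1);                       % M x M autocorrelation estimate
[v d] = eig(Rx);
[y idx] = sort(diag(d),'descend'); v = v(:,idx); % largest eigenvalues first
Ps = v(:,1:P)*v(:,1:P)';                       % projection onto the signal subspace
Pn = v(:,P+1:M)*v(:,P+1:M)';                   % projection onto the noise subspace
norm(Ps*Pn)                                    % orthogonality: close to zero
norm(Ps + Pn - eye(M))                         % completeness: close to zero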


Frequency Estimation
4.1 Introduction
In array processing, a spatially propagating wave produces a complex exponential signal when measured across the uniformly spaced sensors of an array. The frequency of this complex exponential is determined by the angle of arrival of the propagating signal, so estimating the frequency amounts to estimating the direction. Therefore, the direction-of-arrival problem in array processing is actually a frequency estimation problem.

There are many methods of frequency estimation, among which my concern was mainly with parametric approaches. The MATLAB code for these techniques is given in Appendix A.

4.2 Pisarenko Harmonic Decomposition


This method relies on the eigendecomposition of the correlation matrix, which is decomposed into a signal subspace and a noise subspace. This method is the basis of the more advanced frequency estimation methods.

Consider a signal that consists of P complex exponentials in noise:

x(n) = Σ_{p=1}^{P} A_p e^{j2πf_p n} + w(n)

In the form of a length-M time window vector,

x(n) = Σ_{p=1}^{P} A_p e^{j2πf_p n} e(f_p) + w(n)

where

e(f) = [1 e^{j2πf} e^{j4πf} ... e^{j2π(M-1)f}]^T

The eigendecomposition of the correlation matrix of the above signal x(n) is estimated such that the order of the correlation matrix is M, where

M = P + 1


which means that the number of eigenvectors of the autocorrelation matrix is one greater than the number of complex exponentials. Thus, the noise subspace V_n consists of only one eigenvector v_min, corresponding to the minimum eigenvalue λ_min; the signal subspace consists of P eigenvectors. Since the signal and noise subspaces are orthogonal, each of the P complex exponentials in the time window signal vector model is orthogonal to the noise eigenvector:

v_min^H e(f_p) = 0

for all frequencies f_p of the complex exponentials.

Thus, making use of this property, we compute the pseudospectrum as

P_PHD(e^{j2πf}) = 1 / |v_min^H e(f)|^2

There are P peaks in the pseudospectrum. P_PHD is not a true power spectrum, but it gives a good estimate of the frequencies.

Figure 4.1: Pisarenko Harmonic Decomposition for frequency estimation of a signal containing
two exponentials.

This method has a limited practical use due to its sensitivity to noise.

4.3 MUSIC Algorithm


The Multiple Signal Classification (MUSIC) frequency estimation approach is an improved version of Pisarenko Harmonic Decomposition, in which the M-dimensional space is split into a signal subspace and a noise subspace using the eigendecomposition of the correlation matrix. But contrary to Pisarenko Harmonic Decomposition, the size of the time window is taken to be M > P + 1. Therefore, the dimension of the noise subspace is greater than one and is equal to M - P. Averaging over the noise subspace gives an improved frequency estimate.

For P < m ≤ M,

v_m^H e(f_p) = 0

for all frequencies f_p of the complex exponentials. Each of these equations has M - 1 roots, P of which correspond to the frequencies of the complex exponentials; the M - P noise eigenvectors share these roots. Spurious peaks are due to the remaining M - P - 1 roots of each eigenvector. These spurious peaks can be reduced by averaging, so the pseudospectrum of the MUSIC algorithm is given by

P_MUSIC(e^{j2πf}) = 1 / Σ_{m=P+1}^{M} |v_m^H e(f)|^2

MUSIC assumes that all noise eigenvalues have equal power σ_w^2, that is, the noise is white. However, in the case of an estimated correlation matrix these values will not be equal: the smaller the number of data samples from which the correlation matrix is estimated, the larger the spread of the noise eigenvalues.

Figure 4.2: Simulation of frequency estimation of a signal containing two exponentials using
MUSIC.

In MATLAB, [S,w] = pmusic(x,p) implements the MUSIC (Multiple Signal Classification) algorithm and returns S, the pseudospectrum estimate of the input signal x, and a vector w of normalized frequencies (in rad/sample) at which the pseudospectrum is evaluated; p is the dimension of the signal subspace.
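As a usage illustration, the following minimal sketch (with hypothetical signal parameters) estimates the MUSIC pseudospectrum of two complex exponentials in noise:

% MUSIC pseudospectrum via pmusic (assumed two-exponential test signal)
n = 0:999;
x = exp(1i*2*pi*0.1*n) + exp(1i*2*pi*0.3*n) + 0.5*randn(size(n));
[S,w] = pmusic(x,2);          % p = 2: dimension of the signal subspace
plot(w/pi,10*log10(S)); grid;
xlabel('Normalized Frequency (\times\pi rad/sample)'); ylabel('Pseudospectrum (dB)');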

4.4 Eigenvector Method


This method is an improvement of the MUSIC algorithm. For this method, the pseudospectrum is

P_EV(e^{j2πf}) = 1 / Σ_{m=P+1}^{M} (1/λ_m) |v_m^H e(f)|^2

In this case, each term of the pseudospectrum is normalized by its corresponding eigenvalue λ_m.

This algorithm is the same as MUSIC in the case of white noise, that is, equal noise eigenvalues σ_w^2.

Figure 4.3: Eigenvector method of frequency estimation of a signal containing two exponentials. The left plot shows the eigenvalues and the right plot shows the pseudospectrum.

In MATLAB, [S,w] = peig(x,p) implements the eigenvector spectral estimation method and returns S, the pseudospectrum estimate of the input signal x, and w, a vector of normalized frequencies (in rad/sample) at which the pseudospectrum is evaluated; p is the dimension of the signal subspace.

4.5 Comparison
Pisarenko Harmonic Decomposition is the basic technique and is not very effective when the noise increases. The MUSIC algorithm is a good approach, but it sometimes gives spurious peaks in the pseudospectrum. These spurious peaks are due to the roots of the noise eigenvectors that do not correspond to the frequencies of the signal. The Eigenvector method is a better approach than the MUSIC algorithm because it tries to resolve the problem of spurious peaks by a weighted average of the pseudospectrum.

Below is a comparison of these three techniques when four exponentials are used. We can clearly see that the peaks in the pseudospectrum of Pisarenko Harmonic Decomposition are not exact, whereas the peaks of MUSIC and the Eigenvector method are exact. MUSIC also gives extra small peaks, whereas the pseudospectrum of the Eigenvector method is quite smooth where no frequency component of the input signal is present. Moreover, the Eigenvector method is more reliable due to its sharp peaks.

Figure 4.4: Comparison of Pisarenko Harmonic Decomposition, MUSIC and Eigenvector


method using a signal with four exponentials.


Future Work

In this semester, linear algebra and statistical signal processing were studied, which was essential for the literature survey of the project. Eigenvalue decomposition and singular value decomposition of the covariance matrix and the signal matrix, distinguishing the signal and noise subspaces from the given data, and spectrum sensing techniques have been the major focus up to now. Future work includes the understanding and implementation of various direction-of-arrival techniques and their improvement. Future work also involves hardware implementation of the modified techniques.


Appendix A

MATLAB Code for different Frequency Estimation Techniques:


% This function plots different frequency estimation techniques based on
% eigenvector methods. P is the dimension of the signal subspace; its
% default value is 1. M controls the order of the autocorrelation matrix;
% its default value is P+4.
function [] = f_estimation(x,P,M)
if (nargin < 2)
P = 1;
end
if (nargin < 3)
M = P + 4;
end
% Pisarenko Harmonic Decomposition
% finding the ((P+1)x(P+1)) estimated autocorrelation matrix of the input data
[d Rx] = corrmtx(x,P);
[v d] = eig(Rx); % eigenvalue decomposition of the estimated matrix
[y i] = sort(diag(d)); % sorting eigenvalues in increasing order
v = v(:,i); % sorting eigenvectors to match the sorted eigenvalues

V = abs(fft(v(:,1),256)).^2; % squared-magnitude FFT of the single noise eigenvector


subplot(3,1,1);
plot(0:1/256:1-1/256,-db(V)); % plotting pseudospectrum, P = db(1/V);
xlabel('Normalized Frequency'); ylabel('Pseudospectrum(db)');
title('Pisarenko Harmonic decomposition'); grid;

% MUSIC
% finding the ((M+1)x(M+1)) estimated autocorrelation matrix of the input data
[d Rx] = corrmtx(x,M);
[v d] = eig(Rx); % eigenvalue decomposition of the estimated matrix
[y i] = sort(diag(d)); % sorting eigenvalues in increasing order
v = v(:,i); % sorting eigenvectors to match the sorted eigenvalues

V = zeros(256,1);
for j = 1 : M - P + 1
V = V + abs(fft(v(:,j),256)).^2; % sum |v_m' e(f)|^2 over the noise eigenvectors
end

subplot(3,1,2);
plot(0:1/256:1-1/256,-db(V)); % plotting pseudospectrum, P = db(1/V);

xlabel('Normalized Frequency'); ylabel('Pseudospectrum(db)');


title('MUSIC'); grid;

% Eigenvector Method
% plot(y,'--rs'); title('Eigenvalues of the correlation matrix');
% grid; xlabel('Eigenvalue Number'); ylabel('Magnitude of Eigenvalue (dB)');

V = zeros(256,1);
for j = 1 : M - P + 1
V = V + abs(fft(v(:,j),256)).^2./y(j); % noise eigenvector terms weighted by 1/eigenvalue
end

subplot(3,1,3);
plot(0:1/256:1-1/256,-db(V)); % plotting pseudospectrum, P = db(1/V);
xlabel('Normalized Frequency'); ylabel('Pseudospectrum(db)');
title('Eigenvector method'); grid;

Example Execution:

clc; clear;

P = 4; % number of frequencies in the signal

f1 = 50; f2 = 100; f3 = 150; f4 = 200; % frequencies in the signal
Fs = 400; % sampling frequency of the signal
n = 0 : 1000;
% discrete-time signal
s = exp(2*pi*i*f1/Fs*n) + exp(2*pi*i*f2/Fs*n) + exp(2*pi*i*f3/Fs*n) + ...
exp(2*pi*i*f4/Fs*n);
x = s + 0.5 * (rand(1,length(s)) + i*rand(1,length(s))); % received signal

f_estimation(x,P,P+9);

The result of the above code is shown in Figure 4.4.
