Abstract
There are many methods and techniques of Direction-of-Arrival estimation, including classical methods, practical DF methods, higher order statistical methods, and many others. The goal of this project is to investigate various direction finding algorithms and improve their accuracy, with special coverage of methods based on higher order statistics.
1 Abstract |
Direction-of-Arrival Estimation
Introduction
Propagating fields are often sensed by an array of sensors or transducers arranged in a particular configuration. These sensors convert the received signal into electrical signals. In the case of acoustic waves, these sensors are microphones. Different configurations of antenna arrays are used, among which the more popular are uniform linear arrays, uniform planar arrays, and circular arrays.
Keeping the sensor spacing at or below half the signal wavelength prevents spatial ambiguities (spatial aliasing). Lowering the array spacing below this upper limit provides redundant information but, on the other hand, reduces the aperture for a fixed number of sensors.
Thus, the direction of arrival can be estimated if we know the sensor spacing, the propagation velocity of the signal, and the time delay of the signal between sensors. In the ideal situation, when there is no noise and no multipath propagation, the time delay can be found directly from the data received at the arrays. In this case, the angle of arrival of the wave can be obtained from

theta = sin^-1(v * tau / d)

where d is the sensor spacing, v is the propagation velocity, and tau is the measured time delay.
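To make the relation concrete, here is a minimal sketch (in Python/NumPy rather than the MATLAB used elsewhere in this report; spacing, velocity, and delay values are purely illustrative) of recovering the angle of arrival from the delay model tau = d*sin(theta)/v:

```python
import numpy as np

# Illustrative (assumed) values: 0.1 m sensor spacing, sound speed 343 m/s,
# and a measured inter-sensor delay of 0.15 ms.
d = 0.1        # sensor spacing (m)
v = 343.0      # propagation velocity (m/s)
tau = 1.5e-4   # time delay between adjacent sensors (s)

# Delay model: tau = d * sin(theta) / v  =>  theta = arcsin(v * tau / d)
theta_deg = np.degrees(np.arcsin(v * tau / d))
print(theta_deg)  # roughly 31 degrees for these values
```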
Before going into the details of signal modeling and advanced direction-of-arrival techniques, it is necessary to have an idea of the covariance matrix and its decomposition into signal and noise subspaces.
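As a quick illustration of that decomposition (a Python/NumPy sketch with made-up array parameters, not part of the original report): the eigenvalues of a sample covariance matrix from an M-sensor array carrying P sources split into P large "signal" eigenvalues and M - P small "noise" eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(0)
M, P, N = 6, 2, 2000    # sensors, sources, snapshots (illustrative values)

# Two hypothetical far-field sources on a half-wavelength uniform linear array
angles = np.deg2rad([10.0, 40.0])
A = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(angles)))   # steering matrix

S = rng.standard_normal((P, N)) + 1j * rng.standard_normal((P, N))  # source signals
W = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))  # noise
X = A @ S + W

R = X @ X.conj().T / N        # sample covariance matrix (M x M)
lam = np.linalg.eigvalsh(R)   # ascending: M-P small (noise) then P large (signal)
print(lam)
```

The gap between the smallest P+1st eigenvalue and the largest M-P ones is what the subspace methods in Chapter 4 exploit.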
Linear Algebra
2.1 Vector Spaces
A vector space is a nonempty set V of vectors on which addition and multiplication by scalars (real numbers) are defined, subject to the usual axioms (closure, associativity, commutativity, existence of a zero vector and additive inverses, and distributivity of scalar multiplication).
Figure 2.1: Vector space demonstration (v, w, v+w, 2w and v+2w, all belong to vector space V)
2.1.1 Subspaces
A subspace is also a vector space; it consists of a subset of vectors from a larger vector space that is itself closed under addition and scalar multiplication.
The row space of a matrix is the set of all possible linear combinations of its row vectors.
For three matrices A, B and C, if C = AB, then the column space of C is a subspace of the column space of A, and the row space of C is a subspace of the row space of B.
The rank of a matrix is the dimension of its column space (equivalently, of its row space).
If a matrix A is of order m x n and B is of order n x k, and B is a full-rank matrix whose rank is n, then the rank of AB equals the rank of A. Similarly, if a matrix C is of order l x m and C is full rank with rank m, then the rank of CA equals the rank of A.
Let A = VDV^-1 be the eigendecomposition of an n x n matrix A. If A is full rank, then V is the matrix whose columns span the column space of A; the columns of V are called eigenvectors of A. The matrix D contains the eigenvalues of A on its diagonal. The rows of V^-1 span the row space of A.
If the rank of A is r (< n), then A will have only r non-zero eigenvalues. The r eigenvectors in V corresponding to the r non-zero eigenvalues of A span the column space of A. The remaining n - r eigenvectors correspond to zero eigenvalues and come from the null space of A.
If the rank of A is r (< n), then A can be reconstructed using only the non-zero eigenvalues and their corresponding eigenvectors. Mathematically,

A = V_r D_r W_r

where V_r contains the r eigenvectors with non-zero eigenvalues, D_r is the r x r diagonal matrix of those eigenvalues, and W_r contains the corresponding r rows of V^-1.
Here, the columns of V span the column space of A. In MATLAB, [V,D] = eig(A) produces the eigenvector matrix V and the eigenvalue matrix D of A.
When A is Hermitian, V is a set of orthonormal eigenvectors. For any unitary matrix V, its inverse is V^H, which satisfies the above decomposition as A = VDV^H.
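A small numerical check of these statements (Python/NumPy here in place of MATLAB's eig; the rank-1 matrix is a made-up example):

```python
import numpy as np

# Rank-1 symmetric 3x3 matrix: only one non-zero eigenvalue is expected.
a = np.array([1.0, 2.0, 3.0])
A = np.outer(a, a)                      # rank 1

lam, V = np.linalg.eigh(A)              # eigenvalues ascending, V orthonormal
r = int(np.sum(np.abs(lam) > 1e-10))    # numerical rank from the eigenvalues

# Reconstruct A from only the r eigenpairs with non-zero eigenvalues
Vr = V[:, -r:]
Dr = np.diag(lam[-r:])
A_rec = Vr @ Dr @ Vr.T

print(r)                      # 1
print(np.allclose(A, A_rec))  # True
```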
Since the columns of A are linear combinations of the columns of V, A can be reproduced from V as A = VT, where T is the transformation matrix which gives A when multiplied with the eigenvector matrix V. Now, if the order of A is n x n and the rank of A is r (< n), the r eigenvectors corresponding to the non-zero eigenvalues of A are enough to reconstruct A as A = V_{n x r} T, where T is the transformation which produces A when multiplied with V_{n x r}. If A is Hermitian and only the matrix V_r is known, the product V_r V_r^H gives the orthogonal projection onto the column space of A.
2.4 Singular Value Decomposition
Let A = UDV^T be the singular value decomposition of A. If the rank of A is r, then the columns of V corresponding to the first r non-zero singular values of A span the row space of A, and the remaining columns of V span the null space of A. The columns of U corresponding to the first r non-zero singular values of A span the column space of A, and the remaining columns of U span the null space of A^T.
Since the columns of A are linear combinations of the columns of U, A can be reproduced from U as A = UT, where T is the transformation matrix which gives A when multiplied with U. Now, if the rank of A is r, the r left singular vectors corresponding to the non-zero singular values of A are enough to reconstruct A as A = U_{n x r} T, where T is the transformation which produces A when multiplied with U_{n x r}.
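The same reconstruction idea can be checked with the SVD (a NumPy sketch on a made-up rank-2 matrix):

```python
import numpy as np

rng = np.random.default_rng(1)
# Build a 5x4 matrix of rank 2 as the product of 5x2 and 2x4 factors
A = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 4))

U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))              # numerical rank from singular values

# Keep only the r left singular vectors: A = U_r @ T with T = diag(s_r) @ Vt_r
T = np.diag(s[:r]) @ Vt[:r, :]
A_rec = U[:, :r] @ T

print(r)                      # 2
print(np.allclose(A, A_rec))  # True
```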
2.4.1 Pseudoinverse
The pseudoinverse of a matrix A of order m x n (m > n, full column rank) can be calculated from the singular value decomposition A = UDV^T as follows:

A^+ = (A^T A)^-1 A^T
    = ((UDV^T)^T (UDV^T))^-1 (UDV^T)^T
    = (V D^T U^T U D V^T)^-1 (V D^T U^T)
    = (V D^T D V^T)^-1 (V D^T U^T)
    = (V (D^T D)^-1 V^T) (V D^T U^T)
    = V D^-1 (D^T)^-1 V^T V D^T U^T
    = V D^-1 (D^T)^-1 D^T U^T
    = V D^-1 U^T
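The final expression A^+ = V D^-1 U^T can be verified numerically (NumPy sketch; a random matrix with full column rank is assumed):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 3))   # m > n, full column rank (with probability 1)

# Thin SVD: A = U D V^T with D square (3x3) and invertible here
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_pinv = Vt.T @ np.diag(1.0 / s) @ U.T   # V D^-1 U^T

print(np.allclose(A_pinv, np.linalg.pinv(A)))  # True
```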
The least-squares problem Ax = b can also be solved through the SVD. Writing the residual as r = Ax - b and substituting A = UDV^T:

r = Ax - b
r = U D V^T x - b
U^T r = D V^T x - U^T b

Setting the residual to zero,

0 = D V^T x - U^T b

With y = V^T x and b' = U^T b this becomes 0 = Dy - b', so

y_i = b'_i / sigma_i

if the singular value sigma_i is not equal to zero; otherwise y_i has an arbitrary value. Since (V^T)^-1 = V, the solution is recovered as x = Vy.

The exact system

Ax - b = 0

can be solved using the singular value decomposition in the same way. The result is the same as that of the least-squares problem; the only difference is that here Ax - b is taken exactly equal to zero.
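The SVD route to the least-squares solution can be checked against a standard solver (NumPy sketch; a random well-conditioned system is assumed):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((8, 3))
b = rng.standard_normal(8)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
b_prime = U.T @ b            # b' = U^T b
y = b_prime / s              # y_i = b'_i / sigma_i (all sigma_i non-zero here)
x = Vt.T @ y                 # x = V y

# Should match the standard least-squares solver
x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x, x_ref))  # True
```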
3.1.2 Moments
If x is a random variable, then its mth-order moment is given by m_m = E[x^m]. In particular, m_0 = 1 and m_1 = E[x] is the mean. The second-order moment is called the mean-squared value and is given by m_2 = E[x^2]. Moreover, the mth-order central moment is defined as mu_m = E[(x - m_1)^m]. So, mu_0 = 1 and mu_1 = 0. If the mean is equal to zero, then the moments and central moments are equal.
A central moment of great importance is the variance, sigma^2 = mu_2 = E[(x - m_1)^2].
3.1.4 Skewness
Skewness is related to the third-order central moment. It describes the degree of asymmetry of a distribution around its mean and is given by

gamma_3 = mu_3 / sigma^3

Skewness is zero if the density function is symmetric about its mean. It is positive if the shape of the density function leans towards the right and negative if it leans towards the left.
3.1.5 Kurtosis
Kurtosis is related to the fourth-order central moment. It describes the relative peakedness or flatness of a distribution about its mean. It is given by

gamma_4 = mu_4 / sigma^4
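The two shape measures can be estimated directly from samples (NumPy sketch; the exponential distribution is used only as a convenient skewed example):

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.exponential(scale=1.0, size=200_000)   # a right-skewed distribution

mu = x.mean()
sigma = x.std()
skewness = np.mean((x - mu) ** 3) / sigma ** 3   # third central moment / sigma^3
kurtosis = np.mean((x - mu) ** 4) / sigma ** 4   # fourth central moment / sigma^4

# Theoretical values for the exponential distribution are 2 and 9
print(skewness, kurtosis)
```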
3.1.7 Cumulants
Cumulants provide a convenient description of higher order moments. They are derived from the natural logarithm of the moment generating function. The cumulant generating function is as follows:

C(t) = ln( E[e^(tx)] )
The autocorrelation sequence is defined as r_x(k) = E[x(n) x*(n-k)], where x(n) is the input signal. Since the given data is limited, the autocorrelation sequence can only be calculated by a finite number of sums, as follows:

r_x(k) = (1/N) * sum_{n=0}^{N-1} x(n) x*(n-k)

This is called the biased autocorrelation. To ensure that values of x(n) outside the interval [0, N-1] are not included in the sum, the formula of the biased autocorrelation is changed as follows:

r_x(k) = (1/N) * sum_{n=k}^{N-1} x(n) x*(n-k)

To get a better average, the sum is divided by the number of terms actually added instead of N; this modification gives the formula of the unbiased autocorrelation:

r_x(k) = (1/(N-k)) * sum_{n=k}^{N-1} x(n) x*(n-k)
Since the Fourier transform of the autocorrelation sequence gives the power spectral density, these estimates of the autocorrelation sequence can be used to estimate the power spectral density. For example, the spectral estimate given by the periodogram is

P_x(e^jw) = sum_k r_x(k) e^(-jwk) = (1/N) |X(e^jw)|^2

These biased, unbiased, and unscaled estimates of the autocorrelation sequence can be used to estimate the autocorrelation matrix of any order. In MATLAB, [X,Rx] = corrmtx(x,M-1) gives the M x M autocorrelation matrix Rx of the input data x.
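A direct NumPy sketch of the biased and unbiased estimators (illustrative signal; corrmtx itself is not reproduced here):

```python
import numpy as np

def autocorr(x, maxlag, unbiased=False):
    """Biased (divide by N) or unbiased (divide by N-k) autocorrelation estimate."""
    N = len(x)
    r = np.empty(maxlag + 1, dtype=complex)
    for k in range(maxlag + 1):
        acc = np.sum(x[k:N] * np.conj(x[0:N - k]))   # only terms inside [0, N-1]
        r[k] = acc / (N - k if unbiased else N)
    return r

rng = np.random.default_rng(5)
n = np.arange(4096)
x = np.exp(2j * np.pi * 0.1 * n) + 0.1 * rng.standard_normal(4096)

r_b = autocorr(x, 8)
r_u = autocorr(x, 8, unbiased=True)
print(abs(r_b[0]))   # close to the signal power (about 1.01 here)
```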
[Diagram: the SVD of the N x M data matrix X (factors U, D, V) and the EVD of the M x M correlation matrix, with the dimensions of each factor.]
Frequency Estimation
4.1 Introduction
In array processing, a spatially propagating wave produces a complex exponential signal as measured across the uniformly spaced sensors of an array. The spatial frequency of this complex exponential is determined by the angle of arrival of the propagating signal. Therefore, the direction-of-arrival problem in array processing is actually a frequency estimation problem.
There are many methods of frequency estimation, among which my focus was on parametric approaches. The MATLAB code for these techniques is given in Appendix A.
4.2 Pisarenko Harmonic Decomposition
The signal is modeled as a sum of P complex exponentials in white noise:

x(n) = s(n) + w(n)
s(n) = sum_{p=1}^{P} A_p e^(j n w_p)

where A_p are the complex amplitudes and w_p the frequencies of the exponentials. The eigendecomposition of the correlation matrix of x(n) is estimated such that the order of the correlation matrix is M, where

M = P + 1
which means that the number of eigenvectors of the autocorrelation matrix is one greater than the number of complex exponentials. Thus, the noise subspace V_n consists of only one eigenvector v_min, corresponding to the minimum eigenvalue sigma_w^2. The signal subspace consists of P eigenvectors. Since the signal and noise subspaces are orthogonal, each of the P complex exponentials in the time-window signal vector model is orthogonal to v_min, and the pseudospectrum is formed as

P(e^jw) = 1 / |e^H(w) v_min|^2

where e(w) = [1, e^jw, ..., e^(j(M-1)w)]^T. There are P peaks in the pseudospectrum. P(e^jw) is not a true power spectrum, but it gives a good estimate of the frequencies.
Figure 4.1: Pisarenko Harmonic Decomposition for frequency estimation of a signal containing
two exponentials.
This method has a limited practical use due to its sensitivity to noise.
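A compact NumPy sketch of the procedure (synthetic two-exponential signal with assumed frequencies 0.1 and 0.25 cycles/sample; the frequencies are read off from the roots of the noise eigenvector, which coincide with the pseudospectrum peaks):

```python
import numpy as np

rng = np.random.default_rng(6)
P = 2
n = np.arange(512)
x = np.exp(2j*np.pi*0.1*n) + np.exp(2j*np.pi*0.25*n)   # two complex exponentials
x = x + 0.05 * (rng.standard_normal(512) + 1j*rng.standard_normal(512))

M = P + 1                                  # Pisarenko: correlation matrix order P + 1
snaps = np.array([x[i:i + M] for i in range(512 - M)])
R = snaps.T @ snaps.conj() / len(snaps)    # estimated M x M correlation matrix

lam, V = np.linalg.eigh(R)                 # eigenvalues in ascending order
v_min = V[:, 0]                            # the single noise eigenvector

# The exponentials' frequencies are the angles of the roots of the
# noise eigenfilter formed from v_min.
roots = np.roots(v_min)
est = np.sort(np.angle(roots) / (2 * np.pi) % 1.0)
print(est)  # close to [0.1, 0.25]
```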
4.3 MUSIC
In the MUSIC (Multiple Signal Classification) algorithm, the order of the correlation matrix is chosen as M > P + 1, so that the dimension of the noise subspace is greater than one and is equal to M - P. Averaging over the noise subspace gives improved frequency estimation.

For P < m <= M,

e^H(w_p) v_m = 0

for all frequencies w_p of the complex exponentials. The above equation has M - 1 roots, P of which correspond to the frequencies of the complex exponentials. The M - P noise eigenvectors share these roots; spurious peaks are due to the rest of the M - P - 1 roots. These spurious peaks can be reduced by averaging, so the pseudospectrum of the MUSIC algorithm is given by

P_MUSIC(e^jw) = 1 / sum_{m=P+1}^{M} |e^H(w) v_m|^2

MUSIC assumes that all noise eigenvalues have equal power sigma_w^2, that is, that the noise is white. However, in the case of an estimated correlation matrix, these values will not be equal: the smaller the number of data samples from which the correlation matrix is estimated, the larger the difference between the noise eigenvalues.
Figure 4.2: Simulation of frequency estimation of a signal containing two exponentials using
MUSIC.
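A NumPy sketch of the MUSIC pseudospectrum (synthetic signal with assumed frequencies 0.1 and 0.25 cycles/sample; M and the noise level are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(7)
P, M, N = 2, 8, 512                      # exponentials, matrix order (M > P + 1), samples
n = np.arange(N)
x = np.exp(2j*np.pi*0.1*n) + np.exp(2j*np.pi*0.25*n)
x = x + 0.5 * (rng.standard_normal(N) + 1j*rng.standard_normal(N))

snaps = np.array([x[i:i + M] for i in range(N - M)])
R = snaps.T @ snaps.conj() / len(snaps)  # estimated M x M correlation matrix

lam, V = np.linalg.eigh(R)               # ascending eigenvalues
En = V[:, :M - P]                        # the M - P noise eigenvectors

# Pseudospectrum averaged over the whole noise subspace
grid = np.linspace(0.0, 0.5, 1024, endpoint=False)
E = np.exp(2j*np.pi*np.outer(np.arange(M), grid))       # steering vectors e(f)
music = 1.0 / np.sum(np.abs(En.conj().T @ E)**2, axis=0)

peak = grid[np.argmax(music)]
print(peak)  # falls on (or next to) one of the true frequencies
```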
In MATLAB, [S,w] = pmusic(x,p) implements the MUSIC spectral estimation method and returns S, the pseudospectrum estimate of the input signal x, and w, a vector of normalized frequencies (in rad/sample) at which the pseudospectrum is evaluated; p is the dimension of the signal subspace.

4.4 Eigenvector Method
The eigenvector method weights the contribution of each noise eigenvector by the inverse of its eigenvalue, so its pseudospectrum is given by

P_EV(e^jw) = 1 / sum_{m=P+1}^{M} (1/lambda_m) |e^H(w) v_m|^2

This algorithm is the same as MUSIC in the case of white noise, that is, equal noise eigenvalues sigma_w^2.
In MATLAB, [S,w] = peig(x,p) implements the eigenvector spectral estimation method and returns S, the pseudospectrum estimate of the input signal x, and w, a vector of normalized frequencies (in rad/sample) at which the pseudospectrum is evaluated; p is the dimension of the signal subspace.
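The eigenvalue weighting can be sketched in NumPy as well (same synthetic signal assumptions as the MUSIC sketch; only the denominator changes):

```python
import numpy as np

rng = np.random.default_rng(8)
P, M, N = 2, 8, 512
n = np.arange(N)
x = np.exp(2j*np.pi*0.1*n) + np.exp(2j*np.pi*0.25*n)
x = x + 0.5 * (rng.standard_normal(N) + 1j*rng.standard_normal(N))

snaps = np.array([x[i:i + M] for i in range(N - M)])
R = snaps.T @ snaps.conj() / len(snaps)

lam, V = np.linalg.eigh(R)               # ascending eigenvalues
En, lam_n = V[:, :M - P], lam[:M - P]    # noise eigenvectors and their eigenvalues

grid = np.linspace(0.0, 0.5, 1024, endpoint=False)
E = np.exp(2j*np.pi*np.outer(np.arange(M), grid))

# Eigenvector method: weight each noise eigenvector's term by 1/lambda_m
ev = 1.0 / np.sum(np.abs(En.conj().T @ E)**2 / lam_n[:, None], axis=0)
peak = grid[np.argmax(ev)]
print(peak)  # near 0.1 or 0.25
```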
4.5 Comparison
Pisarenko Harmonic Decomposition is the most basic of these techniques and does not perform well as the noise increases. The MUSIC algorithm is a good approach, but it sometimes gives spurious peaks in the pseudospectrum. These spurious peaks are due to the roots of the noise eigenvectors that do not correspond to the actual signal frequencies.
Below is the diagram comparing these three techniques when four exponentials are used. We can clearly see that the peaks in the pseudospectrum of Pisarenko Harmonic Decomposition are not exact, whereas the peaks of MUSIC and the Eigenvector method are exact. MUSIC also gives extra small peaks, whereas the pseudospectrum of the Eigenvector method is quite smooth where no frequency component of the input signal is present. Moreover, the Eigenvector method is more reliable due to its sharp peaks.
Future Work
In this semester, linear algebra and statistical signal processing were studied, which was essential for the literature survey of the project. Eigenvalue decomposition and singular value decomposition of the covariance matrix and signal matrix, distinguishing the signal and noise subspaces from the given data, and spectrum sensing techniques were the major focus up to now. Future work includes the understanding and implementation of various direction-of-arrival techniques and their improvement. Future work also involves the hardware implementation of the modified techniques.
Appendix A
% MUSIC
% Finding the ((M+1)x(M+1)) estimated autocorrelation matrix of the input data
[d Rx] = corrmtx(x,M);
[v d] = eig(Rx); % eigenvalue decomposition of the estimated correlation matrix
[y i] = sort(diag(d)); % sorting eigenvalues in ascending order
v = v(:,i); % sorting eigenvectors to match the sorted eigenvalues
V = zeros(256,1);
for j = 1 : M - P + 1
V = V + abs(fft(v(:,j),256)).^2; % accumulating |FFT|^2 of each noise eigenvector
end
subplot(3,1,2);
plot(0:1/256:1-1/256,-db(V)); % plotting pseudospectrum, P = db(1/V);
% Eigenvector Method
subplot(3,1,2)
% plot((y),'--rs'); title('Eigenvalues of the correlation matrix');
% grid; xlabel('Eigen Value Number'); ylabel('Magnitude of Eigen Value (db)');
V = zeros(256,1);
for j = 1 : M - P + 1
V = V + abs(fft(v(:,j),256)).^2 ./ y(j); % weighting each noise eigenvector by 1/eigenvalue
end
subplot(3,1,3);
plot(0:1/256:1-1/256,-db(V)); % plotting pseudospectrum, P = db(1/V);
xlabel('Normalized Frequency'); ylabel('Pseudospectrum(db)');
title('Eigenvector method'); grid;
Example Execution:
clc; clear all;
%
P = 4; % No. of frequencies in the signal
%
f1 = 50; f2 = 100; f3 = 150; f4 = 200; %frequencies in the signal
Fs = 400; %sampling frequency of the signal
n = 0 : 1000;
% discrete time signal is
s = exp(2*pi*i*f1/Fs*n) + exp(2*pi*i*f2/Fs*n) + exp(2*pi*i*f3/Fs*n) + exp(2*pi*i*f4/Fs*n);
x = s + 0.5 * (rand(1,length(s)) + i*rand(1,length(s))); % received signal
f_estimation(x,P,P+9);