$\sigma_w^2$. In (2), the vector $\mathbf{h}(n) \in \mathbb{C}^{K \times 1}$ typically represents the propagation channel between the PU and the $K$ collaborating SUs, and the signal $s(n)$ denotes a standard scalar i.i.d. circular complex Gaussian process w.r.t. the samples $n = 1, 2, \ldots, N$; it stands for the source signal to be detected, with zero mean, $\mathbb{E}\{s(n)\} = 0$, and unit variance, $\mathbb{E}\{|s(n)|^2\} = \sigma_s^2 = 1$. We stack the observed data into the $K \times N$ data matrix $\mathbf{Y}$, which may be expressed as
$$\mathbf{Y} = \begin{bmatrix} y_1(1) & y_1(2) & \cdots & y_1(N) \\ y_2(1) & y_2(2) & \cdots & y_2(N) \\ \vdots & \vdots & \ddots & \vdots \\ y_K(1) & y_K(2) & \cdots & y_K(N) \end{bmatrix} \quad (3)$$
As the sample number $N \to \infty$, the sample covariance matrix, $\mathbf{R} = \frac{1}{N}\mathbf{Y}\mathbf{Y}^H$, converges to $\mathbb{E}\{\mathbf{y}\mathbf{y}^H\}$. We also define the noise-normalized matrix $\tilde{\mathbf{R}} = \frac{1}{\sigma_w^2}\mathbf{Y}\mathbf{Y}^H$. Under the hypothesis $\mathcal{H}_0$, $\tilde{\mathbf{R}}$ is a complex white Wishart matrix distributed as $\mathcal{CW}_K(N, \mathbf{I}_K)$, where $\mathbf{I}_K$ is the $K \times K$ identity matrix, while under the hypothesis $\mathcal{H}_1$ it belongs to the class of spiked population models [8].
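To make the model concrete, the following Python sketch generates the stacked data matrix $\mathbf{Y}$ of (3) and forms the sample covariance $\mathbf{R} = \frac{1}{N}\mathbf{Y}\mathbf{Y}^H$. It is an illustration only: the unit noise variance, the amplitude parameterization of the SNR, and a channel held fixed over the sensing window are our assumptions, not taken from the text.

```python
import math
import random

def sample_covariance(K=4, N=200, snr_db=None, seed=1):
    """Generate K x N observations y_k(n) and return the sample
    covariance R = Y Y^H / N as a list of lists of complex.
    snr_db=None simulates H0 (noise only); otherwise a rank-one
    PU signal h * s(n) is added (H1)."""
    rng = random.Random(seed)
    # unit-variance circular complex Gaussian sample
    cg = lambda: complex(rng.gauss(0, math.sqrt(0.5)),
                         rng.gauss(0, math.sqrt(0.5)))
    h = [cg() for _ in range(K)] if snr_db is not None else None
    amp = 10 ** (snr_db / 20) if snr_db is not None else 0.0
    Y = [[0j] * N for _ in range(K)]
    for n in range(N):
        s = cg()  # source sample: zero mean, unit variance
        for k in range(K):
            Y[k][n] = (amp * h[k] * s if h else 0j) + cg()  # sigma_w^2 = 1
    return [[sum(Y[i][n] * Y[j][n].conjugate() for n in range(N)) / N
             for j in range(K)] for i in range(K)]
```

Under $\mathcal{H}_0$ the diagonal entries of $\mathbf{R}$ converge to $\sigma_w^2 = 1$ as $N$ grows, and $\mathbf{R}$ is Hermitian by construction.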
III. GEOMETRIC MEAN DETECTOR - GEMD
Suppose the ordered eigenvalues of $\mathbf{R}$ and $\tilde{\mathbf{R}}$ are $0 < \lambda_K < \lambda_{K-1} < \cdots < \lambda_1$ and $0 < \tilde{\lambda}_K < \tilde{\lambda}_{K-1} < \cdots < \tilde{\lambda}_1$, respectively, with the relationship

$$\lambda_k = \frac{\sigma_w^2}{N}\,\tilde{\lambda}_k, \quad (4)$$

where $k = 1, 2, \ldots, K$.
Let us define the test statistic for the GEMD as

$$T \triangleq \frac{\lambda_1}{\left(\prod_{k=1}^{K}\lambda_k\right)^{1/K}} = \frac{\tilde{\lambda}_1}{\left(\prod_{k=1}^{K}\tilde{\lambda}_k\right)^{1/K}} = \frac{\lambda_1}{\xi}, \quad (5)$$

where $\xi$ is the geometric mean of the eigenvalues. We also adopt the decision rule

$$T \underset{\mathcal{H}_0}{\overset{\mathcal{H}_1}{\gtrless}} \gamma \quad (6)$$

to decide whether the target spectrum resource is occupied or not. The notation in (6) stands for the test function which rejects the null hypothesis when $T > \gamma$.
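The statistic in (5)-(6) is straightforward to compute once the eigenvalues are available; a minimal sketch follows. Because the ratio is scale-invariant, the eigenvalues of $\mathbf{R}$ and of the normalized matrix $\tilde{\mathbf{R}}$ give the same value of $T$.

```python
import math

def gemd_statistic(eigenvalues):
    """GEMD test statistic (5): the largest eigenvalue divided by
    the geometric mean of all eigenvalues."""
    lam = sorted(eigenvalues, reverse=True)
    if lam[-1] <= 0:
        raise ValueError("eigenvalues must be positive")
    K = len(lam)
    # geometric mean computed in the log domain for numerical stability
    xi = math.exp(sum(math.log(x) for x in lam) / K)
    return lam[0] / xi

def decide(T, gamma):
    """Decision rule (6): declare the spectrum occupied (H1)
    when T exceeds the threshold gamma."""
    return T > gamma
```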
A. Moments of Geometric Mean of Eigenvalues
Note that under the hypothesis $\mathcal{H}_0$, the normalized covariance matrix $\tilde{\mathbf{R}}$ is an uncorrelated central Wishart matrix. The joint probability density function (PDF) of its ordered eigenvalues $\boldsymbol{\lambda} = (\tilde{\lambda}_1, \tilde{\lambda}_2, \ldots, \tilde{\lambda}_K)$ is given by [9]

$$f(x_1, x_2, \ldots, x_K) = C_0^{-1}\,|\mathbf{V}(\mathbf{x})|^2 \prod_{i=1}^{K} e^{-x_i}\, x_i^{N-K}, \quad (7)$$

where the constant $C_0 = \prod_{i=1}^{K}(N-i)! \prod_{j=1}^{K}(K-j)!$ and $|\mathbf{V}(\mathbf{x})|$ is the determinant of the Vandermonde matrix $\mathbf{V}(\mathbf{x})$.
Generally, if the joint PDF of $\boldsymbol{\lambda}$ has the following form

$$f(\mathbf{x}) = C_1^{-1}\,|\boldsymbol{\Phi}(\mathbf{x})|\,|\boldsymbol{\Psi}(\mathbf{x})| \prod_{i=1}^{K} w(x_i), \quad (8)$$

where the normalizing constant $C_1 = K!\,C_0$ for unordered eigenvalues; $\boldsymbol{\Phi}(\mathbf{x})$ and $\boldsymbol{\Psi}(\mathbf{x})$ are two arbitrary $K \times K$ matrices with $(i, j)$ elements given as $\Phi_i(x_j)$ and $\Psi_i(x_j)$, respectively; $|\cdot|$ stands for the determinant of a matrix and $w(\cdot)$ is an arbitrary function, then the expected value of the product of arbitrary functions $\phi_k(\cdot)$ applied to the unordered eigenvalues is given by [10]

$$\mathbb{E}\left\{\prod_{k=1}^{K}\phi_k(x_k)\right\} = C_1^{-1}\,\mathcal{T}\left(\hat{\mathbf{A}}\right), \quad (9)$$
where $\hat{\mathbf{A}}$ is a $K \times K \times K$ tensor whose elements are given as [11, 12]

$$a_{i,j,k} = \int_0^\infty \Phi_i(x)\,\Psi_j(x)\,w(x)\,\phi_k(x)\,dx,$$

and the operator $\mathcal{T}(\cdot)$ is defined as [10]

$$\mathcal{T}\left(\hat{\mathbf{A}}\right) \triangleq \sum_{\alpha}\sum_{\beta} \mathrm{sgn}(\alpha)\,\mathrm{sgn}(\beta) \prod_{k=1}^{K} a_{\alpha_k,\beta_k,k},$$

where the sums are over all possible permutations $\alpha$ and $\beta$ of the integers $\{1, 2, \ldots, K\}$, and $\mathrm{sgn}(\cdot)$ denotes the sign of a permutation.
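The operator $\mathcal{T}(\cdot)$ is a double sum over permutation pairs, so it can be evaluated by brute force for small $K$ (cost $(K!)^2$, illustration only). The sketch below also lets us check the simplification used later in (12): when the tensor elements do not depend on $k$, $\mathcal{T}$ collapses to $K!$ times an ordinary determinant.

```python
import itertools
import math

def sign(perm):
    """Sign of a permutation via inversion count."""
    return -1 if sum(1 for a in range(len(perm))
                     for b in range(a + 1, len(perm))
                     if perm[a] > perm[b]) % 2 else 1

def T_op(a):
    """Brute-force T(A): sum over permutation pairs (alpha, beta)
    of sgn(alpha) sgn(beta) prod_k a[alpha_k][beta_k][k]."""
    K = len(a)
    return sum(sign(al) * sign(be) *
               math.prod(a[al[k]][be[k]][k] for k in range(K))
               for al in itertools.permutations(range(K))
               for be in itertools.permutations(range(K)))

def det(m):
    """Leibniz determinant (fine for the small K used here)."""
    K = len(m)
    return sum(sign(p) * math.prod(m[r][p[r]] for r in range(K))
               for p in itertools.permutations(range(K)))
```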
Using (7), we obtain

$$\Phi_i(x) = x^{i-1}, \quad (10a)$$
$$\Psi_j(x) = x^{j-1}, \quad (10b)$$
$$w(x) = x^{N-K}\, e^{-x}. \quad (10c)$$
Choosing $\phi_k(x) = x^{p/K}$, independent of the index $k$, we can rewrite (9) as

$$\mathbb{E}\left\{\prod_{k=1}^{K} x_k^{p/K}\right\} = (K!\,C_0)^{-1}\,\mathcal{T}\left(\hat{\mathbf{A}}_p\right), \quad (11)$$
where each element of $\hat{\mathbf{A}}_p$ is denoted as $\{\hat{a}_p\}_{i,j,k}$. Furthermore, since $\{\hat{a}_p\}_{i,j,k}$ is independent of $k$, i.e. $\{\hat{a}_p\}_{i,j,k} = \{\hat{a}_p\}_{i,j,1}$, we have

$$\mathcal{T}\left(\hat{\mathbf{A}}_p\right) = K!\,\left|\mathbf{A}_p\right|, \quad (12)$$
where $\mathbf{A}_p$ is a $K \times K$ matrix whose elements $\{a_p\}_{i,j}$ can be calculated as

$$\{a_p\}_{i,j} = \int_0^\infty x^{i-1}\, x^{j-1}\, e^{-x}\, x^{N-K}\, x^{p/K}\, dx = \Gamma(N - K + i + j - 1 + p/K).$$
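The closed form of $\{a_p\}_{i,j}$ is just the integral definition of the Gamma function. The sketch below (hypothetical parameter values, plain Simpson quadrature with a truncated upper limit) confirms the identity numerically:

```python
import math

def a_entry_numeric(N, K, i, j, p, upper=60.0, steps=20000):
    """Evaluate the integral defining {a_p}_{i,j} by Simpson's rule:
    int_0^upper x^{(i-1)+(j-1)+(N-K)+p/K} e^{-x} dx."""
    expo = (i - 1) + (j - 1) + (N - K) + p / K
    h = upper / steps
    f = lambda x: x ** expo * math.exp(-x) if x > 0 else 0.0
    acc = f(0.0) + f(upper)
    for n in range(1, steps):
        acc += (4 if n % 2 else 2) * f(n * h)
    return acc * h / 3

# closed form stated in the text, for the same (arbitrary) parameters
N, K, i, j, p = 10, 4, 2, 3, 1
closed = math.gamma(N - K + i + j - 1 + p / K)
```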
By substituting (12) into (11), the $p$-th moment of the geometric mean of the eigenvalues can be calculated as

$$\mathbb{E}\left\{\xi^p\right\} = C_0^{-1}\,\left|\mathbf{A}_p\right|. \quad (13)$$

By choosing $p = 1$ and $p = 2$ in (13), we get the first and second moments of $\xi$, respectively, as

$$\mathbb{E}\left\{\xi\right\} = C_0^{-1}\,\left|\mathbf{A}_{p=1}\right|, \quad (14)$$
$$\mathbb{E}\left\{\xi^2\right\} = C_0^{-1}\,\left|\mathbf{A}_{p=2}\right|, \quad (15)$$

where $\mathbf{A}_p$ is the $K \times K$ matrix with each element $\Gamma(N - K + i + j - 1 + p/K)$. Therefore, the statistical mean and the variance can be denoted as $\mu_\xi$ and $\sigma_\xi^2$, respectively.

In what follows we also use the expansion

$$\left|\mathbf{A}^1, \mathbf{B}^1\right| = \sum_{i=1}^{K}\sum_{j=1}^{K} (-1)^{i+j}\,\{a^1\}_{i,j}\,\left|\mathbf{B}^1_{i,j}\right|, \quad (17)$$

where $\left|\mathbf{B}^1_{i,j}\right|$ is the minor of the matrix $\mathbf{B}^1$ with the $i$-th row and $j$-th column deleted.
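Equations (13)-(15) are directly computable. The sketch below evaluates $\mathbb{E}\{\xi^p\}$ and spot-checks it against two cases we can verify independently (these checks are ours, not from the text): for $K = 1$ the Wishart matrix is a scalar Gamma$(N, 1)$ variable with $\mathbb{E}\{\lambda^p\} = \Gamma(N+p)/\Gamma(N)$, and for $K = 2$, $p = 2$ the moment equals $\mathbb{E}\{\det \mathbf{W}\} = N(N-1)$ for a $\mathcal{CW}_2(N, \mathbf{I}_2)$ matrix.

```python
import itertools
import math

def det(m):
    """Leibniz determinant for the small matrices used here."""
    K = len(m)
    def sign(p):
        return -1 if sum(1 for a in range(K) for b in range(a + 1, K)
                         if p[a] > p[b]) % 2 else 1
    return sum(sign(p) * math.prod(m[r][p[r]] for r in range(K))
               for p in itertools.permutations(range(K)))

def geo_mean_moment(N, K, p):
    """p-th moment (13) of the geometric mean of the eigenvalues
    of a CW_K(N, I_K) matrix: E{xi^p} = |A_p| / C_0."""
    A = [[math.gamma(N - K + i + j - 1 + p / K)
          for j in range(1, K + 1)] for i in range(1, K + 1)]
    C0 = (math.prod(math.factorial(N - i) for i in range(1, K + 1)) *
          math.prod(math.factorial(K - j) for j in range(1, K + 1)))
    return det(A) / C0
```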
The $p$-th moment of $\lambda_1$ can be given as

$$\mathbb{E}\left\{\lambda_1^p\right\} = C_0^{-1} \int_0^\infty \left|\mathbf{A}_p^1, \mathbf{B}^1\right| dx \quad (18)$$
$$= C_0^{-1} \sum_{i=1}^{K}\sum_{j=1}^{K} (-1)^{i+j} \int_0^\infty \{a_p^1\}_{i,j}\,\left|\mathbf{B}^1_{i,j}\right| dx, \quad (19)$$

where the elements of the matrix $\mathbf{A}_p^1$ are given by $\{a_p^1\}_{i,j} = e^{-x}\, x^{p_{i,j}-2}$ with $p_{i,j} = p + N - K + i + j$.
The minor of the matrix $\mathbf{B}^1$ can be calculated as

$$\left|\mathbf{B}^1_{i,j}\right| = \sum_{\sigma} \mathrm{sgn}(\sigma) \prod_{k=1}^{K-1} \gamma(L_{\sigma_k,k},\,x), \quad (20)$$

where $L_{\sigma_k,k}$ is determined by

$$L_{\sigma_k,k} = \begin{cases} k + \sigma_k - 1 & \text{if } \sigma_k < i \text{ and } k < j, \\ k + \sigma_k + 1 & \text{if } \sigma_k \geq i \text{ and } k \geq j, \\ k + \sigma_k & \text{otherwise.} \end{cases}$$
If the lower incomplete Gamma function is given as [14]

$$\gamma(m, g) = (m-1)!\left[1 - e^{-g}\sum_{i=0}^{m-1}\frac{g^i}{i!}\right], \quad (21)$$
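Identity (21), the standard finite-sum form of the lower incomplete Gamma function for integer order [14], can be sanity-checked against direct quadrature of its integral definition:

```python
import math

def lower_gamma_series(m, g):
    """Closed form (21), valid for integer m >= 1."""
    return math.factorial(m - 1) * (
        1 - math.exp(-g) * sum(g ** i / math.factorial(i) for i in range(m)))

def lower_gamma_quad(m, g, steps=20000):
    """Reference: gamma(m, g) = int_0^g t^{m-1} e^{-t} dt,
    computed with Simpson's rule."""
    h = g / steps
    f = lambda t: t ** (m - 1) * math.exp(-t)
    acc = f(0.0) + f(g)
    for n in range(1, steps):
        acc += (4 if n % 2 else 2) * f(n * h)
    return acc * h / 3
```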
then we may rewrite (20) as

$$\left|\mathbf{B}^1_{i,j}\right| = \sum_{\sigma} \mathrm{sgn}(\sigma) \prod_{k=1}^{K-1} \Gamma(L_{\sigma_k,k}) \prod_{k=1}^{K-1}\left[1 - e^{-x}\sum_{l_k=0}^{L_{\sigma_k,k}-1}\frac{x^{l_k}}{l_k!}\right]$$
$$= \sum_{\sigma} \mathrm{sgn}(\sigma) \prod_{k=1}^{K-1} \Gamma(L_{\sigma_k,k}) \left[\sum_{S}\frac{(-e^{-x})^{|S|}\,x^{\Sigma S}}{S!}\right], \quad (22)$$
where $S$ is any subset of the set $\{l_1, l_2, \ldots, l_{K-1}\}$ with $l_k$ running from $0$ to $L_{\sigma_k,k} - 1$; $\sum_S$ is the sum over all such subsets and index values; $|S|$ is the cardinality of the subset $S$; $\Sigma S$ is the sum of all the elements in the subset $S$; and finally $S!$ is the product of the factorials of the elements in $S$. For example, if $S = \{l_{i_1}, l_{i_2}, \ldots, l_{i_k}\}$, then we have $|S| = k$, $\Sigma S = l_{i_1} + l_{i_2} + \cdots + l_{i_k}$, $S! = l_{i_1}!\,l_{i_2}!\cdots l_{i_k}!$, and

$$\sum_S = \sum_{l_{i_1}=0}^{L_{\sigma_{i_1},i_1}-1}\;\sum_{l_{i_2}=0}^{L_{\sigma_{i_2},i_2}-1}\cdots\sum_{l_{i_k}=0}^{L_{\sigma_{i_k},i_k}-1}.$$

In particular, when $S$ is empty, we have $|S| = 0$, $\Sigma S = 0$ and $S! = 1$.
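The subset notation above is easy to misread, so the brute-force sketch below expands the product over $k$ exactly as described (with the empty subset contributing $1$) and checks it against the multiplied-out form for arbitrary small $L$-values:

```python
import itertools
import math

def product_form(Ls, x):
    """Left-hand side product in (22) for given limits Ls."""
    return math.prod(
        1 - math.exp(-x) * sum(x ** l / math.factorial(l) for l in range(Lk))
        for Lk in Ls)

def subset_form(Ls, x):
    """Right-hand side: sum over every subset S of the factor
    indices and, for each k in S, over l_k = 0 .. L_k - 1."""
    total = 0.0
    idx = range(len(Ls))
    for r in range(len(Ls) + 1):
        for S in itertools.combinations(idx, r):
            for ls in itertools.product(*(range(Ls[k]) for k in S)):
                total += ((-math.exp(-x)) ** len(S) * x ** sum(ls) /
                          math.prod(math.factorial(l) for l in ls))
    return total
```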
By substituting (22) into (19), we can derive the $p$-th moment for $\lambda_1$ as (23) at the top of the page. With the moments obtained, we can easily calculate the statistical mean and the variance as $\mu_1$ and $\sigma_1^2$, respectively.
$$\mathbb{E}\left\{\lambda_1^p\right\} = C_{N,K}^{-1} \sum_{i,j=1}^{K} (-1)^{i+j} \sum_{\sigma} \mathrm{sgn}(\sigma) \prod_{k=1}^{K-1} \Gamma(L_{\sigma_k,k}) \left[\sum_{S}\frac{(-1)^{|S|}\,\Gamma(\Sigma S + p_{i,j} - 1)}{S!\,(|S|+1)^{\Sigma S + p_{i,j} - 1}}\right] \quad (23)$$
Fig. 1: CDFs of: (a) the largest eigenvalue $\lambda_1$; (b) the geometric mean of the eigenvalues $\xi$. Each panel compares simulation with the Gaussian and Gamma approximations for $(K, N) = (4, 100)$, $(10, 200)$ and $(40, 400)$.
V. CALCULATION OF DECISION THRESHOLD
In this section, we calculate the decision threshold for the GEMD based on the Gaussian PDF approximation approach. The CDFs of the random variables $\lambda_1$ and $\xi$ may be approximated by Gaussian distribution functions if

$$\lambda_1 \sim \mathcal{N}\left(\mu_1, \sigma_1^2\right) \quad \text{and} \quad \xi \sim \mathcal{N}\left(\mu_\xi, \sigma_\xi^2\right), \quad (24)$$
where $\mu_{(\cdot)}$ and $\sigma^2_{(\cdot)}$ are the mean and the variance of the random variables and can be determined for $\lambda_1$ as follows:

$$\mu_1 \triangleq \mathbb{E}\{\lambda_1^p\}\big|_{p=1} = \mathbb{E}\{\lambda_1\}, \quad (25a)$$

and the variance of the largest eigenvalue may be calculated as

$$\sigma_1^2 \triangleq \mathbb{E}\{\lambda_1^p\}\big|_{p=2} - \left(\mathbb{E}\{\lambda_1^p\}\big|_{p=1}\right)^2 = \mathbb{E}\left\{\lambda_1^2\right\} - \mu_1^2. \quad (25b)$$
Similarly, for $\xi$,

$$\mu_\xi \triangleq \mathbb{E}\{\xi^p\}\big|_{p=1} = \mathbb{E}\{\xi\}, \quad (26a)$$

and the variance of the geometric mean may be calculated as

$$\sigma_\xi^2 \triangleq \mathbb{E}\{\xi^p\}\big|_{p=2} - \left(\mathbb{E}\{\xi^p\}\big|_{p=1}\right)^2 = \mathbb{E}\left\{\xi^2\right\} - \mu_\xi^2. \quad (26b)$$
Remarks: Extended results based on the Gamma approximation approach are also presented in this paper. However, the relevant analytical work is omitted due to space limitations; details are included in [15].

The CDFs of the largest eigenvalue and the geometric mean of the eigenvalues based on the Gaussian and Gamma approximation approaches are shown in Fig. 1(a) and Fig. 1(b), respectively, for $(K, N) = (4, 100)$, $(10, 200)$ and $(40, 400)$. The figures illustrate that the analytical and simulation results are in perfect agreement (compare the black solid curves with the red circle and blue triangle markers).
Furthermore, by assuming independence between $\lambda_1$ and $\xi$, the PDF of $T = \lambda_1/\xi$ is given as [16]

$$g_T(\gamma) = \frac{b(\gamma)\,c(\gamma)}{\sqrt{2\pi}\,\sigma_1\sigma_\xi\,a^3(\gamma)}\left[2\Phi\!\left(\frac{b(\gamma)}{a(\gamma)}\right) - 1\right] + \frac{1}{\pi\,\sigma_1\sigma_\xi\,a^2(\gamma)}\, e^{-\frac{1}{2}\left(\frac{\mu_1^2}{\sigma_1^2} + \frac{\mu_\xi^2}{\sigma_\xi^2}\right)}, \quad (27)$$

where

$$a(\gamma) = \sqrt{\frac{\gamma^2}{\sigma_1^2} + \frac{1}{\sigma_\xi^2}}, \qquad b(\gamma) = \frac{\mu_1\gamma}{\sigma_1^2} + \frac{\mu_\xi}{\sigma_\xi^2},$$
$$c(\gamma) = e^{\frac{b^2(\gamma)}{2a^2(\gamma)} - \frac{1}{2}\left(\frac{\mu_1^2}{\sigma_1^2} + \frac{\mu_\xi^2}{\sigma_\xi^2}\right)}, \qquad \Phi(\gamma) = \int_{-\infty}^{\gamma}\frac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2}u^2}\, du,$$
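The ratio PDF (27) is Hinkley's classical result for the ratio of two independent Gaussians [16]. The sketch below implements it and checks that it integrates to one; the moment values used are arbitrary placeholders, not values from the paper.

```python
import math

def phi_cdf(x):
    """Standard Gaussian CDF."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def ratio_pdf(g, mu1, s1, mux, sx):
    """PDF (27) of T = lambda_1/xi with independent Gaussian
    numerator N(mu1, s1^2) and denominator N(mux, sx^2)."""
    a = math.sqrt(g * g / s1 ** 2 + 1 / sx ** 2)
    b = mu1 * g / s1 ** 2 + mux / sx ** 2
    q = 0.5 * (mu1 ** 2 / s1 ** 2 + mux ** 2 / sx ** 2)
    c = math.exp(b * b / (2 * a * a) - q)
    return (b * c / (math.sqrt(2 * math.pi) * s1 * sx * a ** 3) *
            (2 * phi_cdf(b / a) - 1) +
            math.exp(-q) / (math.pi * s1 * sx * a * a))

# total-probability check by Simpson's rule over a wide interval
mu1, s1, mux, sx = 10.0, 1.0, 5.0, 0.5
lo, hi, steps = -5.0, 10.0, 20000
h = (hi - lo) / steps
mass = ratio_pdf(lo, mu1, s1, mux, sx) + ratio_pdf(hi, mu1, s1, mux, sx)
for n in range(1, steps):
    mass += (4 if n % 2 else 2) * ratio_pdf(lo + n * h, mu1, s1, mux, sx)
mass *= h / 3
```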
and the CDF of the ratio of Gaussian variables $T$ can be tightly approximated as [16]

$$G_T(\gamma) = \Phi\!\left(\frac{\gamma\mu_\xi - \mu_1}{\sqrt{\sigma_1^2 + \gamma^2\sigma_\xi^2}}\right), \quad (28)$$

where $\Phi(\cdot)$ is the CDF of a standard Gaussian random variable. Using (28), we may determine a simple analytical expression for the decision threshold. For a given target false-alarm probability $P_{fa}$, the threshold $\gamma$ is obtained by solving

$$1 - G_T(\gamma) = P_{fa}, \quad (29)$$
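Equation (29) can also be solved numerically from (28) alone. The sketch below uses bisection with placeholder moment values; in practice $\mu_1, \sigma_1, \mu_\xi, \sigma_\xi$ would come from (25)-(26).

```python
import math

def phi_cdf(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def G_T(g, mu1, s1, mux, sx):
    """Approximate CDF (28) of T = lambda_1/xi."""
    return phi_cdf((g * mux - mu1) / math.sqrt(s1 ** 2 + g ** 2 * sx ** 2))

def threshold_bisect(pfa, mu1, s1, mux, sx, lo=0.0, hi=1e3, iters=200):
    """Solve 1 - G_T(gamma) = P_fa, eq. (29), by bisection
    (G_T is increasing in gamma on this interval)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if 1 - G_T(mid, mu1, s1, mux, sx) > pfa:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Note that $G_T(\mu_1/\mu_\xi) = \Phi(0) = 1/2$, i.e. the median of $T$ sits at the ratio of the means, which is a quick sanity check on the sign convention in (28).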
Fig. 3: Performance comparison of various detectors with the GEMD. ROC curves (missed-detection probability vs. false-alarm probability, both on log scales) for the energy detector, the ER detector and the GEMD.
Fig. 2: Decision threshold for the GEMD vs. $P_{fa}$. Simulation compared with the Gaussian and Gamma approximations for $(K, N) = (10, 600)$ and $(40, 400)$.
yielding

$$\gamma = \frac{\mu_1\mu_\xi + \eta\sqrt{\sigma_1^2\mu_\xi^2 + \sigma_\xi^2\mu_1^2 - \eta^2\sigma_1^2\sigma_\xi^2}}{\mu_\xi^2 - \eta^2\sigma_\xi^2}, \quad (30)$$

where $\eta = \Phi^{-1}(1 - P_{fa})$.
VI. NUMERICAL RESULTS AND DISCUSSIONS
In Fig. 2, we plot the decision threshold $\gamma$ as a function of $P_{fa}$ for selected values of $K$ and $N$, to compare the approximation approaches based on the exact analytical moments with the respective empirical results. It can be seen that the proposed Gaussian and Gamma approximation approaches perform extremely well: the simulation and analytical results are in perfect agreement. It can also be seen that the proposed approximations are equally good for any number of collaborating SUs and received samples. Moreover, our approximation approaches do not require $K$ and $N$ to grow at the same rate; the results are therefore valid for any number of collaborating SUs and a reasonably large number of received samples.
Finally, in Fig. 3, we show the receiver operating characteristic (ROC) curves for the energy detector, the eigenvalue ratio (ER) detector, and the geometric mean detector. In this figure, we assume a constant-modulus transmitted signal with $K = 20$ collaborating receivers and $N = 200$ samples during the sensing time. The SNR is set to $-10$ dB, while the noise uncertainty is set to $0.2$ dB. It can be clearly seen that the GEMD significantly outperforms the eigenvalue ratio and energy detectors. Moreover, the numerical and simulation results show that the newly derived analytical framework offers an accurate approach to calculating the decision threshold for any value of $K$ and reasonably large values of $N$.
VII. ACKNOWLEDGMENT
The authors would like to thank Dr. Muhammad Ali Imran and Mr. Wuchen
Tang, University of Surrey, Guildford for useful discussions on this paper and
providing support in producing simulation results.
REFERENCES
[1] M. K. Simon, F. F. Digham, and M.-S. Alouini, "On the energy detection of unknown signals over fading channels," in Proc. IEEE Intl. Conf. on Communs., May 2003.
[2] R. Tandra and A. Sahai, "SNR walls for signal detection," IEEE Jour. Selected Topics in Signal Processing, vol. 2, no. 1, pp. 4-17, Feb. 2008.
[3] Y. Zeng and Y. C. Liang, "Eigenvalue based spectrum sensing algorithms for cognitive radios," IEEE Trans. Communs., vol. 57, no. 6, pp. 1784-1793, Jun. 2009.
[4] F. Penna, R. Garello, and M. A. Spirito, "Cooperative spectrum sensing based on the limiting eigenvalue ratio distribution in Wishart matrices," IEEE Communs. Letters, vol. 13, no. 7, pp. 507-509, Jul. 2009.
[5] F. Penna, R. Garello, D. Figlioli, and M. A. Spirito, "Exact non-asymptotic threshold for eigenvalue-based spectrum sensing," in Proc. ICST Conf. Cognitive Radio Oriented Wireless Networks and Communs., CrownCom 2009, Hannover, Germany, Jun. 2009.
[6] Y. Zeng, Y.-C. Liang, A. T. Hoang, and R. Zhang, "A review on spectrum sensing for cognitive radio: challenges and solutions," EURASIP Jour. Advances in Signal Processing, vol. 2010, Article ID 381465, 2010.
[7] L. S. Cardoso, M. Debbah, P. Bianchi, and J. Najim, "Cooperative spectrum sensing using random matrix theory," in Proc. Intl. Symp. Wireless Pervasive Computing, ISWPC 2008, pp. 334-338, Santorini, Greece, Jul. 2008.
[8] I. Johnstone, "On the distribution of the largest eigenvalue in principal component analysis," Ann. Statist., vol. 29, pp. 295-327, Jul. 2001.
[9] A. James, "Distributions of matrix variates and latent roots derived from normal samples," The Annals of Mathematical Statistics, vol. 35, no. 2, pp. 475-501, Feb. 1964.
[10] M. Chiani and A. Zanella, "Joint distribution of an arbitrary subset of the ordered eigenvalues of Wishart matrices," in Proc. IEEE 19th Intl. Symp. Personal, Indoor and Mobile Radio Communs., PIMRC 2008, Cannes, France, Sep. 2008.
[11] A. Zanella, M. Chiani, and M. Win, "On the marginal distribution of the eigenvalues of Wishart matrices," IEEE Trans. Communs., vol. 57, no. 4, pp. 1050-1060, Apr. 2009.
[12] C. G. Khatri, "Distribution of the largest or the smallest characteristic root under null hypothesis concerning complex multivariate normal populations," The Annals of Mathematical Statistics, vol. 35, no. 4, pp. 1807-1810, Apr. 1964.
[13] A. Edelman, "The distribution and moments of the smallest eigenvalue of a random matrix of Wishart type," Linear Algebra and Its Applications, vol. 159, pp. 55-80, 1991.
[14] M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, 9th ed. Dover Publications, New York, USA, 1972.
[15] M. Z. Shakir, A. Rao, and M.-S. Alouini, "Generalized mean detector for collaborative spectrum sensing," IEEE Trans. Communs., submitted.
[16] D. V. Hinkley, "On the ratio of two correlated normal random variables," Biometrika, vol. 56, no. 3, pp. 635-639, Dec. 1969.