
Summary of last talk
The quantity on the left is called the log-likelihood ratio and is denoted by

$\log \Lambda(r) = \log \frac{f(r \mid H_1)}{f(r \mid H_0)}$

for the binary hypotheses

$H_0:\ r = g s_0 + n$
$H_1:\ r = g s_1 + n$

A solution for the binary decision case is the threshold test

$\log \Lambda(r) \ \underset{H_0}{\overset{H_1}{\gtrless}} \ \log \frac{P(H_0)}{P(H_1)}$
1. Extensions of the Detection Problem
We want to extend the results in two directions:
- the 1-D case with M hypotheses
- the k-dimensional case with M hypotheses
We will solve the problem in the general case: k dimensions with M hypotheses.
Vector Space - Revisit

Subspace

Normed Linear Space

Inner Product / Scalar Product
$\langle \mathbf{v}, \mathbf{v} \rangle = \|\mathbf{v}\|^2, \quad \mathbf{v} \in V$
k-Dimensions Detection Problem / M Hypotheses
Given a communication system with the following M hypotheses:

$\mathbf{S}_i,\ i = 1, \dots, M$, are known vectors which are transmitted by the source,

$\mathbf{S}_i = \left(s_{i1}, \dots, s_{ij}, \dots, s_{ik}\right)^T, \quad i = 1, \dots, M$

$P(\text{symbol } \mathbf{S}_i \text{ was transmitted}) = P(H_i), \quad i \in \{1, \dots, M\}$

Example:
$\mathbf{S}_1 = (1,1)^T, \quad \mathbf{S}_2 = (-1,1)^T, \quad \mathbf{S}_3 = (-1,-1)^T, \quad \mathbf{S}_4 = (1,-1)^T$
Vector Notation
The hypothesis $H_i$ is given component-wise by

$H_i:\quad r_1 = s_{i1} + n_1, \ \dots, \ r_k = s_{ik} + n_k$

or in vector notation

$H_i:\quad \mathbf{R} = \mathbf{S}_i + \mathbf{n}$

where $\mathbf{R} = (r_1, \dots, r_k)^T$, $\mathbf{n} = (n_1, \dots, n_k)^T$, and $\mathbf{S}_i = (s_{i1}, \dots, s_{ik})^T$.
k-D Detection Problem / M Hypotheses
Given a communication system with the following M hypotheses:

$H_i:\quad \mathbf{R} = \mathbf{S}_i + \mathbf{n}, \quad i = 1, \dots, M$

$\mathbf{S}_i,\ i = 1, \dots, M$, are known vectors which are transmitted by the source,

$P(\text{symbol } \mathbf{S}_i \text{ was transmitted}) = P(H_i), \quad i \in \{1, \dots, M\}$

$\mathbf{n}$ is a random noise vector with a known probability density function; the components of the vector are independent:

$P(x_j < n_j \le x_j + dx_j,\ j = 1, \dots, k) = \prod_{j=1}^{k} f_{n_j}(x_j)\, dx_j$
Objective of the Decision Problem
Find the optimal receiver (in the sense of minimum $P_e$) which makes a decision based on the observation of $\mathbf{R}$ such that $P_e$ is minimal over all possible receivers, or equivalently, which maximizes the probability of a correct decision:

$P_E = 1 - P_c = 1 - \sum_{i=1}^{M} P(H_i)\, P(\text{decided } H_i \mid H_i \text{ sent})$
Bayes Decision Rule
The channel maps the message set $\{\mathbf{S}_1, \dots, \mathbf{S}_M\}$ to a received signal $\mathbf{R}$. The receiver has to assign a hypothesis to each value of $\mathbf{R}$:

$D(\mathbf{R}) = H_i \quad$ (the receiver decides $H_i$ when $\mathbf{R}$ was received)

Let the k-dimensional observation space be divided into M subsets:

$A_i = \{\mathbf{R} : D(\mathbf{R}) = H_i\}$

(The slide's figure shows the message set $\{\mathbf{S}_1, \dots, \mathbf{S}_M\}$ mapped through the channel to $\mathbf{R}$, and the observation space partitioned into regions $A_1(\mathbf{R}), \dots, A_4(\mathbf{R})$.)
The Subsets
The subsets $A_1(\mathbf{R}), \dots, A_M(\mathbf{R})$ satisfy:

1. $A_i \cap A_j = \emptyset$ for all $i \ne j$
2. $\bigcup_{i=1}^{M} A_i$ = the whole observation space

For each value of $\mathbf{R}$ we must make a decision.
How to make a decision?
We need to find the optimal receiver (in the sense of minimum probability of error) which makes a decision based on the observation of $\mathbf{R}$ such that $P_e$ is minimal.

Let us assume that

$f_{\mathbf{R}}(\mathbf{R} \mid \mathbf{S}_i) = f_{\mathbf{n}}(\mathbf{R} - \mathbf{S}_i), \quad i = 1, \dots, M$

Then

$\Pr[\text{decide } H_i \mid H_i] = \int_{A_i} f_{\mathbf{n}}(\mathbf{R} - \mathbf{S}_i)\, d\mathbf{R}$

The probability of a correct decision is:

$P_c = \sum_{i=1}^{M} P(H_i)\, \Pr[\text{decide } H_i \mid H_i] = \sum_{i=1}^{M} P(H_i) \int_{A_i} f_{\mathbf{n}}(\mathbf{R} - \mathbf{S}_i)\, d\mathbf{R}$
Using the G function
Let us define the indicator functions

$G_i(\mathbf{R}) = \begin{cases} 1, & \mathbf{R} \in A_i \\ 0, & \mathbf{R} \notin A_i \end{cases}$

Therefore,

$P_c = \sum_{i=1}^{M} P(H_i) \int_{A_i} f_{\mathbf{n}}(\mathbf{R} - \mathbf{S}_i)\, d\mathbf{R} = \sum_{i=1}^{M} \int P(H_i)\, G_i(\mathbf{R})\, f_{\mathbf{n}}(\mathbf{R} - \mathbf{S}_i)\, d\mathbf{R}$

There is no change in the value of the integral when it is extended to the whole space, since $G_i(\mathbf{R}) = 0$ for $\mathbf{R} \notin A_i$.
The Optimal Decision Rule
We must choose the decision regions for $H_i$ in such a manner that the probability of a correct decision is maximized. The probability of a correct decision is maximized if for every $\mathbf{R}$ the integrand

$\sum_{i=1}^{M} P(H_i)\, G_i(\mathbf{R})\, f_{\mathbf{n}}(\mathbf{R} - \mathbf{S}_i)$

is maximized. For a given $\mathbf{R}$ there is some $j$ such that we set

$G_j(\mathbf{R}) = 1 \quad \text{and} \quad G_i(\mathbf{R}) = 0 \ \text{ for } i \ne j$

Therefore, the decision rule will be:

$A_j = \{\mathbf{R} \mid P(H_j)\, f_{\mathbf{n}}(\mathbf{R} - \mathbf{S}_j) > P(H_i)\, f_{\mathbf{n}}(\mathbf{R} - \mathbf{S}_i) \ \text{ for all } i \ne j\}$
Another version
The decision rule

$A_j = \{\mathbf{R} \mid P(H_j)\, f_{\mathbf{n}}(\mathbf{R} - \mathbf{S}_j) > P(H_i)\, f_{\mathbf{n}}(\mathbf{R} - \mathbf{S}_i) \ \text{ for all } i \ne j\}$

can also be written as: decide the $H_j$ that solves

$\max_{i \in \{1, \dots, M\}} P(H_i)\, f_{\mathbf{n}}(\mathbf{R} - \mathbf{S}_i) = P(H_j)\, f_{\mathbf{n}}(\mathbf{R} - \mathbf{S}_j)$

In the more general case (any channel with transition probability),

$f(\mathbf{R} \mid \mathbf{S}_j) = \lim_{d\mathbf{x} \to 0} \frac{\Pr[\mathbf{x} < \mathbf{R} \le \mathbf{x} + d\mathbf{x} \mid \mathbf{S}_j]}{d\mathbf{x}}$

and the rule becomes

$\max_{i \in \{1, \dots, M\}} P(H_i)\, f(\mathbf{R} \mid \mathbf{S}_i) = P(H_j)\, f(\mathbf{R} \mid \mathbf{S}_j)$
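The MAP rule above is easy to sketch numerically. Below is a minimal Python/NumPy illustration, assuming i.i.d. Gaussian noise so that the density $f_{\mathbf{n}}(\mathbf{R}-\mathbf{S}_i)$ depends only on the squared distance; the function name, the QPSK-like signal set (from the earlier example), and the test point are illustrative assumptions, not part of the slides.

```python
import numpy as np

def map_decide(r, signals, priors, sigma):
    """Pick the index j maximizing P(H_i) * f_n(r - S_i) for i.i.d. Gaussian noise.

    Maximizing log P(H_i) - ||r - S_i||^2 / (2 sigma^2) is equivalent, since
    the Gaussian normalization constant is common to all hypotheses.
    """
    d2 = np.sum((signals - r) ** 2, axis=1)       # squared distances ||r - S_i||^2
    scores = np.log(priors) - d2 / (2 * sigma**2)
    return int(np.argmax(scores))

# QPSK-like signal set from the slides' example, equal priors
signals = np.array([[1, 1], [-1, 1], [-1, -1], [1, -1]], dtype=float)
priors = np.full(4, 0.25)
print(map_decide(np.array([0.9, 1.2]), signals, priors, sigma=1.0))  # → 0
```

With equal priors the `np.log(priors)` term is constant, and the rule reduces to the minimum-distance receiver derived on the following slides.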
Reminder: Circular Gaussian Random Noise Vector
A circular Gaussian real random k-dimensional vector is defined as a zero-mean i.i.d. (independent and identically distributed) Gaussian vector with probability density function

$p_{\mathbf{n}}(\mathbf{n}) = p(n_1, n_2, \dots, n_k) = \frac{1}{(2\pi\sigma^2)^{k/2}} \exp\left(-\frac{n_1^2 + n_2^2 + \dots + n_k^2}{2\sigma^2}\right) = \frac{1}{(2\pi\sigma^2)^{k/2}} \exp\left(-\frac{\|\mathbf{n}\|^2}{2\sigma^2}\right)$

(The slide's figure sketches the bell-shaped density $p(\mathbf{n})$ as a function of the noise components.)
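As a quick numerical sanity check of the density above, the following Python/NumPy sketch verifies that it integrates to one; the choices $k = 2$ and $\sigma = 0.7$ and the crude Riemann-sum grid are arbitrary assumptions for illustration.

```python
import numpy as np

sigma, k = 0.7, 2

def p(n):
    # circular Gaussian pdf: (2*pi*sigma^2)^(-k/2) * exp(-||n||^2 / (2*sigma^2))
    return (2 * np.pi * sigma**2) ** (-k / 2) * np.exp(-np.sum(n**2) / (2 * sigma**2))

# crude Riemann sum over [-5*sigma, 5*sigma]^2, where essentially all mass lies
xs = np.linspace(-5 * sigma, 5 * sigma, 201)
dx = xs[1] - xs[0]
total = sum(p(np.array([x, y])) for x in xs for y in xs) * dx**2
print(round(total, 3))  # → 1.0
```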
Example 1:
- M hypotheses, equally likely: $P(H_i) = \frac{1}{M}$ for all $i$
- k-dimensional known vectors $\{\mathbf{S}_1, \dots, \mathbf{S}_M\}$
- The noise is circular Gaussian:

$p_{\mathbf{n}}(\mathbf{n}) = p(n_1, n_2, \dots, n_k) = \frac{1}{(2\pi\sigma^2)^{k/2}} \exp\left(-\frac{n_1^2 + n_2^2 + \dots + n_k^2}{2\sigma^2}\right) = \frac{1}{(2\pi\sigma^2)^{k/2}} \exp\left(-\frac{\|\mathbf{n}\|^2}{2\sigma^2}\right)$

- The received signal:

$H_i:\quad \mathbf{R} = \mathbf{S}_i + \mathbf{n}$, with $\mathbf{R} = (r_1, \dots, r_k)^T$, $\mathbf{n} = (n_1, \dots, n_k)^T$, $\mathbf{S}_i = (s_{i1}, \dots, s_{ik})^T$

Find the optimal receiver.
The Optimal receiver
The decision rule is

$A_j = \{\mathbf{R} \mid P(H_j)\, f_{\mathbf{n}}(\mathbf{R} - \mathbf{S}_j) > P(H_i)\, f_{\mathbf{n}}(\mathbf{R} - \mathbf{S}_i) \ \text{ for all } i \ne j\}$

with

$f_{\mathbf{n}}(\mathbf{R} - \mathbf{S}_i) = p(r_1 - s_{i1}, \dots, r_k - s_{ik}) = \frac{1}{(2\pi\sigma^2)^{k/2}} \exp\left(-\frac{\|\mathbf{R} - \mathbf{S}_i\|^2}{2\sigma^2}\right)$

Since all the hypotheses have the same a priori probability, we get

$A_j = \left\{\mathbf{R} \ \middle|\ \exp\left(-\frac{\|\mathbf{R} - \mathbf{S}_j\|^2}{2\sigma^2}\right) > \exp\left(-\frac{\|\mathbf{R} - \mathbf{S}_i\|^2}{2\sigma^2}\right) \ \text{ for all } i \ne j\right\}$

or

$A_j = \{\mathbf{R} \mid \|\mathbf{R} - \mathbf{S}_j\|^2 < \|\mathbf{R} - \mathbf{S}_i\|^2 \ \text{ for all } i \ne j\}$
Conclusions
When the noise is a circular Gaussian real random k-dimensional vector (zero mean and i.i.d.) and all the hypotheses are equally likely, the optimal receiver selects the hypothesis with the minimum Euclidean distance from the received signal.
The decision domain $A_j$ is the set of all points that are closer to $\mathbf{S}_j$ than to any other signal.
Geometrical Observation
(The slide's figure shows a received point $\mathbf{R}$ among the signals $\mathbf{S}_1, \dots, \mathbf{S}_7$; the decision is $D(\mathbf{R}) = \mathbf{S}_2$, the signal nearest to $\mathbf{R}$.)

The decision regions for the Minimum Distance Criteria
(The slide's figure shows the plane partitioned into the decision regions of $\mathbf{S}_1, \dots, \mathbf{S}_7$: each region consists of the points closer to its signal than to any other.)
A general formula for the probability of error
An error event occurs when the decision is wrong, i.e. the received signal falls in the decision domain of another hypothesis. The probability of error is given by

$P_e = 1 - P_c = 1 - \sum_{i=1}^{M} P(H_i) \int_{A_i} f_{\mathbf{n}}(\mathbf{R} - \mathbf{S}_i)\, d\mathbf{R} = \sum_{i=1}^{M} P(H_i) \int_{\mathbb{R}^k \setminus A_i} f_{\mathbf{n}}(\mathbf{R} - \mathbf{S}_i)\, d\mathbf{R}$
Example 2:
4 equally likely hypotheses

$H_i:\quad \mathbf{R} = \mathbf{S}_i + \mathbf{n}, \quad \mathbf{R} = (r_1, r_2, r_3, r_4)^T, \quad \mathbf{n} = (n_1, n_2, n_3, n_4)^T$

$\mathbf{S}_i = (\pm a, \pm a, \pm a, \pm a)^T, \quad i = 1, \dots, 4$ (a different sign pattern for each hypothesis)

The noise vector is

$p(\mathbf{n}) = \frac{1}{(2\pi\sigma^2)^{4/2}} \exp\left(-\sum_{i=1}^{4} \frac{n_i^2}{2\sigma^2}\right)$

What is the optimal receiver?
The decision rule

$A_j = \{\mathbf{R} \mid \|\mathbf{R} - \mathbf{S}_j\|^2 < \|\mathbf{R} - \mathbf{S}_i\|^2 \ \text{ for all } i \ne j\}$

can also be written by expanding each squared distance. Since every component is $s_{ik} = \pm a$, so that $\|\mathbf{S}_i\|^2 = 4a^2$, each hypothesis gives

$\|\mathbf{R} - \mathbf{S}_i\|^2 = \sum_{k=1}^{4} (r_k - s_{ik})^2 = \sum_{k=1}^{4} r_k^2 - 2 \sum_{k=1}^{4} s_{ik} r_k + 4a^2$

The terms $\sum_k r_k^2$ and $4a^2$ are the same for every hypothesis, so the distances differ only in the correlation term $\sum_k s_{ik} r_k$.
Maximum Correlation Criteria for Equal Energy Signals
The decision rule can also be written as

$A_j = \{\mathbf{R} \mid \|\mathbf{R} - \mathbf{S}_j\|^2 < \|\mathbf{R} - \mathbf{S}_i\|^2 \ \text{ for all } i \ne j\}$

or, for real vectors,

$A_j = \{\mathbf{R} \mid \|\mathbf{R}\|^2 - 2(\mathbf{R}, \mathbf{S}_j) + \|\mathbf{S}_j\|^2 < \|\mathbf{R}\|^2 - 2(\mathbf{R}, \mathbf{S}_i) + \|\mathbf{S}_i\|^2 \ \text{ for all } i \ne j\}$

and for equal energy signals, $\|\mathbf{S}_j\|^2 = \|\mathbf{S}_i\|^2$ for all $i$ and $j$, so

$A_j = \{\mathbf{R} \mid (\mathbf{R}, \mathbf{S}_j) > (\mathbf{R}, \mathbf{S}_i) \ \text{ for all } i \ne j\}$

where $(\mathbf{R}, \mathbf{S}_i) = \sum_{j=1}^{k} r_j s_{ij}$ is the inner product (scalar product).
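The equivalence between minimum distance and maximum correlation for equal-energy signals is easy to check numerically. A Python/NumPy sketch (the QPSK-like equal-energy set and the random test points are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
# equal-energy signals: every row has the same norm, as the criterion requires
S = np.array([[1, 1], [-1, 1], [-1, -1], [1, -1]], dtype=float)

for _ in range(1000):
    r = rng.normal(scale=2.0, size=2)                       # arbitrary received point
    j_dist = int(np.argmin(np.sum((r - S) ** 2, axis=1)))   # minimum distance
    j_corr = int(np.argmax(S @ r))                          # maximum correlation
    assert j_dist == j_corr
print("minimum distance and maximum correlation always agree")
```

This works because $\|\mathbf{R}-\mathbf{S}_i\|^2 = \|\mathbf{R}\|^2 - 2(\mathbf{R},\mathbf{S}_i) + \|\mathbf{S}_i\|^2$, and the first and last terms are common to all hypotheses when the energies are equal.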
An Upper bound on the Error Probability
We know that for a given message set $H_i$, $i = 1, \dots, M$, and an observation vector $\mathbf{R}$, an error will occur if $\mathbf{R} \in A_j$, $j \ne i$, when $H_i$ was transmitted. Therefore,

$P_{E \mid H_i} = \Pr[\mathbf{R} \notin A_i \mid H_i] = 1 - \Pr[\mathbf{R} \in A_i \mid H_i] = \int_{A_i^c} f(\mathbf{R} \mid H_i)\, d\mathbf{R}$

Here $f(\mathbf{R} \mid H_i)$ is the probability density function of $\mathbf{R}$ given that $H_i$ was transmitted, and the integration is over a subset of the observation space (the complement of $A_i$).
Error Probability (cont.)
Assuming $P(H_i) = 1/M$, $1 \le i \le M$, the overall error probability is the average of the per-message error probabilities:

$P_E = \frac{1}{M} \sum_{i=1}^{M} \sum_{\substack{j=1 \\ j \ne i}}^{M} \int_{A_j} f(\mathbf{R} \mid H_i)\, d\mathbf{R} = \frac{1}{M} \sum_{i=1}^{M} \sum_{\substack{j=1 \\ j \ne i}}^{M} \int G_j(\mathbf{R})\, f(\mathbf{R} \mid H_i)\, d\mathbf{R}$

where

$G_j(\mathbf{R}) = \begin{cases} 1, & \mathbf{R} \in A_j \\ 0, & \mathbf{R} \notin A_j \end{cases}$
Error Probability (cont.)
The trick:

a. $\mathbf{R} \in A_j \Rightarrow f(\mathbf{R} \mid H_j) \ge f(\mathbf{R} \mid H_k)$ for all $k \ne j$. Therefore, for a given $i$,

$\mathbf{R} \in A_j \ \Rightarrow\ \frac{f(\mathbf{R} \mid H_j)}{f(\mathbf{R} \mid H_i)} \ge 1$

b. $\mathbf{R} \in A_j \ \Rightarrow\ \left[\frac{f(\mathbf{R} \mid H_j)}{f(\mathbf{R} \mid H_i)}\right]^{s} \ge 1, \quad 0 \le s \le 1$

Thus

$G_j(\mathbf{R}) \le \left[\frac{f(\mathbf{R} \mid H_j)}{f(\mathbf{R} \mid H_i)}\right]^{s}, \quad 0 \le s \le 1$
Upper bound
For $s = 1/2$ we obtain

$P_E = \frac{1}{M} \sum_{i=1}^{M} \sum_{\substack{j=1 \\ j \ne i}}^{M} \int G_j(\mathbf{R})\, f(\mathbf{R} \mid H_i)\, d\mathbf{R} \le \frac{1}{M} \sum_{i=1}^{M} \sum_{\substack{j=1 \\ j \ne i}}^{M} \int \left[\frac{f(\mathbf{R} \mid H_j)}{f(\mathbf{R} \mid H_i)}\right]^{1/2} f(\mathbf{R} \mid H_i)\, d\mathbf{R} = \frac{1}{M} \sum_{i=1}^{M} \sum_{\substack{j=1 \\ j \ne i}}^{M} \int \left[f(\mathbf{R} \mid H_j)\, f(\mathbf{R} \mid H_i)\right]^{1/2} d\mathbf{R}$

Thus we get the bound

$P_E \le (M - 1) \max_{i \ne j} \int \left[f(\mathbf{R} \mid H_j)\, f(\mathbf{R} \mid H_i)\right]^{1/2} d\mathbf{R}$
Example: Gaussian Channel, M Hypotheses, K Dimensions

$f(\mathbf{R} \mid H_j) = \left(\frac{1}{\pi N_0}\right)^{K/2} \exp\left(-\frac{\|\mathbf{R} - \mathbf{S}_j\|^2}{N_0}\right) = \left(\frac{1}{\pi N_0}\right)^{K/2} \exp\left(-\frac{\sum_{k=1}^{K} (r_k - s_{jk})^2}{N_0}\right)$

Thus,

$I = \left[f(\mathbf{R} \mid H_j)\, f(\mathbf{R} \mid H_i)\right]^{1/2} = \left(\frac{1}{\pi N_0}\right)^{K/2} \exp\left(-\frac{\|\mathbf{R} - \mathbf{S}_j\|^2 + \|\mathbf{R} - \mathbf{S}_i\|^2}{2 N_0}\right)$
It is easy to show that

$\|\mathbf{R} - \mathbf{S}_j\|^2 + \|\mathbf{R} - \mathbf{S}_i\|^2 = 2\left\|\mathbf{R} - \frac{\mathbf{S}_i + \mathbf{S}_j}{2}\right\|^2 + \frac{\|\mathbf{S}_i - \mathbf{S}_j\|^2}{2}$

Substituting this, we get

$I = \exp\left(-\frac{\|\mathbf{S}_i - \mathbf{S}_j\|^2}{4 N_0}\right) \left(\frac{1}{\pi N_0}\right)^{K/2} \exp\left(-\frac{\left\|\mathbf{R} - \frac{\mathbf{S}_i + \mathbf{S}_j}{2}\right\|^2}{N_0}\right)$
Integrating over all the dimensions, we get

$P_E \le (M - 1) \max_{i \ne j} \exp\left(-\frac{\|\mathbf{S}_i - \mathbf{S}_j\|^2}{4 N_0}\right) \int \left(\frac{1}{\pi N_0}\right)^{K/2} \exp\left(-\frac{\left\|\mathbf{R} - \frac{\mathbf{S}_i + \mathbf{S}_j}{2}\right\|^2}{N_0}\right) d\mathbf{R}$

or

$P_E \le (M - 1) \max_{i \ne j} \exp\left(-\frac{\|\mathbf{S}_i - \mathbf{S}_j\|^2}{4 N_0}\right) \prod_{k=1}^{K} \int \left(\frac{1}{\pi N_0}\right)^{1/2} \exp\left(-\frac{\left(r_k - \frac{s_{ik} + s_{jk}}{2}\right)^2}{N_0}\right) dr_k$
Since the inner integral is one, we get a simple bound:

$P_E \le (M - 1) \max_{i \ne j} \exp\left(-\frac{\|\mathbf{S}_i - \mathbf{S}_j\|^2}{4 N_0}\right)$
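The bound can be illustrated against a Monte Carlo estimate of the minimum-distance receiver's error rate. In this Python/NumPy sketch, the 2-D QPSK-like constellation, the value of $N_0$, and the trial count are illustrative assumptions (the slides do not specify them); the noise variance per dimension is $N_0/2$, matching the density used above.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
S = np.array([[1, 1], [-1, 1], [-1, -1], [1, -1]], dtype=float)
M, K = S.shape
N0 = 0.5                                    # noise variance N0/2 per dimension

# bound: P_E <= (M-1) * max_{i != j} exp(-||S_i - S_j||^2 / (4*N0)),
# i.e. the maximum of the exponential is attained at the minimum distance
dmin2 = min(np.sum((S[i] - S[j]) ** 2) for i, j in combinations(range(M), 2))
bound = (M - 1) * np.exp(-dmin2 / (4 * N0))

# Monte Carlo: transmit random symbols, add Gaussian noise, decide by distance
trials = 20000
sent = rng.integers(M, size=trials)
R = S[sent] + rng.normal(scale=np.sqrt(N0 / 2), size=(trials, K))
decided = np.argmin(((R[:, None, :] - S[None, :, :]) ** 2).sum(axis=2), axis=1)
pe = np.mean(decided != sent)
print(pe <= bound)  # the simulated error rate stays below the bound
```

The bound is loose at this noise level, as expected of a union/Bhattacharyya-style bound, but it tightens exponentially as $N_0$ decreases.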
The theorem of Irrelevance
Given a channel with pdf

$f(\mathbf{R}_1, \mathbf{R}_2 \mid \mathbf{S}_j) = \lim_{d\mathbf{x} \to 0} \frac{\Pr[\mathbf{x} < \mathbf{R} \le \mathbf{x} + d\mathbf{x} \mid \mathbf{S}_j]}{d\mathbf{x}}$

the output of the channel is composed of two sub-vectors $\mathbf{R}_1$ and $\mathbf{R}_2$:

$\{\mathbf{S}_j\},\ j = 1, \dots, M \ \longrightarrow\ \text{Channel } f(\mathbf{R}_1, \mathbf{R}_2 \mid \mathbf{S}_j) \ \longrightarrow\ \mathbf{R}_1, \mathbf{R}_2 \ \longrightarrow\ \text{Receiver}$

An optimum receiver may disregard the vector $\mathbf{R}_2$ if and only if

$f(\mathbf{R}_2 \mid \mathbf{S}_j, \mathbf{R}_1) = f(\mathbf{R}_2 \mid \mathbf{R}_1)$

Proof: in general $p(a, b) = p(a)\, p(b \mid a)$. Thus,

$f(\mathbf{R}_1, \mathbf{R}_2 \mid \mathbf{S}_j) = f(\mathbf{R}_1 \mid \mathbf{S}_j)\, f(\mathbf{R}_2 \mid \mathbf{S}_j, \mathbf{R}_1) = f(\mathbf{R}_1 \mid \mathbf{S}_j)\, f(\mathbf{R}_2 \mid \mathbf{R}_1)$

The last term is common to all hypotheses and can therefore be erased.
Examples
Case A: the receiver observes $\mathbf{R}_1 = \mathbf{S}_j + \mathbf{n}_1$ and $\mathbf{R}_2 = \mathbf{n}_2$, where $\mathbf{n}_2$ is independent of the message and of $\mathbf{n}_1$. Then $f(\mathbf{R}_2 \mid \mathbf{S}_j, \mathbf{R}_1) = f(\mathbf{R}_2)$ does not depend on the hypothesis, so $\mathbf{R}_2$ is irrelevant and the optimum receiver may use $\mathbf{R}_1$ alone.
Case B: (the slide's diagram shows the receiver observing both $\mathbf{R}_1$ and $\mathbf{R}_2$, formed from $\{\mathbf{S}_j\},\ j = 1, \dots, M$, with noises $\mathbf{n}_1$ and $\mathbf{n}_2$)

What about this case
(the slide's diagram shows a further configuration of $\{\mathbf{S}_j\}$, $\mathbf{n}_1$, $\mathbf{n}_2$, and $\mathbf{R}_2$ for discussion)
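Case A can be checked by simulation: when $\mathbf{R}_2$ is pure noise independent of the message, a receiver that processes both observations never decides differently from one that discards $\mathbf{R}_2$. A Python/NumPy sketch (the binary antipodal signals and the noise level are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
s = np.array([1.0, -1.0])               # two scalar hypotheses (illustrative)

for _ in range(2000):
    j = rng.integers(2)
    r1 = s[j] + rng.normal(scale=0.5)   # informative observation
    r2 = rng.normal(scale=0.5)          # pure noise, independent of j: irrelevant
    # joint receiver: the "signal" seen in r2 is 0 under every hypothesis,
    # so the (r2 - 0)^2 term is common to both and cannot change the argmin
    joint = int(np.argmin((r1 - s) ** 2 + (r2 - 0.0) ** 2))
    r1_only = int(np.argmin((r1 - s) ** 2))
    assert joint == r1_only
print("discarding R2 never changes the decision")
```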
