
ECE 531 - Detection and Estimation Theory

Homework 5
February 22, 2011

7.18 (Luke Vercimak) Newton-Raphson






The function under consideration is

    g(x) = \exp\left(-\tfrac{1}{2}x^2\right) + 0.1\,\exp\left(-\tfrac{1}{2}(x-10)^2\right)

Its first and second derivatives are

    g'(x) = -x\,\exp\left(-\tfrac{1}{2}x^2\right) + 0.1\,(10-x)\,\exp\left(-\tfrac{1}{2}(x-10)^2\right)

    g''(x) = (x^2-1)\,\exp\left(-\tfrac{1}{2}x^2\right) + 0.1\left[(10-x)^2-1\right]\exp\left(-\tfrac{1}{2}(x-10)^2\right)
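As a quick symbolic sanity check of these derivatives (not part of the original solution; assumes SymPy is available):

```python
import sympy as sp

x = sp.symbols('x')
g = sp.exp(-x**2 / 2) + sp.Rational(1, 10) * sp.exp(-(x - 10)**2 / 2)

g1 = sp.simplify(sp.diff(g, x))      # g'(x)
g2 = sp.simplify(sp.diff(g, x, 2))   # g''(x)
print(g1)
print(g2)
```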
The Newton-Raphson method finds the zeros of a function f:

    x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)}

We want to find the zeros of g'(x). Therefore,

    x_{k+1} = x_k - \frac{g'(x_k)}{g''(x_k)}
Using a computer to carry out the iteration, starting from x_0 = 0.5:

    k   x_k        g(x_k)   g'(x_k)   g''(x_k)
    0    0.5000    0.8825   -0.4412   -0.6619
    1   -0.1667    0.9862    0.1644   -0.9588
    2    0.0048    1.0000   -0.0048   -1.0000
    3    0.0000    1.0000    0.0000   -1.0000
    4    0.0000    1.0000    0.0000   -1.0000

Starting from x_0 = 9.5:

    k   x_k        g(x_k)   g'(x_k)   g''(x_k)
    0    9.5000    0.0882    0.0441   -0.0662
    1   10.1667    0.0986   -0.0164   -0.0959
    2    9.9952    0.1000    0.0005   -0.1000
    3   10.0000    0.1000    0.0000   -0.1000

It is important that the initial guess be close to the critical point we wish to find. Otherwise the method converges to whichever maximum or minimum of g lies nearest the initial guess.
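A minimal Python sketch of this computation (not from the original write-up; assumes NumPy, and the names g, g1, g2, and newton_raphson are just illustrative):

```python
import numpy as np

def g(x):
    # g(x) = exp(-x^2/2) + 0.1*exp(-(x-10)^2/2)
    return np.exp(-0.5 * x**2) + 0.1 * np.exp(-0.5 * (x - 10)**2)

def g1(x):
    # first derivative g'(x)
    return -x * np.exp(-0.5 * x**2) + 0.1 * (10 - x) * np.exp(-0.5 * (x - 10)**2)

def g2(x):
    # second derivative g''(x)
    return (x**2 - 1) * np.exp(-0.5 * x**2) + 0.1 * ((10 - x)**2 - 1) * np.exp(-0.5 * (x - 10)**2)

def newton_raphson(x0, iters=5):
    # Iterate x_{k+1} = x_k - g'(x_k)/g''(x_k), printing one table row per step.
    x = x0
    for k in range(iters):
        print(f"{k}  x={x: .4f}  g={g(x):.4f}  g'={g1(x): .4f}  g''={g2(x): .4f}")
        x = x - g1(x) / g2(x)
    return x

newton_raphson(0.5)   # converges to the critical point near x = 0
newton_raphson(9.5)   # converges to the critical point near x = 10
```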
7.20 (Luke Vercimak) Given x[n] = s[n] + w[n], where w[n] ~ N(0, σ²), determine the MLE of s[n]. Since nothing is known about s[n], we cannot infer anything about s[n + k] from x[n]. Because we cannot exploit any relationship among the values of s[n], the best we can do is treat each x[n] on its own, which gives a worst-case estimate. (The joint distribution provides no additional information beyond the individual distributions.)


    \ln p(x[n]; s[n]) = \ln \frac{1}{\sqrt{2\pi\sigma^2}} - \frac{1}{2\sigma^2}\left(x[n] - s[n]\right)^2

Differentiating this with respect to s[n] and setting the result equal to 0, we obtain

    \hat{s}[n] = x[n]
This makes sense because we have no information about s[n] other than x[n]. The estimate has the PDF \hat{s}[n] \sim \mathcal{N}(s[n], \sigma^2).
1. Is the MLE asymptotically unbiased? The estimator does not change with increasing N, so it is asymptotically unbiased if and only if it is unbiased:

    E[\hat{s}[n]] = E[x[n]] = E[s[n] + w[n]] = s[n] + E[w[n]] = s[n]

Therefore the estimator is unbiased, and hence asymptotically unbiased.
2. Is the MLE asymptotically efficient? The estimator does not depend on N, so it is asymptotically efficient if and only if it is efficient. The score factors into the form required for the CRLB to be attained:

    \frac{\partial \ln p(x[n]; s[n])}{\partial s[n]} = \frac{1}{\sigma^2}\left(x[n] - s[n]\right) = I(\theta)\left(g(x) - \theta\right)

with \theta = s[n], I(\theta) = 1/\sigma^2, and g(x) = x[n]. Therefore x[n] is an efficient estimator of s[n].


3. Is the MLE asymptotically Gaussian? \hat{s}[n] = x[n] is Gaussian because it is the sum of a constant and a Gaussian RV. Therefore the MLE is Gaussian for every N, and hence asymptotically Gaussian.
4. Is the MLE asymptotically consistent? The estimate does not converge as N → ∞; its variance remains σ² regardless of N. Therefore the estimate is not consistent.
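A small Monte-Carlo sketch of these conclusions (assumes NumPy; the values of s[n], σ², and the trial count are arbitrary illustrative choices): the empirical mean of \hat{s}[n] = x[n] matches s[n], while its variance stays at σ², illustrating unbiasedness without consistency.

```python
import numpy as np

rng = np.random.default_rng(0)
s_true, sigma2, trials = 3.0, 2.0, 100_000   # illustrative values

# Each trial observes a single x[n] = s[n] + w[n]; the MLE is s_hat = x[n].
x = s_true + rng.normal(0.0, np.sqrt(sigma2), size=trials)
s_hat = x

print("mean of s_hat:", s_hat.mean())   # ~ s_true  -> unbiased
print("var  of s_hat:", s_hat.var())    # ~ sigma2  -> does not shrink with more data per s[n]
```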
8.5 (Luke Vercimak + Natasha Devroye) DCT Estimation
Given:

    s[n] = \sum_{i=1}^{p} A_i \cos(2\pi f_i n), \qquad n = 0, 1, \ldots, N-1

Determine:

1. Find the LSE normal equations. The model above is a linear model and can be put into the form s = H\theta. Therefore,

    H = \begin{bmatrix}
          1                   & 1                   & \cdots & 1 \\
          \cos 2\pi f_1 (1)   & \cos 2\pi f_2 (1)   & \cdots & \cos 2\pi f_p (1) \\
          \vdots              & \vdots              & \ddots & \vdots \\
          \cos 2\pi f_1 (N-1) & \cos 2\pi f_2 (N-1) & \cdots & \cos 2\pi f_p (N-1)
        \end{bmatrix},
    \qquad
    \theta = \begin{bmatrix} A_1 \\ A_2 \\ \vdots \\ A_p \end{bmatrix}

Per the book's results, the normal equations are

    H^T H \hat{\theta} = H^T s
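A short numerical sketch of setting up and solving these normal equations (assumes NumPy; the frequencies and amplitudes below are arbitrary illustrative values, not from the problem):

```python
import numpy as np

N, p = 32, 3
f = np.array([0.05, 0.12, 0.31])      # illustrative frequencies
A = np.array([1.0, -0.5, 2.0])        # illustrative amplitudes
n = np.arange(N)

# Observation matrix: [H]_{n,i} = cos(2*pi*f_i*n)
H = np.cos(2 * np.pi * n[:, None] * f[None, :])
s = H @ A

# Normal equations: (H^T H) theta_hat = H^T s
theta_hat = np.linalg.solve(H.T @ H, H.T @ s)
print(theta_hat)   # recovers A (up to round-off) since s is noiseless
```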
2. Given that the frequencies are f_i = i/N, explicitly find the LSE and the minimum LSE error.

    H = \begin{bmatrix}
          1                           & 1                           & \cdots & 1 \\
          \cos 2\pi \frac{1}{N} (1)   & \cos 2\pi \frac{2}{N} (1)   & \cdots & \cos 2\pi \frac{p}{N} (1) \\
          \vdots                      & \vdots                      & \ddots & \vdots \\
          \cos 2\pi \frac{1}{N} (N-1) & \cos 2\pi \frac{2}{N} (N-1) & \cdots & \cos 2\pi \frac{p}{N} (N-1)
        \end{bmatrix}
The columns of this matrix are orthogonal. Because of this,

    H^T H = \begin{bmatrix}
              \frac{N}{2} & 0           & \cdots & 0 \\
              0           & \frac{N}{2} & \cdots & 0 \\
              \vdots      & \vdots      & \ddots & \vdots \\
              0           & 0           & \cdots & \frac{N}{2}
            \end{bmatrix}
          = \frac{N}{2}
            \begin{bmatrix}
              1      & 0      & \cdots & 0 \\
              0      & 1      & \cdots & 0 \\
              \vdots & \vdots & \ddots & \vdots \\
              0      & 0      & \cdots & 1
            \end{bmatrix}
          = \frac{N}{2}\, I
Solving the normal equations for \hat{\theta}, we get

    \hat{\theta} = (H^T H)^{-1} H^T s = \frac{2}{N}\, I\, H^T s = \frac{2}{N} H^T s
Converting this back into scalar form results in the LSE estimator

    \hat{A}_i = \frac{2}{N} \sum_{n=0}^{N-1} s[n] \cos\!\left( 2\pi \frac{i n}{N} \right)
To find the minimum LSE error, use the result found in eq. (8.13):

    J_{\min} = s^T (s - H\hat{\theta})
             = s^T s - s^T H \hat{\theta}
             = s^T s - \frac{2}{N} s^T H H^T s

Since s = H\theta exactly, we have \frac{2}{N} H^T s = \frac{2}{N} H^T H \theta = \theta, so that H\hat{\theta} = H\theta = s and

    J_{\min} = s^T s - s^T s = 0

Because the signal follows the linear model exactly (there is no noise in s), the LSE gives exact estimates of the parameters and reconstructs the signal in its entirety.
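A quick numerical check of these results under the assumption f_i = i/N (assumes NumPy; N, p, and the amplitudes are arbitrary illustrative choices):

```python
import numpy as np

N, p = 64, 4
A = np.array([1.0, 0.5, -2.0, 0.25])                  # illustrative amplitudes
n, i = np.arange(N), np.arange(1, p + 1)

H = np.cos(2 * np.pi * n[:, None] * i[None, :] / N)   # f_i = i/N
s = H @ A

print(np.allclose(H.T @ H, (N / 2) * np.eye(p)))      # columns orthogonal: H^T H = (N/2) I
A_hat = (2 / N) * (H.T @ s)                           # scalar-form LSE
print(np.allclose(A_hat, A))                          # exact recovery of the amplitudes
print(np.allclose(s - H @ A_hat, 0))                  # J_min = 0
```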

3. Finally, if x[n] = s[n] + w[n], where w[n] is WGN with variance σ², determine the PDF of the LSE, assuming the given frequencies.
Because of the above result for J_min, any error in the estimate is entirely due to w[n]. The estimator doesn't change in this case and is
    \hat{A}_i = \frac{2}{N} \sum_{n=0}^{N-1} x[n] \cos\!\left(2\pi\frac{in}{N}\right)
             = \frac{2}{N} \sum_{n=0}^{N-1} \left(s[n] + w[n]\right) \cos\!\left(2\pi\frac{in}{N}\right)
             = \frac{2}{N} \sum_{n=0}^{N-1} s[n] \cos\!\left(2\pi\frac{in}{N}\right) + \frac{2}{N} \sum_{n=0}^{N-1} w[n] \cos\!\left(2\pi\frac{in}{N}\right)

Taking the expectation,

    E[\hat{A}_i] = \frac{2}{N} \sum_{n=0}^{N-1} s[n] \cos\!\left(2\pi\frac{in}{N}\right) + \frac{2}{N} \sum_{n=0}^{N-1} E[w[n]] \cos\!\left(2\pi\frac{in}{N}\right)
                 = \frac{2}{N} \sum_{n=0}^{N-1} s[n] \cos\!\left(2\pi\frac{in}{N}\right) + 0
                 = A_i

For the variance, only the noise term contributes:

    \mathrm{var}[\hat{A}_i] = \mathrm{var}\!\left[\frac{2}{N} \sum_{n=0}^{N-1} w[n] \cos\!\left(2\pi\frac{in}{N}\right)\right]
                            = \frac{4}{N^2} \sum_{n=0}^{N-1} \cos^2\!\left(2\pi\frac{in}{N}\right) \mathrm{var}[w[n]]
                            = \frac{4\sigma^2}{N^2} \sum_{n=0}^{N-1} \frac{1 + \cos\!\left(4\pi\frac{in}{N}\right)}{2}
                            = \frac{4\sigma^2}{N^2} \cdot \frac{N}{2}
                            = \frac{2\sigma^2}{N}

(the cosine sum vanishes for 1 \le i \le p < N/2). Furthermore, \mathrm{cov}(\hat{A}_i, \hat{A}_j) = \frac{2\sigma^2}{N}\,\delta_{ij}. Each \hat{A}_i is the sum of a constant (the signal term) and a weighted sum of Gaussian RVs. Therefore the distribution of \hat{A} is Gaussian:

    \hat{A} \sim \mathcal{N}\!\left(A,\ \frac{2\sigma^2}{N}\, I\right)
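A Monte-Carlo sketch of this distributional result (assumes NumPy; N, p, σ², the amplitudes, and the trial count are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
N, p, sigma2, trials = 64, 3, 0.5, 20_000              # illustrative values
A = np.array([1.0, -0.5, 2.0])
n, i = np.arange(N), np.arange(1, p + 1)

H = np.cos(2 * np.pi * n[:, None] * i[None, :] / N)    # f_i = i/N
s = H @ A

# LSE A_hat = (2/N) H^T x for each noisy realization x = s + w
w = rng.normal(0.0, np.sqrt(sigma2), size=(trials, N))
A_hat = (2 / N) * (s + w) @ H                           # shape (trials, p)

print(A_hat.mean(axis=0))          # ~ A             (unbiased)
print(A_hat.var(axis=0))           # ~ 2*sigma2/N    (per-component variance)
print(2 * sigma2 / N)
```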

8.10 (Shu Wang) Prove that \|\hat{s}\|^2 + \|x - \hat{s}\|^2 = \|x\|^2.

We suppose that \hat{s} = H\hat{\theta}. Then we have

    \|\hat{s}\|^2 = \hat{s}^T \hat{s} = \hat{\theta}^T H^T H \hat{\theta}

    \|x - \hat{s}\|^2 = (x - H\hat{\theta})^T (x - H\hat{\theta}) = x^T x - x^T H \hat{\theta} - \hat{\theta}^T H^T x + \hat{\theta}^T H^T H \hat{\theta}

    \|x\|^2 = x^T x

Adding the first two expressions,

    \|\hat{s}\|^2 + \|x - \hat{s}\|^2 = x^T x + \left[\hat{\theta}^T H^T H \hat{\theta} - x^T H \hat{\theta} - \hat{\theta}^T H^T x + \hat{\theta}^T H^T H \hat{\theta}\right]
        = x^T x - \left[x^T H \hat{\theta} + \hat{\theta}^T H^T x - 2 \hat{\theta}^T H^T H \hat{\theta}\right]
        = x^T x - \left[(x^T - \hat{\theta}^T H^T) H \hat{\theta} + \hat{\theta}^T H^T (x - H \hat{\theta})\right]
        = x^T x - \left[(x - H\hat{\theta})^T H \hat{\theta} + \hat{\theta}^T H^T (x - H\hat{\theta})\right]

From the normal equations, H^T (x - H\hat{\theta}) = 0, and hence also (x - H\hat{\theta})^T H = 0.

So \|\hat{s}\|^2 + \|x - \hat{s}\|^2 = x^T x = \|x\|^2.
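A numerical sanity check of the identity (assumes NumPy; H and x are random illustrative data rather than anything from the problem):

```python
import numpy as np

rng = np.random.default_rng(2)
N, p = 20, 4
H = rng.normal(size=(N, p))                      # arbitrary full-rank observation matrix
x = rng.normal(size=N)

theta_hat = np.linalg.solve(H.T @ H, H.T @ x)    # LSE
s_hat = H @ theta_hat

lhs = np.dot(s_hat, s_hat) + np.dot(x - s_hat, x - s_hat)
rhs = np.dot(x, x)
print(np.isclose(lhs, rhs))                      # True: ||s_hat||^2 + ||x - s_hat||^2 = ||x||^2
```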
