For $R_X(\tau) = \frac{1}{1+\tau^2}$,
$$R_X'(\tau) = \frac{-2\tau}{(1+\tau^2)^2}, \qquad R_{X'}(\tau) = -R_X''(\tau) = \frac{2 - 6\tau^2}{(1+\tau^2)^3}.$$
By the formulas for the conditional expectation of jointly Gaussian random variables and the resulting MSE,
$$E[X'_\tau \mid X_0] = \frac{R_{X',X}(\tau)\,X_0}{R_X(0)} = \frac{-2\tau X_0}{(1+\tau^2)^2},$$
$$\mathrm{MSE} = R_{X'}(0) - \frac{R_{X',X}(\tau)^2}{R_X(0)} = 2 - \frac{4\tau^2}{(1+\tau^2)^4},$$
which is minimized at $\tau = \frac{1}{\sqrt{3}} \approx 0.5774$.
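As a quick numerical sanity check (not part of the original solution), one can minimize the MSE expression above on a grid and compare with $1/\sqrt{3}$:

```python
# Numerical check that MSE(tau) = 2 - 4*tau^2/(1+tau^2)^4 is minimized near 1/sqrt(3).
import numpy as np

tau = np.linspace(0.0, 3.0, 300001)
mse = 2 - 4 * tau**2 / (1 + tau**2) ** 4
print(tau[np.argmin(mse)])  # ~0.5774
print(1 / np.sqrt(3))       # 0.57735...
```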
3 Prediction of future integral of a Gaussian Markov process
(a) Since $J$ is the integral of a Gaussian process, $J$ has a Gaussian distribution. It remains to find the mean and variance. The mean is $E[J] = \int_0^\infty e^{-\alpha t} E[X_t]\,dt = 0$, and
$$\mathrm{Var}(J) = E[J^2] = \int_0^\infty\!\!\int_0^\infty e^{-\alpha s} e^{-\alpha t} e^{-\beta|s-t|}\,ds\,dt = 2\int_0^\infty\!\!\int_0^t e^{-\alpha s} e^{-\alpha t} e^{-\beta(t-s)}\,ds\,dt$$
$$= 2\int_0^\infty e^{-(\alpha+\beta)t}\int_0^t e^{-(\alpha-\beta)s}\,ds\,dt = \frac{2}{\alpha-\beta}\int_0^\infty e^{-(\alpha+\beta)t}\left(1 - e^{-(\alpha-\beta)t}\right)dt$$
$$= \frac{2}{\alpha-\beta}\left(\frac{1}{\alpha+\beta} - \frac{1}{2\alpha}\right) = \frac{1}{\alpha(\alpha+\beta)}.$$
(The above gives the correct answer even if $\alpha = \beta$; one can check this directly or by continuity.)
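As a sanity check (not part of the original solution), the closed form $\mathrm{Var}(J) = \frac{1}{\alpha(\alpha+\beta)}$ can be compared with direct numerical integration of the double integral for sample parameter values:

```python
# Compare the double integral for Var(J) with 1/(alpha*(alpha+beta)).
import numpy as np
from scipy import integrate

alpha, beta = 1.3, 0.7
val, err = integrate.dblquad(
    lambda s, t: np.exp(-alpha * s - alpha * t - beta * abs(s - t)),
    0, np.inf, 0, np.inf,
)
print(val)                           # ~0.3846
print(1 / (alpha * (alpha + beta)))  # 0.3846...
```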
(b) Since $X_0$ and $J$ are jointly Gaussian, we can use the standard formulas for conditional expectation and MSE. Both $J$ and $X_0$ have mean zero,
$$\mathrm{Cov}(J, X_0) = \int_0^\infty e^{-\alpha t} e^{-\beta|t-0|}\,dt = \frac{1}{\alpha+\beta},$$
and $\mathrm{Var}(X_0) = R_X(0) = 1$. So
$$E[J \mid X_0] = \frac{\mathrm{Cov}(J, X_0)}{\mathrm{Var}(X_0)}\,X_0 = \frac{X_0}{\alpha+\beta}$$
and
$$\mathrm{MSE} = \mathrm{Var}(J) - \frac{\mathrm{Cov}(J, X_0)^2}{\mathrm{Var}(X_0)} = \frac{1}{\alpha(\alpha+\beta)} - \left(\frac{1}{\alpha+\beta}\right)^2 = \frac{\beta}{\alpha(\alpha+\beta)^2}.$$
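The final simplification is easy to verify symbolically; a minimal sketch (not part of the original solution):

```python
# Check that 1/(a(a+b)) - 1/(a+b)^2 simplifies to b/(a(a+b)^2).
import sympy as sp

a, b = sp.symbols("alpha beta", positive=True)
mse = 1 / (a * (a + b)) - (1 / (a + b)) ** 2
print(sp.simplify(mse - b / (a * (a + b) ** 2)))  # prints 0
```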
4 A two-state stationary Markov process
(a) Since $E[X_t] = 0$ for each $t$ and since $X_t$ takes values in $\{-1, 1\}$, the distribution of $X_t$ for any $t$ must be $\pi_i = 0.5$ for $i \in \{-1, 1\}$. Solving the Kolmogorov forward equations (see Example 4.9.3) yields the distribution of the process at any time $t \ge 0$ for any given initial distribution:
$$\pi(t) = \pi(0)e^{-2\alpha t} + (0.5, 0.5)(1 - e^{-2\alpha t}).$$
So the transition probability functions are given by
$$p_{ij}(\tau) = \begin{cases} 0.5(1 + e^{-2\alpha\tau}) & \text{if } i = j \\ 0.5(1 - e^{-2\alpha\tau}) & \text{if } i \ne j, \end{cases}$$
so that for $\tau \ge 0$,
$$R_X(\tau) = E[X(\tau)X(0)] = \sum_{i\in\{-1,1\}}\sum_{j\in\{-1,1\}} P\{X(0)=i, X(\tau)=j\}\,ij = \sum_{i\in\{-1,1\}}\sum_{j\in\{-1,1\}} \pi_i\, p_{ij}(\tau)\,ij$$
$$= \frac{1}{4}\left[(1+e^{-2\alpha\tau}) + (1+e^{-2\alpha\tau}) - (1-e^{-2\alpha\tau}) - (1-e^{-2\alpha\tau})\right] = e^{-2\alpha\tau}.$$
Hence for all $\tau$, $R_X(\tau) = e^{-2\alpha|\tau|}$.
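As a sanity check (not part of the original solution), the transition probabilities can be recovered from a matrix exponential, assuming the generator $Q = \begin{pmatrix}-\alpha & \alpha\\ \alpha & -\alpha\end{pmatrix}$ that is consistent with the forward-equation solution above:

```python
# expm(Q*tau) should match p_ij(tau) = 0.5*(1 +/- exp(-2*alpha*tau)).
import numpy as np
from scipy.linalg import expm

a, tau = 0.8, 1.5
Q = np.array([[-a, a], [a, -a]])  # assumed generator, consistent with pi(t) above
P = expm(Q * tau)
print(P[0, 0], 0.5 * (1 + np.exp(-2 * a * tau)))  # equal
print(P[0, 1], 0.5 * (1 - np.exp(-2 * a * tau)))  # equal
```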
(b) For all $\alpha$, because $R_X$ is continuous.
(c) For $\alpha = 0$, because $R_X$ is twice continuously differentiable in a neighborhood of zero if and only if $\alpha = 0$.
(d) For $\alpha > 0$, because $\lim_{\tau\to\infty} R_X(\tau) = 0$ for $\alpha > 0$, while $\lim_{\tau\to\infty} R_X(\tau) = 1 \ne 0$ if $\alpha = 0$.
5 Some Fourier series representations
(a) The coordinates of $f$ are given by $c_i = (f, \phi_i)$ for $i \ge 1$. Integrating, we find $c_1 = \frac{T^{3/2}}{2}$. For $k \ge 1$, we use integration by parts to obtain $c_{2k} = 0$ and $c_{2k+1} = -\frac{\sqrt{2T^3}}{2\pi k}$.
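The problem statement is not reproduced above; the coefficients are consistent with the assumption $f(t) = t$ on $[0, T]$ and the standard trigonometric basis $\phi_1 = 1/\sqrt{T}$, $\phi_{2k}(t) = \sqrt{2/T}\cos(2\pi k t/T)$, $\phi_{2k+1}(t) = \sqrt{2/T}\sin(2\pi k t/T)$. Under that assumption, a quick numerical check (not part of the original solution):

```python
# Check c_1, c_{2k}, c_{2k+1} against the closed forms, assuming f(t) = t on [0, T].
import numpy as np
from scipy.integrate import quad

T, k = 2.0, 3
c1, _ = quad(lambda t: t / np.sqrt(T), 0, T)
c2k, _ = quad(lambda t: t * np.sqrt(2 / T) * np.cos(2 * np.pi * k * t / T), 0, T)
c2k1, _ = quad(lambda t: t * np.sqrt(2 / T) * np.sin(2 * np.pi * k * t / T), 0, T)
print(c1, T**1.5 / 2)                               # equal
print(c2k)                                          # ~0
print(c2k1, -np.sqrt(2 * T**3) / (2 * np.pi * k))   # equal
```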
(b) The best $N$ eigenfunctions to use are the ones with the largest-magnitude coordinates. Thus,
$$f^{(N)}(t) = \frac{T^{3/2}}{2}\phi_1(t) - \sum_{k=1}^{N-1}\frac{\sqrt{2T^3}}{2\pi k}\phi_{2k+1}(t).$$
We find $\|f\|^2 = \int_0^T |f(t)|^2\,dt = \frac{T^3}{3}$ (and we can check that $\sum_{i=1}^\infty c_i^2 = \frac{T^3}{3}$ too). Now
$$\|f - f^{(N)}\|^2 = \sum_{k=N}^\infty c_{2k+1}^2 = \frac{T^3}{2\pi^2}\sum_{k=N}^\infty\frac{1}{k^2},$$
so
$$\frac{\|f - f^{(N)}\|^2}{\|f\|^2} = \frac{3}{2\pi^2}\sum_{k=N}^\infty\frac{1}{k^2} \le \frac{3}{2\pi^2}\int_{N-1}^\infty\frac{dx}{x^2} = \frac{3}{2\pi^2(N-1)} \le 0.01$$
if $N \ge 1 + \frac{3}{2\pi^2(0.01)} = 16.2$, so $N = 17$ suffices.
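A quick numerical check (not part of the original solution) confirms that $N = 17$ meets the target:

```python
# Relative L2 error for N = 17: (3/(2*pi^2)) * sum_{k >= N} 1/k^2.
import numpy as np

N = 17
tail = np.pi**2 / 6 - sum(1 / k**2 for k in range(1, N))  # sum_{k >= N} 1/k^2
print(3 / (2 * np.pi**2) * tail)  # ~0.0092, below the 0.01 target
```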
(c) Without loss of generality, we assume the parameter $\sigma^2$ of the Brownian motion is one. The $N$-dimensional random process closest to $W$ in the mean squared $L^2$ norm sense is obtained by using the $N$ terms of the KL expansion of $W$ with the largest eigenvalues. Note $E[\|W\|^2] = E[\int_0^T W_t^2\,dt] = \int_0^T t\,dt = \frac{T^2}{2}$. The eigenvalues for the KL expansion of $W$ are given by $\lambda_n = \frac{4T^2}{(2n+1)^2\pi^2}$ for $n \ge 0$. Thus,
$$W^{(N)}(t) = \sum_{n=0}^{N-1}(W, \phi_n)\phi_n(t),$$
$$E[\|W - W^{(N)}\|^2] = \sum_{n=N}^\infty \lambda_n = \frac{4T^2}{\pi^2}\sum_{n=N}^\infty\frac{1}{(2n+1)^2},$$
so
$$\frac{E[\|W - W^{(N)}\|^2]}{E[\|W\|^2]} = \frac{8}{\pi^2}\sum_{n=N}^\infty\frac{1}{(2n+1)^2} \le \frac{4}{\pi^2}\int_{2N}^\infty\frac{dx}{x^2} = \frac{2}{\pi^2 N} \le 0.01$$
if $N \ge \frac{2}{\pi^2(0.01)} = 20.26$, so $N = 21$ suffices.
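Again, a quick numerical check (not part of the original solution):

```python
# Relative error for N = 21: (8/pi^2) * sum_{n >= N} 1/(2n+1)^2.
import numpy as np

N = 21
tail = sum(1 / (2 * n + 1) ** 2 for n in range(N, 200_000))
print(8 / np.pi**2 * tail)  # ~0.0094, below the 0.01 target
```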
6 First order differential equation driven by Gaussian white noise
(a) Since $\mu_N \equiv 0$, it follows that $\mu_X(t) = x_0 e^{-\lambda t}$. The covariance function of $X$ is given by
$$C_X(s,t) = \int_0^s\!\!\int_0^t e^{-\lambda(s-u)} e^{-\lambda(t-v)} \sigma^2 \delta(u-v)\,dv\,du$$
$$= \int_0^s e^{-\lambda(s-u)} e^{-\lambda(t-u)} \sigma^2\,du = \sigma^2 e^{-\lambda(s+t)}\int_0^s e^{2\lambda u}\,du \qquad (\text{for } s \le t)$$
$$= \frac{\sigma^2}{2\lambda}\left(e^{-\lambda(t-s)} - e^{-\lambda(s+t)}\right).$$
By the symmetry of $C_X$, it is given in general by
$$C_X(s,t) = \frac{\sigma^2}{2\lambda}\left(e^{-\lambda|t-s|} - e^{-\lambda(t+s)}\right).$$
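The inner integral can be verified symbolically; a minimal sketch (not part of the original solution), using the $\lambda$, $\sigma$ notation from (a):

```python
# Verify the covariance integral for s <= t against the closed form.
import sympy as sp

u, s, t, lam, sig = sp.symbols("u s t lambda sigma", positive=True)
cov = sp.integrate(sp.exp(-lam * (s - u)) * sp.exp(-lam * (t - u)) * sig**2, (u, 0, s))
closed = sig**2 / (2 * lam) * (sp.exp(-lam * (t - s)) - sp.exp(-lam * (s + t)))
print(sp.simplify(cov - closed))  # prints 0
```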
(b) Let $r < s < t$. It must be checked that
$$\frac{C_X(r,s)\,C_X(s,t)}{C_X(s,s)} = C_X(r,t),$$
or $(e^{-\lambda(s-r)} - e^{-\lambda(s+r)})(e^{-\lambda(t-s)} - e^{-\lambda(t+s)}) = (e^{-\lambda(t-r)} - e^{-\lambda(t+r)})(1 - e^{-2\lambda s})$, which is easily done by expanding both sides.
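The expansion is routine; a symbolic check (not part of the original solution):

```python
# Expand both sides of the Markov-property identity and confirm they agree.
import sympy as sp

r, s, t, lam = sp.symbols("r s t lambda", positive=True)
E = sp.exp
lhs = (E(-lam * (s - r)) - E(-lam * (s + r))) * (E(-lam * (t - s)) - E(-lam * (t + s)))
rhs = (E(-lam * (t - r)) - E(-lam * (t + r))) * (1 - E(-2 * lam * s))
print(sp.simplify(sp.expand(lhs - rhs)))  # prints 0
```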
(c) As $t \to \infty$, $\mu_X(t) \to 0$ and $C_X(t+\tau, t) \to \frac{\sigma^2}{2\lambda} e^{-\lambda|\tau|}$.