Bernd Funovits
November 4, 2016
1. Let
$$A = \begin{pmatrix} 1 & 2 \\ 3 & 3 \end{pmatrix}, \qquad B = \begin{pmatrix} 0 & 8 \\ 6 & 3 \end{pmatrix}, \qquad J = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}.$$
(a) Calculate $JA$ and $AJ$.
(b) Calculate $JB$ and $BJ$.
(c) How can one interpret the multiplication with the matrix $J = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$ in a) and b)?
(d) Calculate $AB$ and write each line (column) of this matrix as a linear combination of the lines of $B$ (columns of $A$).
(e) Compute $(AB)'$ and $B'A'$ and check that $(AB)' = B'A'$ holds.
(f) For $C = (1, 2, 3)'$ and $A = \begin{pmatrix} 1 & 2 & 1 \\ 0 & 1 & 4 \end{pmatrix}$, calculate $(AC)'$ and $C'A'$ and check that $(AC)' = C'A'$ holds.
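The interpretation asked for in (c) and the transpose rule in (e) can be checked numerically. A minimal NumPy sketch; the matrix entries are only illustrative:

```python
import numpy as np

# Illustrative 2x2 matrices
A = np.array([[1, 2], [3, 3]])
B = np.array([[0, 8], [6, 3]])
J = np.array([[0, 1], [1, 0]])

JA = J @ A   # multiplying by J from the left swaps the rows of A
AJ = A @ J   # multiplying by J from the right swaps the columns of A

assert (JA == A[[1, 0], :]).all()   # rows permuted
assert (AJ == A[:, [1, 0]]).all()   # columns permuted

# (e) the transpose of a product reverses the order of the factors
assert ((A @ B).T == B.T @ A.T).all()
```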
2. A matrix $A$ which satisfies $AA' = A'A$ (which holds in particular for symmetric matrices) can be decomposed as $A = QDQ'$, where $D$ is a diagonal matrix and $Q$ is an orthogonal matrix, i.e. its columns are orthogonal to each other and have length 1.

3. For
$$A = \begin{pmatrix} 3 & 1 \\ 1 & 3 \end{pmatrix},$$
calculate the eigenvalues $\lambda_1, \lambda_2$ of $A$ from $\det(A - \lambda I_2) = 0$ and the corresponding eigenvectors from
$$\begin{pmatrix} 3 - \lambda & 1 \\ 1 & 3 - \lambda \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix},$$
and verify that
$$A = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} \begin{pmatrix} 4 & 0 \\ 0 & 2 \end{pmatrix} \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}.$$
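A quick numerical check of this decomposition — a NumPy sketch; `np.roots` solves the characteristic equation $\lambda^2 - 6\lambda + 8 = 0$ obtained from $\det(A - \lambda I_2) = 0$:

```python
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 3.0]])

# Eigenvalues as roots of the characteristic polynomial
lams = np.roots([1, -6, 8])   # lambda^2 - 6*lambda + 8 = 0

# Orthonormal eigenvectors and eigenvalue matrix
Q = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
D = np.diag([4.0, 2.0])

assert np.allclose(Q @ Q.T, np.eye(2))   # Q is orthogonal
assert np.allclose(Q @ D @ Q.T, A)       # A = Q D Q'
```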
4. For $x, y \in \mathbb{R}^N$ define the scalar product $\langle x, y \rangle = x'y$, where the prime denotes the transpose of a matrix. Show that
(a) $\langle x, y \rangle = \langle y, x \rangle$,
(b) $\langle ax + bz, y \rangle = a \langle x, y \rangle + b \langle z, y \rangle$ for $a, b \in \mathbb{R}$ and $x, y, z \in \mathbb{R}^N$.
Define the norm $\|x\| = \sqrt{\langle x, x \rangle}$. Show that $|\langle x, y \rangle| \leq \|x\| \, \|y\|$ (Cauchy-Schwarz inequality) and that $\|x + y\| \leq \|x\| + \|y\|$ holds for $x, y \in \mathbb{R}^N$ (triangle inequality). Show moreover that if $\langle x, y \rangle = 0$, then $\|x + y\|^2 = \|x\|^2 + \|y\|^2$ (Pythagoras' theorem).
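These inequalities can be spot-checked on random vectors. A NumPy sketch; the dimension $N = 5$ and the random seed are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.normal(size=5), rng.normal(size=5)

dot = x @ y                          # <x, y> = x'y
norm = lambda v: np.sqrt(v @ v)      # ||v|| = sqrt(<v, v>)

assert abs(dot) <= norm(x) * norm(y) + 1e-12     # Cauchy-Schwarz
assert norm(x + y) <= norm(x) + norm(y) + 1e-12  # triangle inequality

# Pythagoras: project y onto the complement of x to get <x, y_perp> = 0
y_perp = y - (x @ y) / (x @ x) * x
assert np.isclose(norm(x + y_perp) ** 2, norm(x) ** 2 + norm(y_perp) ** 2)
```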
5. Show that
$$\sum_{i=1}^{N} x_i x_i' = X'X,$$
where $x_i \in \mathbb{R}^{K \times 1}$ and
$$X = \begin{pmatrix} x_1' \\ \vdots \\ x_N' \end{pmatrix} \in \mathbb{R}^{N \times K}.$$
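A numerical check of this identity, assuming illustrative dimensions $N = 6$, $K = 3$:

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 6, 3
X = rng.normal(size=(N, K))   # the rows of X are the x_i'

# sum_i x_i x_i' : each outer product is a K x K matrix
S = sum(np.outer(X[i], X[i]) for i in range(N))

assert np.allclose(S, X.T @ X)
```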
6. In the lecture, we introduced the projection matrix $P_X = X(X'X)^{-1}X'$, which projects a vector $v \in \mathbb{R}^{N \times 1}$ onto the space spanned by the columns of $X$. Similarly, we introduced $M_X = I_N - X(X'X)^{-1}X'$, which projects onto the orthogonal complement of the space above.
(a) Verify that $P_X$ and $M_X$ are indeed symmetric and idempotent ($P_X P_X = P_X$).
(b) Verify that $P_X M_X = 0$ and $M_X P_X = 0$.
(c) Define $\hat{y} = P_X y$ and $e = M_X y$ (the residual).
   i. Show that the square of the length of $y \in \mathbb{R}^{N \times 1}$, i.e. the scalar product $\langle y, y \rangle = y'y = \sum_{i=1}^{N} y_i^2$, can be written as $\langle y, y \rangle = y' P_X y + y' M_X y = \langle \hat{y}, \hat{y} \rangle + \langle e, e \rangle$ by using the properties of $P_X$ and $M_X$.
(d) Show that for $\iota = (1, \ldots, 1)' \in \mathbb{R}^{N \times 1}$, the matrix $I_N - \iota(\iota'\iota)^{-1}\iota'$ is a projection and interpret it.
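The projection identities can be verified numerically. A NumPy sketch with an arbitrary $8 \times 3$ design matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
N, K = 8, 3
X = rng.normal(size=(N, K))
y = rng.normal(size=N)

PX = X @ np.linalg.inv(X.T @ X) @ X.T
MX = np.eye(N) - PX

# (a) symmetric and idempotent
assert np.allclose(PX, PX.T) and np.allclose(PX @ PX, PX)
assert np.allclose(MX, MX.T) and np.allclose(MX @ MX, MX)

# (b) the two projections annihilate each other
assert np.allclose(PX @ MX, 0) and np.allclose(MX @ PX, 0)

# (c) ||y||^2 = ||yhat||^2 + ||e||^2
yhat, e = PX @ y, MX @ y
assert np.isclose(y @ y, yhat @ yhat + e @ e)

# (d) I_N - iota(iota'iota)^{-1}iota' subtracts the mean from a vector
iota = np.ones(N)
M_iota = np.eye(N) - np.outer(iota, iota) / N
assert np.allclose(M_iota @ y, y - y.mean())
```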
7. We know that orthogonal projections are symmetric and thus have an eigenvalue decomposition $P = VDV'$, where $D \in \mathbb{R}^{n \times n}$ is a diagonal matrix and $V \in \mathbb{R}^{n \times n}$ is an orthogonal matrix, i.e. $V'V = VV' = I_n$. Remember that $\lambda$ is an eigenvalue of $A \in \mathbb{R}^{n \times n}$ corresponding to an eigenvector $v \in \mathbb{R}^{n \times 1}$, $v \neq 0$, if $Av = \lambda v$. Show that the eigenvalues of an orthogonal projection $P$ (for which $PP = P$ holds) are either zero or one.
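A numerical illustration: the eigenvalues of a projection built from an arbitrary design matrix are, up to rounding, zero or one:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(7, 2))
P = X @ np.linalg.inv(X.T @ X) @ X.T   # orthogonal projection onto col(X)

eigvals = np.linalg.eigvalsh(P)        # P is symmetric, so eigvalsh applies
# every eigenvalue satisfies lambda * (lambda - 1) = 0
assert np.allclose(eigvals * (eigvals - 1), 0, atol=1e-8)
```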
8. Trace: The trace of a square matrix $A$ is defined as the sum of its diagonal elements, i.e.
$$\mathrm{tr}(A) = \sum_{i=1}^{n} a_{ii}.$$
(a) Cyclic permutation invariance of the trace: Verify that for two matrices $C \in \mathbb{R}^{2 \times 3}$ and $D \in \mathbb{R}^{3 \times 2}$, $\mathrm{tr}(CD) = \mathrm{tr}(DC)$ holds. (Of course, this property holds more generally for all matrices of appropriate dimensions.)
(b) Linearity of the trace: Show that for $A, B \in \mathbb{R}^{n \times n}$ and $\alpha \in \mathbb{R}$, $\mathrm{tr}(\alpha A) = \alpha \, \mathrm{tr}(A)$ and $\mathrm{tr}(A + B) = \mathrm{tr}(A) + \mathrm{tr}(B)$.
(c) Let $A \in \mathbb{R}^{n \times n}$ be symmetric with eigenvalue decomposition $A = VDV'$, where $D \in \mathbb{R}^{n \times n}$ is a diagonal matrix and $V \in \mathbb{R}^{n \times n}$ is an orthogonal matrix, i.e. $V'V = VV' = I_n$. Show that
$$\mathrm{tr}(A) = \sum_{i=1}^{n} d_{ii}.$$
   i. Hint: Use that $\mathrm{tr}(AB) = \mathrm{tr}(BA)$.
   ii. Comment: This property is used very often in the lecture. For example, we use it to obtain $E(e'e) = E\left(\sum_{i=1}^{n} e_i^2\right)$.
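Properties (a)-(c) can be spot-checked with random matrices. A NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(4)
C = rng.normal(size=(2, 3))
D = rng.normal(size=(3, 2))
A = rng.normal(size=(4, 4))
B = rng.normal(size=(4, 4))
alpha = 2.5

# (a) cyclic invariance and (b) linearity of the trace
assert np.isclose(np.trace(C @ D), np.trace(D @ C))
assert np.isclose(np.trace(alpha * A), alpha * np.trace(A))
assert np.isclose(np.trace(A + B), np.trace(A) + np.trace(B))

# (c) for a symmetric matrix, tr(A) equals the sum of its eigenvalues
S = A + A.T   # symmetrize A
assert np.isclose(np.trace(S), np.linalg.eigvalsh(S).sum())
```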
(d) Trace of an orthogonal projection equals its rank: Use the preceding exercise to show that for an orthogonal projection $P$ the equation $\mathrm{tr}(P) = \mathrm{rank}(P)$ holds.
   i. Hint: Remember that the eigenvalues of orthogonal projections are either zero or one.
   ii. Comments: This property is used extensively when we derive the degrees of freedom of distributions (for obtaining test statistics) and to obtain an unbiased estimator for the variance.

9. Positive-(semi)definite matrices: A symmetric matrix $A$ is called positive definite if the inequality $v'Av > 0$ holds for all $v \neq 0$. Show that all eigenvalues of a positive definite matrix are positive: in its eigenvalue decomposition, assume that there is one negative eigenvalue and construct a vector $v$ for which $v'Av \leq 0$.
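A small numerical illustration of both directions, using hand-picked $2 \times 2$ matrices (not from the exercise):

```python
import numpy as np

# A positive definite matrix: all eigenvalues are positive
A = np.array([[2.0, 1.0], [1.0, 2.0]])
assert (np.linalg.eigvalsh(A) > 0).all()

# Conversely, a negative eigenvalue yields a vector with v'Bv < 0
B = np.array([[1.0, 2.0], [2.0, 1.0]])   # eigenvalues -1 and 3
lam, V = np.linalg.eigh(B)               # ascending eigenvalue order
v = V[:, 0]                              # eigenvector of lambda = -1
assert lam[0] < 0 and v @ B @ v < 0      # v'Bv = lambda * ||v||^2 < 0
```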
10. Minimize the sum of squared residuals
$$S(\beta) = (y - X\beta)'(y - X\beta)$$
with respect to $\beta$ by using orthogonality arguments. Hint: Rewrite the term
$$y - X\beta = \left[ y - Xb \right] - \left[ X\beta - Xb \right],$$
where $b = (X'X)^{-1}X'y$, and use that $y - Xb = M_X y$ is orthogonal to every vector of the form $X\beta - Xb$.
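The orthogonality argument can be illustrated numerically: the residual $y - Xb$ is orthogonal to the columns of $X$, so no other $\beta$ achieves a smaller $S(\beta)$. A sketch, assuming the usual OLS formula $b = (X'X)^{-1}X'y$:

```python
import numpy as np

rng = np.random.default_rng(5)
N, K = 20, 3
X = rng.normal(size=(N, K))
y = rng.normal(size=N)

b = np.linalg.solve(X.T @ X, X.T @ y)          # b = (X'X)^{-1} X'y
S = lambda beta: (y - X @ beta) @ (y - X @ beta)

# residuals are orthogonal to the columns of X ...
assert np.allclose(X.T @ (y - X @ b), 0)

# ... hence S(beta) = S(b) + ||X(beta - b)||^2 >= S(b) for any beta
for _ in range(100):
    beta = b + rng.normal(size=K)
    assert S(beta) >= S(b)
```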
11. Let $x$ and $y$ be $m$- and $n$-dimensional random vectors and let $A$ and $B$ be constant matrices of appropriate dimensions. The covariance and the variance of random vectors are defined as
$$C(x, y) = E\left[ (x - E(x))(y - E(y))' \right] \in \mathbb{R}^{m \times n},$$
$$V(x) = E\left[ (x - E(x))(x - E(x))' \right] \in \mathbb{R}^{m \times m}.$$
Repeat the following:
(a) Linearity of expectations: Show that
   i. …
   ii. …
   iii. …
(b) Show that
   i. …
   ii. …
(c) If $x$ and $y$ have the same dimension and $C(x, y) = 0$, then $V(x + y) = V(x) + V(y)$.
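Property (c) can be verified exactly for small discrete distributions; the supports and probabilities below are made up for illustration, and expectations are computed by enumeration rather than simulation:

```python
import numpy as np
from itertools import product

# Two independent scalar random variables with small discrete supports
xs, px = np.array([0.0, 1.0, 3.0]), np.array([0.2, 0.5, 0.3])
ys, py = np.array([-1.0, 2.0]), np.array([0.4, 0.6])

# Exact expectation of f(x, y) under the (independent) joint distribution
E = lambda f: sum(f(x, y) * p * q
                  for (x, p), (y, q) in product(zip(xs, px), zip(ys, py)))

Ex, Ey = E(lambda x, y: x), E(lambda x, y: y)
Cxy = E(lambda x, y: (x - Ex) * (y - Ey))          # covariance
Vx = E(lambda x, y: (x - Ex) ** 2)                 # variance of x
Vy = E(lambda x, y: (y - Ey) ** 2)                 # variance of y
Vsum = E(lambda x, y: (x + y - Ex - Ey) ** 2)      # variance of x + y

assert np.isclose(Cxy, 0)          # independence implies zero covariance
assert np.isclose(Vsum, Vx + Vy)   # so the variances add
```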
12. Repeat the following definitions and facts (for example from Gourieroux):
(a) Two events $A$ and $B$ are independent if $P(A \cap B) = P(A) \cdot P(B)$.
(b) Two random variables $x$ and $y$ are independent if for all events $A, B$ in a probability space $P([x \in A] \cap [y \in B]) = P([x \in A]) \cdot P([y \in B])$.
(c) If $x$ and $y$ are independent, then $g(x)$ and $h(y)$ are independent for functions $g(\cdot)$ and $h(\cdot)$.
(d) If $x$ and $y$ are independent, then $E(g(x) h(y)) = E(g(x)) \cdot E(h(y))$ for functions $g(\cdot)$ and $h(\cdot)$.
Note that 1) if $\varepsilon$ and $X$ are independent, then $E(\varepsilon | X) = E(\varepsilon)$, so that $E(\varepsilon) = 0$ implies $E(\varepsilon | X) = 0$, and 2) $E[E(\varepsilon | X)] = E(\varepsilon)$, so that $E(\varepsilon | X) = 0$ implies $E(\varepsilon) = 0$. One needs to know a few facts about conditional expectations in order to understand these statements. Without giving a precise definition of a conditional expectation, we use the following properties: for a constant vector $a$, constant matrices $A, B$ and random vectors $X, Y, Z$ we have
(a) $E(a + AX + BY \,|\, Z) = a + A E(X|Z) + B E(Y|Z)$,
(b) if $E(y|x) = 0$, then $E(y \cdot g(x)) = 0$ for any function $g(\cdot)$.
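The identity $E[E(\varepsilon|X)] = E(\varepsilon)$ can be checked exactly on a tiny discrete joint distribution; the grid and probabilities below are a hypothetical example, not from the lecture:

```python
from itertools import product

# Joint distribution of (X, eps): X is 0 or 1 with probability 1/2, and
# given X, eps is +1 or -1 with probability 1/2 each, so E(eps|X) = 0.
xs = [0.0, 1.0]
eps_vals = [-1.0, 1.0]
pjoint = {(x, e): 0.25 for x, e in product(xs, eps_vals)}

# Unconditional expectation E(eps)
E_eps = sum(e * p for (x, e), p in pjoint.items())

# Conditional expectations E(eps | X = x0), then average over X
cond = {}
for x0 in xs:
    px0 = sum(p for (x, e), p in pjoint.items() if x == x0)
    cond[x0] = sum(e * p for (x, e), p in pjoint.items() if x == x0) / px0

iterated = sum(cond[x0] * 0.5 for x0 in xs)   # E[E(eps|X)]
assert iterated == E_eps                      # law of iterated expectations
```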