
INNER PRODUCT SPACES

To some extent vector spaces are generalizations of the classic two-dimensional and three-dimensional Euclidean spaces. However, as you may recall, there is usually a notion of length attached to these spaces. Bearing this in mind, one could ask whether a similar notion exists in higher dimensions and what its relationship with the vector space structure is. This will be the main topic of this chapter.
1. Inner product spaces
Important Note: From now on, the underlying field will always be either $\mathbb{R}$ or $\mathbb{C}$. To emphasize this we shall denote the scalar field by $\mathbb{K}$ rather than $\mathbb{F}$, as we have been doing so far.
Definition 1.1 (Inner product space). An inner product space is a pair $(V, \langle\cdot,\cdot\rangle)$, where $V$ is a vector space and $\langle\cdot,\cdot\rangle : V \times V \to \mathbb{K}$ satisfies the following axioms:
i) $\langle u+v, w\rangle = \langle u, w\rangle + \langle v, w\rangle$ $(u, v, w \in V)$.
ii) $\langle \lambda v, w\rangle = \lambda\langle v, w\rangle$ $(\lambda \in \mathbb{K},\ v, w \in V)$.
iii) $\langle v, w\rangle = \overline{\langle w, v\rangle}$ $(v, w \in V)$.
iv) $\langle v, v\rangle > 0$ $(v \in V \setminus \{0\})$.
Remark 1.2. It follows from Definition 1.1 that
i′) $\langle u, v+w\rangle = \langle u, v\rangle + \langle u, w\rangle$ $(u, v, w \in V)$; and
ii′) $\langle u, \lambda v\rangle = \overline{\lambda}\langle u, v\rangle$ $(\lambda \in \mathbb{K},\ u, v \in V)$.
We prove (ii′) and leave the proof of (i′) to you as an exercise. So let $u, v \in V$ and $\lambda \in \mathbb{K}$ be arbitrary. Then, by (iii) and (ii),
$$\langle u, \lambda v\rangle = \overline{\langle \lambda v, u\rangle} = \overline{\lambda\langle v, u\rangle} = \overline{\lambda}\,\overline{\langle v, u\rangle} = \overline{\lambda}\langle u, v\rangle.$$
Remark 1.3. Axioms (i) and (i′) can be easily extended, by induction, to any finite number of vectors, i.e., given vectors $u_1, \dots, u_n$ and $w$ in an inner product space $V$,
$$\Big\langle \sum_{i=1}^n u_i,\, w\Big\rangle = \sum_{i=1}^n \langle u_i, w\rangle \quad\text{and}\quad \Big\langle w,\, \sum_{i=1}^n u_i\Big\rangle = \sum_{i=1}^n \langle w, u_i\rangle.$$
Examples:
• $(\mathbb{K}^n, \langle\cdot,\cdot\rangle)$, where $\langle v, w\rangle := \sum_{i=1}^n v_i\overline{w_i}$ $(v, w \in \mathbb{K}^n)$. In what follows we shall refer to it as the inner product space $\mathbb{K}^n$. The space $(\mathbb{R}^n, \langle\cdot,\cdot\rangle)$ is also called the n-dimensional Euclidean space.
To see that $(\mathbb{K}^n, \langle\cdot,\cdot\rangle)$ is an inner product space we need to verify the axioms of Definition 1.1. To this aim, let $u = (u_1, \dots, u_n)$, $v = (v_1, \dots, v_n)$ and $w = (w_1, \dots, w_n)$ in $\mathbb{K}^n$ be arbitrary. Then
i) $\langle u+v, w\rangle = \big\langle (u_1+v_1, \dots, u_n+v_n), (w_1, \dots, w_n)\big\rangle = \sum_{i=1}^n (u_i+v_i)\overline{w_i} = \sum_{i=1}^n u_i\overline{w_i} + \sum_{i=1}^n v_i\overline{w_i} = \langle u, w\rangle + \langle v, w\rangle$.
ii) $\langle \lambda v, w\rangle = \big\langle (\lambda v_1, \dots, \lambda v_n), (w_1, \dots, w_n)\big\rangle = \sum_{i=1}^n (\lambda v_i)\overline{w_i} = \lambda \sum_{i=1}^n v_i\overline{w_i} = \lambda\langle v, w\rangle$.
iii) $\langle v, w\rangle = \sum_{i=1}^n v_i\overline{w_i} = \sum_{i=1}^n \overline{w_i\overline{v_i}} = \overline{\sum_{i=1}^n w_i\overline{v_i}} = \overline{\langle w, v\rangle}$.
iv) $\langle v, v\rangle = \sum_{i=1}^n v_i\overline{v_i} = \sum_{i=1}^n |v_i|^2 > 0$ provided $v \neq 0$.
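These axioms are easy to test numerically. The sketch below is our own illustration (not part of the original notes), assuming NumPy is available; note that it conjugates the second argument, matching the convention above.

```python
import numpy as np

def ip(v, w):
    # <v, w> := sum_i v_i * conj(w_i), conjugating the second argument
    return np.sum(v * np.conj(w))

rng = np.random.default_rng(0)
u, v, w = (rng.standard_normal(4) + 1j * rng.standard_normal(4) for _ in range(3))
lam = 2.0 - 1.5j

assert np.isclose(ip(u + v, w), ip(u, w) + ip(v, w))         # axiom (i)
assert np.isclose(ip(lam * v, w), lam * ip(v, w))            # axiom (ii)
assert np.isclose(ip(v, w), np.conj(ip(w, v)))               # axiom (iii)
assert ip(v, v).real > 0 and np.isclose(ip(v, v).imag, 0.0)  # axiom (iv)
```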
• More generally, let $A \in M_n(\mathbb{K})$ be invertible, and let $\langle\cdot,\cdot\rangle$ be as in the previous example. Then $\langle\cdot,\cdot\rangle_A : \mathbb{K}^n \times \mathbb{K}^n \to \mathbb{K}$, $(v, w) \mapsto \langle Av, Aw\rangle$, satisfies all the axioms of an inner product and $(\mathbb{K}^n, \langle\cdot,\cdot\rangle_A)$ is an inner product space.
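A minimal sketch of this construction (again our own illustration, assuming NumPy); positivity of $\langle\cdot,\cdot\rangle_A$ rests on $A$ being invertible, since then $Av = 0$ forces $v = 0$.

```python
import numpy as np

def ip(v, w):                    # the standard inner product from above
    return np.sum(v * np.conj(w))

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])       # invertible: det A = 1

def ip_A(v, w):                  # <v, w>_A := <Av, Aw>
    return ip(A @ v, A @ w)

v = np.array([1.0, 2.0]); w = np.array([3.0, -1.0])
assert np.isclose(ip_A(v, w), np.conj(ip_A(w, v)))  # conjugate symmetry
assert ip_A(v, v) > 0                               # positivity needs A invertible
```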
• The subspace of $\mathbb{K}^{\mathbb{N}}$ formed by all sequences $(x_i)$ with only finitely many non-zero terms, endowed with the inner product $\langle (x_i), (y_i)\rangle := \sum_{i=1}^{\infty} x_i\overline{y_i}$. We shall denote it by FS.
• The vector space $C_{\mathbb{K}}[0,1]$ with the inner product defined by $\langle f, g\rangle := \int_0^1 f(t)\overline{g(t)}\,dt$ $(f, g \in C_{\mathbb{K}}[0,1])$.
Let $V$ be an inner product space. Mimicking the definitions of length of a vector in $\mathbb{R}^2$ and $\mathbb{R}^3$, we define the length of a vector $v \in V$, denoted $\|v\|$, by
$$\|v\| := \sqrt{\langle v, v\rangle}.$$
Example: Let $v = (x, y, z) \in \mathbb{R}^3$. Then $\|v\| = \sqrt{\langle (x,y,z), (x,y,z)\rangle} = \sqrt{x^2 + y^2 + z^2}$.
Proposition 1.4. Let $V$ and $W$ be inner product spaces. Then for a linear map $T : V \to W$ the following are equivalent:
i) $\langle Tv, Tw\rangle = \langle v, w\rangle$ $(v, w \in V)$.
ii) $\|Tv\| = \|v\|$ $(v \in V)$.
Proof. We give the proof for $\mathbb{K} = \mathbb{C}$. The case where $\mathbb{K} = \mathbb{R}$ is simpler.
Suppose (i) holds, and let $v \in V$ be arbitrary. Then $\|Tv\|^2 = \langle Tv, Tv\rangle = \langle v, v\rangle = \|v\|^2$. So, (i) $\Rightarrow$ (ii).
Conversely, let $v, w \in V$ and suppose (ii) holds. Then
$$\|T(v+w)\|^2 = \big\langle T(v+w), T(v+w)\big\rangle = \big\langle Tv+Tw, Tv+Tw\big\rangle = \langle Tv, Tv\rangle + \langle Tv, Tw\rangle + \langle Tw, Tv\rangle + \langle Tw, Tw\rangle$$
$$= \langle Tv, Tv\rangle + \langle Tv, Tw\rangle + \overline{\langle Tv, Tw\rangle} + \langle Tw, Tw\rangle = \|Tv\|^2 + 2\,\mathrm{Re}\langle Tv, Tw\rangle + \|Tw\|^2,$$
and likewise,
$$\|v+w\|^2 = \|v\|^2 + 2\,\mathrm{Re}\langle v, w\rangle + \|w\|^2.$$
By (ii), $\|T(v+w)\|^2 = \|v+w\|^2$, so $\|Tv\|^2 + 2\,\mathrm{Re}\langle Tv, Tw\rangle + \|Tw\|^2 = \|v\|^2 + 2\,\mathrm{Re}\langle v, w\rangle + \|w\|^2$, and hence $\mathrm{Re}\langle Tv, Tw\rangle = \mathrm{Re}\langle v, w\rangle$ (again by (ii)).
Applying the same argument to the vectors $v$ and $iw$ we find that $\mathrm{Re}\langle Tv, T(iw)\rangle = \mathrm{Re}\langle v, iw\rangle$, i.e., $\mathrm{Im}\langle Tv, Tw\rangle = \mathrm{Im}\langle v, w\rangle$. Combining the latter and the previous identities one obtains $\langle Tv, Tw\rangle = \langle v, w\rangle$, as needed. □
Of course, the relevant maps in the setting of inner product spaces will be precisely those linear maps which, in addition, respect the inner product structure. As is seen from Proposition 1.4, these are precisely the maps that respect the metric structure, i.e., those that preserve lengths.
Definition 1.5. Let $V$ and $W$ be inner product spaces. A linear map $T : V \to W$ that satisfies $\|Tv\| = \|v\|$ $(v \in V)$ is called a linear isometry. A bijective linear isometry is called an isometric isomorphism. Two inner product spaces are said to be isometrically isomorphic if there exists an isometric isomorphism between them.
Examples:
• The map $\Phi : \mathbb{R}^2 \to \mathbb{R}^2$, $(x, y) \mapsto (y, -x)$, is an isometric isomorphism from the Euclidean space $\mathbb{R}^2$ to itself. Indeed, let $(x, y) \in \mathbb{R}^2$ be arbitrary. Then
$$\big\|\Phi(x, y)\big\| = \big\|(y, -x)\big\| = \sqrt{\langle (y,-x), (y,-x)\rangle} = \sqrt{y^2 + (-x)^2} = \sqrt{x^2 + y^2} = \sqrt{\langle (x,y), (x,y)\rangle} = \big\|(x, y)\big\|.$$
We leave it to you, as an exercise, to verify that $\Phi$ is a linear isomorphism.
• The map $R : \mathrm{FS} \to \mathrm{FS}$, $(x_1, x_2, \dots) \mapsto (0, x_1, x_2, \dots)$, is a linear isometry but is not an isomorphism. (Why?)
• The map $T : C_{\mathbb{C}}[0,1] \to C_{\mathbb{C}}[0,1]$, $f(t) \mapsto e^{it} f(t)$, is an isometric isomorphism.
Theorem 1.6 (Cauchy–Schwarz inequality). Let $V$ be an inner product space. Then
$$\big|\langle v, w\rangle\big| \leq \|v\|\,\|w\| \quad (v, w \in V).$$
Proof. Let $v, w \in V$ be arbitrary, and let $\lambda \in \mathbb{K}$ be such that $|\lambda| = 1$ and $\lambda\langle v, w\rangle = |\langle v, w\rangle|$. Then, for every $t \in \mathbb{R}$, one has that
$$0 \leq \big\langle \lambda v + tw, \lambda v + tw\big\rangle = \langle \lambda v, \lambda v\rangle + \langle \lambda v, tw\rangle + \langle tw, \lambda v\rangle + \langle tw, tw\rangle = \|v\|^2 + 2\big|\langle v, w\rangle\big|\,t + \|w\|^2 t^2.$$
This quadratic in $t$ is non-negative for all $t \in \mathbb{R}$, so its discriminant is non-positive. One concludes from this that $4|\langle v, w\rangle|^2 - 4\|v\|^2\|w\|^2 \leq 0$, i.e., $|\langle v, w\rangle| \leq \|v\|\,\|w\|$, as needed. □
Theorem 1.7. Let $V$ be an inner product space. The function $\|\cdot\| : V \to [0, \infty)$ satisfies the following properties:
i) $\|v\| = 0 \iff v = 0$. (Positive definite)
ii) $\|\lambda v\| = |\lambda|\,\|v\|$ $(v \in V,\ \lambda \in \mathbb{K})$. (Positive homogeneous)
iii) $\|v+w\| \leq \|v\| + \|w\|$ $(v, w \in V)$. (Triangle inequality)
Proof. We prove (iii) and leave (i) and (ii) as exercises.
Let $v, w \in V$ be arbitrary. Then
$$\|v+w\|^2 = \langle v+w, v+w\rangle = \langle v, v\rangle + \langle v, w\rangle + \langle w, v\rangle + \langle w, w\rangle \leq \|v\|^2 + \big|\langle v, w\rangle\big| + \big|\langle w, v\rangle\big| + \|w\|^2 \leq \|v\|^2 + 2\|v\|\,\|w\| + \|w\|^2 = (\|v\| + \|w\|)^2,$$
where the penultimate inequality is a direct consequence of the Cauchy–Schwarz inequality. Taking square roots one obtains the desired result. □
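Both Theorem 1.6 and Theorem 1.7(iii) lend themselves to a quick numerical test. The sketch below is our own illustration (assuming NumPy), checking the two inequalities on random complex vectors:

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(1000):
    v = rng.standard_normal(5) + 1j * rng.standard_normal(5)
    w = rng.standard_normal(5) + 1j * rng.standard_normal(5)
    ip_vw = np.sum(v * np.conj(w))                   # <v, w>
    nv, nw = np.linalg.norm(v), np.linalg.norm(w)    # ||v|| = sqrt(<v, v>)
    assert abs(ip_vw) <= nv * nw + 1e-12             # Cauchy-Schwarz
    assert np.linalg.norm(v + w) <= nv + nw + 1e-12  # triangle inequality
```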
2. Orthonormal systems
If you were asked to choose a coordinate system, either in the plane or in three-dimensional real space, most likely you would choose one whose axes are mutually perpendicular. From your own experience, distances and lengths in such a system are usually easier to handle. Carrying this idea over to the setting of inner product spaces leads to the following notion.
Definition 2.1 (Orthogonality). Two vectors $v$ and $w$ in an inner product space are said to be orthogonal if $\langle v, w\rangle = 0$. If, in addition, $\|v\| = \|w\| = 1$ then they are said to be orthonormal. A set of mutually orthogonal vectors in an inner product space is a set in which any two vectors are orthogonal. A set of mutually orthogonal vectors of length 1 is called an orthonormal set.
Examples:
• The standard basis of $\mathbb{K}^n$, i.e., the basis $e_1, \dots, e_n$, is an orthonormal set.
• The trigonometric system, $\{1, \sin\theta, \cos\theta, \dots, \cos(n\theta), \sin(n\theta), \dots\}$, is a set of mutually orthogonal vectors in $C_{\mathbb{R}}[0, 2\pi]$ with respect to the inner product defined by $\langle f, g\rangle := \int_0^{2\pi} f(t)g(t)\,dt$.
This can be seen as follows. Set $I_{m,n} = \langle \sin(m\theta), \cos(n\theta)\rangle$ $(m, n \in \mathbb{N})$, and set $J_{m,n} = \langle \cos(m\theta), \cos(n\theta)\rangle$ and $K_{m,n} = \langle \sin(m\theta), \sin(n\theta)\rangle$ $(m, n \in \mathbb{N},\ m \neq n)$. Then one easily checks that $I_{m,n} + I_{n,m} = \int_0^{2\pi} \sin((m+n)\theta)\,d\theta = 0$ and $I_{m,n} - I_{n,m} = \int_0^{2\pi} \sin((m-n)\theta)\,d\theta = 0$, so $I_{m,n} = I_{n,m} = 0$. Likewise, $J_{m,n} + K_{n,m} = \int_0^{2\pi} \cos((m-n)\theta)\,d\theta = 0$ and $J_{m,n} - K_{n,m} = \int_0^{2\pi} \cos((m+n)\theta)\,d\theta = 0$, so $J_{m,n} = K_{n,m} = 0$.
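One can also confirm these orthogonality relations numerically. The sketch below is ours (assuming NumPy); it approximates the integrals by a Riemann sum on a uniform grid, which is essentially exact for trigonometric polynomials:

```python
import numpy as np

t = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
dt = t[1] - t[0]

def ip(f, g):                  # <f, g> := integral of f*g over [0, 2*pi]
    return np.sum(f * g) * dt  # Riemann sum, near-exact for periodic integrands

m, n = 3, 5                    # any m != n behaves the same way
assert np.isclose(ip(np.sin(m * t), np.cos(n * t)), 0.0)
assert np.isclose(ip(np.cos(m * t), np.cos(n * t)), 0.0)
assert np.isclose(ip(np.sin(m * t), np.sin(n * t)), 0.0)
assert np.isclose(ip(np.cos(m * t), np.cos(m * t)), np.pi)  # orthogonal, not orthonormal
```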
Theorem 2.2 (Pythagoras). Let $v_1, \dots, v_n$ be mutually orthogonal vectors in an inner product space. Then
$$\big\|v_1 + v_2 + \dots + v_n\big\|^2 = \|v_1\|^2 + \|v_2\|^2 + \dots + \|v_n\|^2.$$
Proof.
$$\Big\|\sum_{i=1}^n v_i\Big\|^2 = \Big\langle \sum_{i=1}^n v_i,\, \sum_{j=1}^n v_j\Big\rangle = \sum_{i=1}^n \sum_{j=1}^n \langle v_i, v_j\rangle = \sum_{i=1}^n \langle v_i, v_i\rangle = \sum_{i=1}^n \|v_i\|^2. \qquad \square$$
Theorem 2.3. Let $v_1, v_2, \dots, v_m$ be non-zero, mutually orthogonal vectors in an inner product space. Then they are linearly independent.
Proof. Let $\lambda_1, \dots, \lambda_m$ be scalars such that $\sum_{i=1}^m \lambda_i v_i = 0$. Then $0 = \big\langle \sum_{i=1}^m \lambda_i v_i, v_j\big\rangle = \lambda_j\langle v_j, v_j\rangle = \lambda_j\|v_j\|^2$ $(1 \leq j \leq m)$. It follows that $\lambda_j = 0$ $(1 \leq j \leq m)$, as needed. □
How can we produce sets of orthogonal vectors in an inner product space?
Theorem 2.4 (Gram-Schmidt orthogonalization process). Let $x_1, \dots, x_n$ be linearly independent vectors in an inner product space $V$. Then there are orthonormal vectors $v_1, \dots, v_n$ in $V$ such that $\langle v_1, \dots, v_m\rangle = \langle x_1, \dots, x_m\rangle$ $(1 \leq m \leq n)$.
Proof. The proof is essentially an algorithm to produce a sequence of orthonormal vectors starting with any linearly independent set.
Set $v_1 = x_1/\|x_1\|$, and in general, if $v_1, \dots, v_k$ have been constructed with the desired properties, then compute the orthogonal projection, $P_k x_{k+1} := \sum_{i=1}^k \langle x_{k+1}, v_i\rangle v_i$, of $x_{k+1}$ onto $\langle v_1, \dots, v_k\rangle$, and set $v_{k+1} = (x_{k+1} - P_k x_{k+1})/\|x_{k+1} - P_k x_{k+1}\|$. One easily checks that $v_{k+1} \perp \langle v_1, \dots, v_k\rangle$. Clearly, $v_{k+1} \in \langle v_1, \dots, v_k, x_{k+1}\rangle = \langle x_1, \dots, x_k, x_{k+1}\rangle$, and
$$x_{k+1} = P_k x_{k+1} + \|x_{k+1} - P_k x_{k+1}\|\,v_{k+1} \in \langle v_1, \dots, v_k, v_{k+1}\rangle. \qquad \square$$
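The proof really is an algorithm, and it translates almost line by line into code. Here is a minimal sketch of our own (assuming NumPy) for vectors in $\mathbb{K}^n$:

```python
import numpy as np

def gram_schmidt(xs):
    """Orthonormalize linearly independent vectors x_1, ..., x_n (Theorem 2.4).

    Returns orthonormal v_1, ..., v_n spanning the same subspaces as the x_i.
    """
    vs = []
    for x in xs:
        # P_k x = sum_i <x, v_i> v_i, the projection onto <v_1, ..., v_k>
        p = sum((np.sum(x * np.conj(v)) * v for v in vs), np.zeros_like(x))
        u = x - p
        vs.append(u / np.linalg.norm(u))  # normalize x - P_k x
    return vs
```

In exact arithmetic this is exactly the construction in the proof; in floating point this classical formulation can lose orthogonality, and in practice one prefers the modified variant or a QR factorization (np.linalg.qr).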
Example: Show that $v_1 = \frac{1}{\sqrt{6}}(2, 1, 1)$ and $v_2 = \frac{1}{\sqrt{3}}(1, -1, -1)$ form an orthonormal set in the Euclidean space $\mathbb{R}^3$, and extend it to an orthonormal basis of $\mathbb{R}^3$.
First, one needs to check that $\langle v_1, v_2\rangle = 0$ and $\|v_1\| = \|v_2\| = 1$. The computations follow:
$$\langle v_1, v_2\rangle = \Big\langle \tfrac{1}{\sqrt{6}}(2,1,1),\, \tfrac{1}{\sqrt{3}}(1,-1,-1)\Big\rangle = \tfrac{2}{\sqrt{6}}\tfrac{1}{\sqrt{3}} - \tfrac{1}{\sqrt{6}}\tfrac{1}{\sqrt{3}} - \tfrac{1}{\sqrt{6}}\tfrac{1}{\sqrt{3}} = 0;$$
$$\|v_1\| = \sqrt{\langle v_1, v_1\rangle} = \Big\langle \tfrac{1}{\sqrt{6}}(2,1,1),\, \tfrac{1}{\sqrt{6}}(2,1,1)\Big\rangle^{1/2} = \Big(\tfrac{2\cdot 2 + 1\cdot 1 + 1\cdot 1}{6}\Big)^{1/2} = 1; \ \text{and}$$
$$\|v_2\| = \sqrt{\langle v_2, v_2\rangle} = \Big\langle \tfrac{1}{\sqrt{3}}(1,-1,-1),\, \tfrac{1}{\sqrt{3}}(1,-1,-1)\Big\rangle^{1/2} = \Big(\tfrac{1\cdot 1 + (-1)(-1) + (-1)(-1)}{3}\Big)^{1/2} = 1.$$
Next, there are several ways of finding the third vector to form an orthonormal basis. Here is one of them. (For a different way, see the solution to Exercise 2 from Tutorial 9.) First we extend $v_1, v_2$ to a basis of $\mathbb{R}^3$. For this, note that the matrix
$$\begin{pmatrix} \frac{2}{\sqrt{6}} & \frac{1}{\sqrt{6}} & \frac{1}{\sqrt{6}} \\ \frac{1}{\sqrt{3}} & -\frac{1}{\sqrt{3}} & -\frac{1}{\sqrt{3}} \end{pmatrix}$$
can be reduced by means of elementary row transformations to
$$\begin{pmatrix} \frac{2}{\sqrt{6}} & \frac{1}{\sqrt{6}} & \frac{1}{\sqrt{6}} \\ 0 & -\frac{\sqrt{3}}{2} & -\frac{\sqrt{3}}{2} \end{pmatrix}.$$
So, if $x_3 := (0, 0, 1)$, then $v_1, v_2, x_3$ is a basis for $\mathbb{R}^3$. We then apply the Gram-Schmidt orthogonalization process to this basis. As $v_1$ and $v_2$ are already orthonormal we can start directly by constructing $v_3$. We do this in two steps. First, we let
$$\tilde{v}_3 = x_3 - \langle x_3, v_1\rangle v_1 - \langle x_3, v_2\rangle v_2 = x_3 - \tfrac{1}{\sqrt{6}}v_1 + \tfrac{1}{\sqrt{3}}v_2 = \tfrac{1}{2}(0, -1, 1).$$
Then we normalize $\tilde{v}_3$, i.e., we define $v_3 := \tilde{v}_3/\|\tilde{v}_3\| = \tfrac{1}{\sqrt{2}}(0, -1, 1)$.
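This hand computation is easy to confirm by machine (an illustration we add, assuming NumPy); since $v_1$ and $v_2$ are already orthonormal, only the projection step for $x_3$ is needed:

```python
import numpy as np

v1 = np.array([2.0, 1.0, 1.0]) / np.sqrt(6.0)
v2 = np.array([1.0, -1.0, -1.0]) / np.sqrt(3.0)
x3 = np.array([0.0, 0.0, 1.0])

u = x3 - (x3 @ v1) * v1 - (x3 @ v2) * v2  # x3 minus its projection onto <v1, v2>
v3 = u / np.linalg.norm(u)
print(v3)                                 # [ 0. -0.7071  0.7071] = (0, -1, 1)/sqrt(2)
```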
Corollary 2.5. Every finite-dimensional inner product space has an orthonormal basis.
Proof. Every finite-dimensional inner product space has a basis. Apply the Gram-Schmidt orthogonalization process to it to obtain an orthonormal one. □
Corollary 2.6. Up to isometric isomorphism, there is, for each dimension $n$, only one inner product space over $\mathbb{K}$: the inner product space $\mathbb{K}^n$.
Proof. Let $V$ be an inner product space over $\mathbb{K}$ of dimension $n$, and let $v_1, \dots, v_n$ be an orthonormal basis for it. There exists a unique linear isomorphism $T : \mathbb{K}^n \to V$ such that $Te_i = v_i$ $(1 \leq i \leq n)$. Moreover, by Pythagoras, $\|T(\sum_{i=1}^n \lambda_i e_i)\|^2 = \|\sum_{i=1}^n \lambda_i v_i\|^2 = \sum_{i=1}^n |\lambda_i|^2 = \|\sum_{i=1}^n \lambda_i e_i\|^2$ for any set of scalars, so $T$ is also an isometry. □
From now on we shall consider only orthonormal bases.
3. Diagonalization (complex case)
Let us consider, once again, the problem of when a linear map is representable by a diagonal matrix. We shall see that, in the current setting, it is possible to go a bit further and characterize those linear maps for which a diagonal representation with respect to an orthonormal basis exists. In this section we shall deal with linear maps on complex inner product spaces. (The real case is treated similarly. We shall outline the main differences in the next section.)
Let us start by introducing a very important concept.
Let $V$ and $W$ be finite-dimensional inner product spaces, and let $T \in L(V, W)$. Then (one can show that) there exists a unique linear map $T^* \in L(W, V)$, called the adjoint of $T$, such that
$$\langle Tv, w\rangle = \langle v, T^* w\rangle \quad (v \in V,\ w \in W).$$
Examples:
• Let $V$ be a finite-dimensional inner product space, and let $\mathrm{id} \in L(V)$. Then $\mathrm{id}^* = \mathrm{id}$, for $\langle \mathrm{id}(v), w\rangle = \langle v, \mathrm{id}(w)\rangle$ $(v, w \in V)$.
• In the inner product space $\mathbb{C}^2$, consider the map $T : \mathbb{C}^2 \to \mathbb{C}^2$, $(x, y) \mapsto (y, x)$. Then $T^* : \mathbb{C}^2 \to \mathbb{C}^2$, $(u, v) \mapsto (v, u)$. Indeed,
$$\langle T(x, y), (u, v)\rangle = \langle (y, x), (u, v)\rangle = y\overline{u} + x\overline{v} = \langle (x, y), (v, u)\rangle = \langle (x, y), T^*(u, v)\rangle.$$
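The defining identity $\langle Tx, w\rangle = \langle x, T^*w\rangle$ is easy to check numerically; with respect to an orthonormal basis the adjoint is simply the conjugate transpose of the matrix (this will be Theorem 3.5(1) below). A small sketch of ours for the $\mathbb{C}^2$ example, assuming NumPy:

```python
import numpy as np

T = np.array([[0, 1],
              [1, 0]], dtype=complex)  # matrix of (x, y) |-> (y, x)
T_star = T.conj().T                    # conjugate transpose

def ip(v, w):
    return np.sum(v * np.conj(w))

x = np.array([1.0 + 2.0j, -1.0j])
w = np.array([0.5 + 0.0j, 3.0 - 1.0j])
assert np.isclose(ip(T @ x, w), ip(x, T_star @ w))  # <Tx, w> = <x, T*w>
```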
Proposition 3.1 (Properties of the adjoint). The adjoint map, $* : L(V, W) \to L(W, V)$, has the following properties:
i) $(T^*)^* = T$.
ii) $(T_1 + T_2)^* = T_1^* + T_2^*$.
iii) $(\lambda T)^* = \overline{\lambda}\, T^*$ $(\lambda \in \mathbb{K})$.
iv) $(ST)^* = T^* S^*$ whenever the composition makes sense.
Proof. Tutorial. □
Definition 3.2. Let $V$ be an inner product space over $\mathbb{C}$. A map $T \in L(V)$ is called
• normal if $T^* T = T T^*$;
• Hermitian or selfadjoint if $T^* = T$;
• unitary if $T^* T = \mathrm{id} = T T^*$.
Remark 3.3. Note that every Hermitian (resp. unitary) map is normal.
Examples:
• Every rotation of the Euclidean spaces $\mathbb{R}^2$ and $\mathbb{R}^3$ around the origin is a unitary map. Indeed, by Proposition 1.4 and Definition 3.2, $T$ is unitary $\iff$ $T$ is an isometric isomorphism of the space onto itself. (Can you explain this?)
• Let $C^{\infty}_{\mathbb{R}}[0,1]$ be the subspace of $C_{\mathbb{R}}[0,1]$ formed by all functions $f$ with continuous derivatives of all orders on $[0,1]$, and such that $f(0) = f(1) = 0$. The linear map $T : C^{\infty}_{\mathbb{R}}[0,1] \to C^{\infty}_{\mathbb{R}}[0,1]$, $f \mapsto f''$, is selfadjoint, for
$$\langle Tf, g\rangle = \int_0^1 f''g = f'g\Big|_0^1 - \int_0^1 f'g' = -fg'\Big|_0^1 + \int_0^1 fg'' = \langle f, Tg\rangle,$$
where the boundary terms vanish because $f$ and $g$ vanish at $0$ and $1$.
Definition 3.4. We shall call a matrix $A \in M_n(\mathbb{C})$
• normal if $A^* A = A A^*$;
• Hermitian or selfadjoint if $A^* = A$;
• unitary if $A^* A = I = A A^*$.
Example: Let
$$A = \begin{pmatrix} 1 & i & 0 \\ 0 & 1 & i \\ i & 0 & 1 \end{pmatrix}.$$
Then
$$A^* A = \begin{pmatrix} 1 & 0 & -i \\ -i & 1 & 0 \\ 0 & -i & 1 \end{pmatrix}\begin{pmatrix} 1 & i & 0 \\ 0 & 1 & i \\ i & 0 & 1 \end{pmatrix} = \begin{pmatrix} 2 & i & -i \\ -i & 2 & i \\ i & -i & 2 \end{pmatrix} = \begin{pmatrix} 1 & i & 0 \\ 0 & 1 & i \\ i & 0 & 1 \end{pmatrix}\begin{pmatrix} 1 & 0 & -i \\ -i & 1 & 0 \\ 0 & -i & 1 \end{pmatrix} = A A^*.$$
So, $A$ is normal. Note that $A$ is neither Hermitian nor unitary.
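The same check by machine (an illustration we add, assuming NumPy):

```python
import numpy as np

A = np.array([[1, 1j, 0],
              [0, 1, 1j],
              [1j, 0, 1]])
A_star = A.conj().T

print(np.allclose(A_star @ A, A @ A_star))  # True : A is normal
print(np.allclose(A_star, A))               # False: A is not Hermitian
print(np.allclose(A_star @ A, np.eye(3)))   # False: A is not unitary
```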
The next result tells us that Definitions 3.2 and 3.4 are in fact compatible with each other.
Theorem 3.5. Let $V$ be a finite-dimensional inner product space over $\mathbb{C}$ and let $\beta$ and $\gamma$ be orthonormal bases of it. Let $T \in L(V)$ be a linear map. Then
1) $M_{\beta,\gamma}(T^*) = M_{\gamma,\beta}(T)^*$.
2) there exists a unitary matrix, $U$ say, such that $M_\gamma(T) = U^* M_\beta(T)\, U$.
3) $T$ is normal if and only if $M_\beta(T)$ is normal.
4) $T$ is Hermitian if and only if $M_\beta(T)$ is Hermitian.
5) $T$ is unitary if and only if $M_\beta(T)$ is unitary.
Proof. 1) Let $\beta = v_1, \dots, v_n$ and $\gamma = w_1, \dots, w_n$. Set $M_{\gamma,\beta}(T) = (a_{ij})$ and $M_{\beta,\gamma}(T^*) = (b_{ij})$. By the definition of $T^*$ and the orthonormality of $\beta$ and $\gamma$, one has that
$$a_{ji} = \langle Tv_i, w_j\rangle = \langle v_i, T^*(w_j)\rangle = \overline{\langle T^*(w_j), v_i\rangle} = \overline{b_{ij}} \quad (1 \leq i, j \leq n)$$
(Why?), or equivalently, $b_{ij} = \overline{a_{ji}}$ $(1 \leq i, j \leq n)$. This proves (1).
2) We know that $M_\gamma(T) = M_{\gamma,\beta}(\mathrm{id})\, M_\beta(T)\, M_{\beta,\gamma}(\mathrm{id})$ and $M_{\gamma,\beta}(\mathrm{id})\, M_{\beta,\gamma}(\mathrm{id}) = M_\gamma(\mathrm{id}) = I = M_\beta(\mathrm{id}) = M_{\beta,\gamma}(\mathrm{id})\, M_{\gamma,\beta}(\mathrm{id})$, and by the previous part, $M_{\beta,\gamma}(\mathrm{id}) = M_{\beta,\gamma}(\mathrm{id}^*) = M_{\gamma,\beta}(\mathrm{id})^*$. The desired result follows (take $U = M_{\beta,\gamma}(\mathrm{id})$).
3) $T^*T = TT^* \iff M_\beta(T^*T) = M_\beta(TT^*) \iff M_\beta(T^*)\,M_\beta(T) = M_\beta(T)\,M_\beta(T^*) \iff M_\beta(T)^*\,M_\beta(T) = M_\beta(T)\,M_\beta(T)^*$.
The proofs of (4) and (5) are similar to that of (3). □
Example: Is $f : \mathbb{C}^2 \to \mathbb{C}^2$, $(x, y) \mapsto (x, x)$ normal? Justify your answer.
The matrix representation of $f$ with respect to the unit vector basis is $A = \begin{pmatrix} 1 & 0 \\ 1 & 0 \end{pmatrix}$, which is not a normal matrix: $A^*A = \begin{pmatrix} 2 & 0 \\ 0 & 0 \end{pmatrix}$, whereas $AA^* = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}$. So, $f$ is not normal.
Example: Find the adjoint of the linear map $f : \mathbb{R}^2 \to \mathbb{R}^2$, $(x, y) \mapsto (0, x)$.
The matrix representation of $f$ with respect to the unit vector basis is $\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}$. Its adjoint is the matrix $\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$, which is the matrix representation of $f^*$ with respect to the unit vector basis. Thus, $f^* : \mathbb{R}^2 \to \mathbb{R}^2$, $(x, y) \mapsto (y, 0)$.
Definition 3.6 (Unitary equivalence). We shall call matrices $A$ and $B$ in $M_n(\mathbb{C})$ unitarily equivalent if there is a unitary matrix, $U$, such that $A = U^* B U$.
Theorem 3.7 (Spectral theorem). Let $V$ be a finite-dimensional inner product space over $\mathbb{C}$ and let $T \in L(V)$. Then there exists an orthonormal basis $\beta$ so that $M_\beta(T)$ is diagonal if and only if $T$ is normal.
Proof. Suppose there exists an orthonormal basis $\beta$ so that $M_\beta(T)$ is diagonal. Then $M_\beta(T)^*\, M_\beta(T) = M_\beta(T)\, M_\beta(T)^*$, i.e., $M_\beta(T)$ is a normal matrix, and by Theorem 3.5 (part (3)), $T$ must be normal.
Conversely, let $T$ be a normal map and let $n = \dim V$. Choose a norm-one eigenvector $v_1$ for $T$ (which is possible because $\mathbb{C}$ is algebraically closed) and then choose $v_2, \dots, v_n \in V$ inductively as follows: if for some $k < n$, $v_1, \dots, v_k \in V$ have already been chosen so that $Tv_i \in \langle v_1, \dots, v_i\rangle$ $(1 \leq i \leq k)$, then let $P_k$ be the projection along $\langle v_1, \dots, v_k\rangle$ onto $W_k := \langle v_1, \dots, v_k\rangle^{\perp}$, define $T_k \in L(W_k)$ by $T_k(v) := P_k T(v)$ $(v \in W_k)$ and choose $v_{k+1} \in W_k$ to be a norm-one eigenvector for $T_k$. Note that then
$$Tv_{k+1} = \underbrace{(\mathrm{id} - P_k)Tv_{k+1}}_{\in\, \langle v_1, \dots, v_k\rangle} + \underbrace{P_k Tv_{k+1}}_{\in\, \langle v_{k+1}\rangle} \in \langle v_1, \dots, v_{k+1}\rangle.$$
Let $\beta = v_1, \dots, v_n$. Then $\beta$ is clearly orthonormal and $M_\beta(T)$ is upper triangular. But, as $T$ is normal, $M_\beta(T)^*\, M_\beta(T) = M_\beta(T)\, M_\beta(T)^*$, and so (see Appendix), $M_\beta(T)$ must be diagonal. □
Corollary 3.8. If $A \in M_n(\mathbb{R})$ is normal then there exist a diagonal matrix $D$ and a unitary matrix $U$, such that $A = U^* D U$.
Example: Let
$$A = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}.$$
Determine the eigenvalues and eigenvectors of $A$, and find a unitary matrix $U$ such that $U^* A U$ is diagonal. (Note that $A$ is Hermitian, and hence normal. So, such a $U$ exists.)
We have that
$$p_A(t) = \det\begin{pmatrix} -t & 1 & 0 \\ 1 & -t & 1 \\ 0 & 1 & -t \end{pmatrix} = -t\,\det\begin{pmatrix} -t & 1 \\ 1 & -t \end{pmatrix} - \det\begin{pmatrix} 1 & 1 \\ 0 & -t \end{pmatrix} = -t(t^2 - 1) + t = t(\sqrt{2} - t)(\sqrt{2} + t).$$
So, the eigenvalues of $A$ are $0$, $\sqrt{2}$ and $-\sqrt{2}$. To find a basis of eigenvectors we need to solve the systems of linear equations:
i) $Ax = 0$, ii) $Ax = \sqrt{2}x$ and iii) $Ax = -\sqrt{2}x$.
Their general solutions are given by
i) $\big\{(x, y, z) \in \mathbb{R}^3 : y = 0 \text{ and } x + z = 0\big\} = \big\{(t, 0, -t) : t \in \mathbb{R}\big\}$,
ii) $\big\{(x, y, z) \in \mathbb{R}^3 : \sqrt{2}\,x = y \text{ and } y = \sqrt{2}\,z\big\} = \big\{(t, t\sqrt{2}, t) : t \in \mathbb{R}\big\}$ and
iii) $\big\{(x, y, z) \in \mathbb{R}^3 : -\sqrt{2}\,x = y \text{ and } y = -\sqrt{2}\,z\big\} = \big\{(t, -t\sqrt{2}, t) : t \in \mathbb{R}\big\}$,
respectively. One thus can take as a basis of eigenvectors $x_1 = (1, 0, -1)$, $x_2 = (1, \sqrt{2}, 1)$ and $x_3 = (1, -\sqrt{2}, 1)$. One verifies that $x_1$, $x_2$ and $x_3$ are mutually orthogonal. Normalizing, one arrives at the orthonormal basis $v_1, v_2, v_3$, where $v_1 = \frac{1}{\sqrt{2}}(1, 0, -1)$, $v_2 = \frac{1}{2}(1, \sqrt{2}, 1)$ and $v_3 = \frac{1}{2}(1, -\sqrt{2}, 1)$. Define
$$U = (v_1\ v_2\ v_3) = \begin{pmatrix} \frac{1}{\sqrt{2}} & \frac{1}{2} & \frac{1}{2} \\ 0 & \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} \\ -\frac{1}{\sqrt{2}} & \frac{1}{2} & \frac{1}{2} \end{pmatrix} \quad\text{and}\quad D = \begin{pmatrix} 0 & 0 & 0 \\ 0 & \sqrt{2} & 0 \\ 0 & 0 & -\sqrt{2} \end{pmatrix}.$$
Then $U$ is unitary and $AU = UD$, or equivalently, $U^* A U = D$.
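In practice one lets a library do this: numpy.linalg.eigh diagonalizes Hermitian (here real symmetric) matrices by a unitary matrix. A verification sketch of ours for the example above; note that eigh sorts the eigenvalues in ascending order, so its columns come out in a different order than our $U$:

```python
import numpy as np

A = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])

eigenvalues, U = np.linalg.eigh(A)  # columns of U: orthonormal eigenvectors
print(eigenvalues)                  # [-1.41421356  0.  1.41421356]
print(np.allclose(U.conj().T @ U, np.eye(3)))                 # True: U is unitary
print(np.allclose(U.conj().T @ A @ U, np.diag(eigenvalues)))  # True: U*AU = D
```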
Corollary 3.9 (Spectral characterization of Hermitian and unitary maps). Let $V$ be a finite-dimensional inner product space over $\mathbb{C}$ and let $T \in L(V)$ be normal. Then $T$ is
i) Hermitian if and only if its spectrum lies on the real line.
ii) unitary if and only if its spectrum lies on the unit circle.
Proof. i) One direction: $Tv = \lambda v \Rightarrow \lambda\|v\|^2 = \langle Tv, v\rangle = \langle v, T^* v\rangle = \langle v, Tv\rangle = \overline{\lambda}\|v\|^2 \Rightarrow \lambda \in \mathbb{R}$.
ii) One direction: $Tv = \lambda v \Rightarrow |\lambda|^2\|v\|^2 = \langle Tv, Tv\rangle = \langle T^*Tv, v\rangle = \langle v, v\rangle = \|v\|^2 \Rightarrow |\lambda| = 1$. □
4. Diagonalization (real case)
In the real case, it is common practice to replace the terms unitary and Hermitian by orthogonal and symmetric, respectively.
Theorem 4.1 (Spectral theorem). Let $V$ be a finite-dimensional inner product space over $\mathbb{R}$ and let $T \in L(V)$. Then there exists an orthonormal basis $\beta$ so that $M_\beta(T)$ is diagonal if and only if $T$ is symmetric.
Corollary 4.2. If $A \in M_n(\mathbb{R})$ is symmetric then there exist a diagonal matrix $D$ and an orthogonal matrix $Q$, such that $A = Q^t D Q$.
5. One last remark
Given a matrix $A \in M_n(\mathbb{R})$ one may be asked to find
1) a diagonal matrix $D$ and an invertible matrix $Q$, such that $A = Q^{-1} D Q$;
2) a diagonal matrix $D$ and a unitary matrix $U$, such that $A = U^* D U$; or
3) a diagonal matrix $D$ and an orthogonal matrix $P$, such that $A = P^t D P$.
It is clear that a positive solution to (3) $\Rightarrow$ a positive solution to (2) $\Rightarrow$ a positive solution to (1).
None of the opposite implications hold. For instance, problem (2) has a positive solution for the matrix
$$A := \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix},$$
but problem (3) does not.
Likewise, problem (1) admits a positive solution for the matrix
$$A := \begin{pmatrix} 1 & 1 \\ 0 & 2 \end{pmatrix},$$
for
$$A = \begin{pmatrix} 1 & -1 \\ 0 & 1 \end{pmatrix}^{-1}\begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix}\begin{pmatrix} 1 & -1 \\ 0 & 1 \end{pmatrix},$$
but problems (2) and (3) do not, since A is neither normal nor symmetric.
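These distinctions are easy to probe numerically. A small sketch of our own (assuming NumPy) for the two matrices above:

```python
import numpy as np

R = np.array([[0.0, -1.0],
              [1.0, 0.0]])           # normal but not symmetric
B = np.array([[1.0, 1.0],
              [0.0, 2.0]])           # diagonalizable but not normal

print(np.allclose(R @ R.T, R.T @ R)) # True : R is normal, so (2) is solvable
print(np.allclose(R, R.T))           # False: R is not symmetric, so (3) is not
print(np.linalg.eigvals(R))          # [0.+1.j 0.-1.j]: the diagonal D is complex

print(np.allclose(B @ B.T, B.T @ B)) # False: B is not normal, so (2) fails
print(np.linalg.eigvals(B))          # [1. 2.]: distinct eigenvalues, so (1) works
```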
Appendix
Here we show that a normal upper triangular matrix must be diagonal (this result was used in the proof of the Spectral Theorem). For this, let $A = (a_{ij}) \in M_n(\mathbb{C})$ be normal and upper triangular. From the definition of matrix multiplication and the identity $AA^* = A^*A$, one readily obtains that
$$\sum_{i=1}^n |a_{1i}|^2 = |a_{11}|^2 \;\Rightarrow\; a_{12} = a_{13} = \dots = a_{1n} = 0.$$
Suppose it has been shown that $a_{l,l+1} = a_{l,l+2} = \dots = a_{ln} = 0$ for every $l \in \{1, \dots, k-1\}$, where $k \leq n$. Then
$$\sum_{i=k}^n |a_{ki}|^2 = \sum_{i=1}^k |a_{ik}|^2 = |a_{kk}|^2 \;\Rightarrow\; a_{k,k+1} = a_{k,k+2} = \dots = a_{kn} = 0.$$
Continuing in this way one arrives at the desired result. □