
This version: 06/01/2004

Appendix A
Linear algebra
The study of the geometry of Lagrangian mechanics requires that one be familiar with
basic concepts in abstract linear algebra. The reader is expected to have encountered these
concepts before, so this appendix serves as a refresher. We also use our discussion of linear
algebra as our way into discussing the summation convention in a systematic manner.
Since this gets used a lot, the reader may wish to take the opportunity to become familiar
with it.
A.1 Vector spaces
We shall suppose the reader to be familiar with the notion of a vector space $V$ over a field
$\mathbb{K}$, particularly the field $\mathbb{R}$ of real numbers, or the field $\mathbb{C}$ of complex numbers. On such a
vector space, one has defined the notions of vector addition, $v_1 + v_2 \in V$, between elements
$v_1, v_2 \in V$, and the notion of scalar multiplication, $av \in V$, for a scalar $a \in \mathbb{K}$ and $v \in V$.
There is a distinguished zero vector $0 \in V$ with the property that $0 + v = v + 0 = v$ for
each $v \in V$. For the remainder of the section we take $\mathbb{K} \in \{\mathbb{R}, \mathbb{C}\}$.
A subset $U \subset V$ of a vector space is a subspace if $U$ is closed under the operations of
vector addition and scalar multiplication. If $V_1$ and $V_2$ are vector spaces, the direct sum of
$V_1$ and $V_2$ is the vector space $V_1 \oplus V_2$ whose set is $V_1 \times V_2$ (the Cartesian product), and with vector
addition defined by $(u_1, u_2) + (v_1, v_2) = (u_1 + v_1, u_2 + v_2)$ and scalar multiplication defined
by $a(v_1, v_2) = (av_1, av_2)$. If $U_1$ and $U_2$ are subspaces of $V$ we shall also write $V = U_1 \oplus U_2$
if $U_1 \cap U_2 = \{0\}$ and if every vector $v \in V$ can be written as $v = u_1 + u_2$ for some $u_1 \in U_1$
and $u_2 \in U_2$.
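For instance, in $\mathbb{R}^2$ the subspaces $U_1 = \{(x, 0) \mid x \in \mathbb{R}\}$ and $U_2 = \{(0, y) \mid y \in \mathbb{R}\}$ satisfy $U_1 \cap U_2 = \{0\}$, and every vector $(x, y)$ decomposes uniquely as $(x, 0) + (0, y)$, so $\mathbb{R}^2 = U_1 \oplus U_2$.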
A collection $v_1, \ldots, v_k$ of vectors is linearly independent if the equality
$$c_1 v_1 + \cdots + c_k v_k = 0$$
holds only when $c_1 = \cdots = c_k = 0$. A set of vectors $\{v_1, \ldots, v_k\}$ generates a vector space
$V$ if every vector $v \in V$ can be written as
$$v = c_1 v_1 + \cdots + c_k v_k$$
for some choice of constants $c_1, \ldots, c_k \in \mathbb{K}$. A basis for a vector space $V$ is a collection of
vectors which is linearly independent and which generates $V$. The number of vectors in a
basis we call the dimension of $V$, and this is readily shown to be independent of the choice of
basis. A vector space is finite-dimensional if it possesses a basis with a finite number of
elements. If $\{e_1, \ldots, e_n\}$ is a basis for $V$, we can write
$$v = v^1 e_1 + \cdots + v^n e_n$$
for some unique choice of $v^1, \ldots, v^n \in \mathbb{K}$, called the components of $v$ relative to the basis.
Here we begin to adopt the convention that components of vectors be written with index
up. Let us use this chance to introduce the summation convention we shall employ.
A.1.1 Basic premise of summation convention Whenever one sees a repeated index, one as a
subscript and the other as a superscript, summation is implied.
Thus, for example, we have
$$v^i e_i = \sum_{i=1}^n v^i e_i,$$
as summation over $i$ is implied.
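For example, with $n = 3$ the expression $v^i e_i$ abbreviates $v^1 e_1 + v^2 e_2 + v^3 e_3$. The repeated index is a dummy index, so the same sum may equally well be written $v^j e_j$.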
A map $A\colon U \to V$ between $\mathbb{K}$-vector spaces is linear if $A(au) = aA(u)$ and if $A(u_1 + u_2) = A(u_1) +
A(u_2)$ for each $a \in \mathbb{K}$ and $u, u_1, u_2 \in U$. The linear map $\mathrm{id}_V\colon V \to V$ defined by $\mathrm{id}_V(v) = v$,
$v \in V$, is called the identity map for $V$. If $\{f_1, \ldots, f_m\}$ is a basis for $U$ and $\{e_1, \ldots, e_n\}$
is a basis for $V$, for each $a \in \{1, \ldots, m\}$ we may write
$$A(f_a) = A^1_a e_1 + \cdots + A^n_a e_n$$
for some unique choice of constants $A^1_a, \ldots, A^n_a \in \mathbb{K}$. By letting $a$ run from $1$ to $m$ we thus
define $nm$ constants $A^i_a \in \mathbb{K}$, $i = 1, \ldots, n$, $a = 1, \ldots, m$, which we call the matrix of $A$
relative to the two bases.
If $u \in U$ is written as
$$u = u^1 f_1 + \cdots + u^m f_m,$$
one readily ascertains that
$$A(u) = \sum_{i=1}^n \sum_{a=1}^m A^i_a u^a e_i.$$
Thus the components of $A(u)$ are written using the summation convention as $A^1_a u^a, \ldots, A^n_a u^a$.
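As a concrete illustration, take $U = V = \mathbb{R}^2$ and suppose that, relative to chosen bases $\{f_1, f_2\}$ and $\{e_1, e_2\}$, the matrix of $A$ is $A^1_1 = 1$, $A^1_2 = 2$, $A^2_1 = 0$, $A^2_2 = 3$. Then for $u = u^1 f_1 + u^2 f_2$ we have $A(u) = (u^1 + 2u^2)e_1 + 3u^2 e_2$, in agreement with the usual matrix/vector product.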
Let us say a few more things about our summation convention.
A.1.2 More properties of the summation convention Therefore, in our usual notion of matrix/vector multiplication, this renders the up index for $A$ the row index, and the down
index the column index. Note that we can also compactly write
$$\sum_{i=1}^n \sum_{a=1}^m A^i_a u^a e_i = A^i_a u^a e_i.$$
The set of linear maps from a vector space $U$ to a vector space $V$ is itself a vector space
which we denote $L(U; V)$. Vector addition in $L(U; V)$ is given by
$$(A + B)(u) = A(u) + B(u),$$
and scalar multiplication is defined by
$$(aA)(u) = a(A(u)).$$
Note that what is being defined in these two equations is $A + B \in L(U; V)$ in the first case,
and $aA \in L(U; V)$ in the second case. One verifies that $\dim(L(U; V)) = \dim(U)\dim(V)$.
Given a linear map $A\colon U \to V$, the kernel of $A$ is the subspace
$$\ker(A) = \{u \in U \mid A(u) = 0\}$$
of $U$, and the image of $A$ is the subspace
$$\mathrm{image}(A) = \{A(u) \mid u \in U\}$$
of $V$. The rank of $A$ is defined to be $\mathrm{rank}(A) = \dim(\mathrm{image}(A))$. The rank-nullity
formula says that $\dim(\ker(A)) + \mathrm{rank}(A) = \dim(U)$.
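For instance, the linear map $A\colon \mathbb{R}^3 \to \mathbb{R}^3$ given by $A(x, y, z) = (x, y, 0)$ has kernel the $z$-axis and image the $xy$-plane, so $\dim(\ker(A)) = 1$, $\mathrm{rank}(A) = 2$, and indeed $1 + 2 = 3 = \dim(\mathbb{R}^3)$.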
Of special interest are linear maps from a vector space $V$ to itself: $A\colon V \to V$. In this
case, an eigenvalue for $A$ is an element $\lambda \in \mathbb{K}$ with the property that $A(v) = \lambda v$ for
some nonzero vector $v$, called an eigenvector for $\lambda$. To compute eigenvalues, one finds
the roots of the characteristic polynomial $\det(\lambda\,\mathrm{id}_V - A)$, which has degree equal to the
dimension of $V$. If $\mathbb{K} = \mathbb{C}$ this polynomial is guaranteed to have $\dim(V)$ roots, but
it is possible that some of these will be repeated roots of the characteristic polynomial. If
$\det(\lambda\,\mathrm{id}_V - A) = (\lambda - \lambda_0)^k P(\lambda)$ for a polynomial $P(\lambda)$ having the property that $P(\lambda_0) \neq 0$,
then the eigenvalue $\lambda_0$ has algebraic multiplicity $k$. The eigenvectors for an eigenvalue
$\lambda_0$ are nonzero vectors from the subspace
$$W_{\lambda_0} = \{v \in V \mid (A - \lambda_0\,\mathrm{id}_V)(v) = 0\}.$$
The geometric multiplicity of an eigenvalue $\lambda_0$ is $\dim(W_{\lambda_0})$. We let $m_a(\lambda_0)$ denote the
algebraic multiplicity and $m_g(\lambda_0)$ denote the geometric multiplicity of $\lambda_0$. It is always the
case that $m_a(\lambda_0) \geq m_g(\lambda_0)$, and both equality and strict inequality can occur.
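As an example of strict inequality, consider $A\colon \mathbb{R}^2 \to \mathbb{R}^2$ whose matrix in the standard basis is
$$\begin{pmatrix} \lambda_0 & 1 \\ 0 & \lambda_0 \end{pmatrix}.$$
The characteristic polynomial is $(\lambda - \lambda_0)^2$, so $m_a(\lambda_0) = 2$, but every eigenvector is a multiple of $e_1$, so $m_g(\lambda_0) = 1$.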
A.2 Dual spaces
The notion of a dual space to a vector space $V$ is extremely important for us. It is also
a potential point of confusion, as it seems, for whatever reason, to be a slippery concept.
Given a finite-dimensional vector space $V$ (let us agree to now restrict to vector spaces
over $\mathbb{R}$), the dual space to $V$ is the set $V^*$ of linear maps from $V$ to $\mathbb{R}$. If $\alpha \in V^*$, we shall
alternately write $\alpha(v)$, $\alpha \cdot v$, or $\langle \alpha; v\rangle$ to denote the image in $\mathbb{R}$ of $v \in V$ under $\alpha$. Note that
since $\dim(\mathbb{R}) = 1$, $V^*$ is a vector space having dimension equal to that of $V$. We shall often
call elements of $V^*$ one-forms.
Let us see how to represent elements in $V^*$ using a basis for $V$. Given a basis $\{e_1, \ldots, e_n\}$
for $V$, we define $n$ elements of $V^*$, denoted $e^1, \ldots, e^n$, by $e^i(e_j) = \delta^i_j$, $i, j = 1, \ldots, n$, where
$\delta^i_j$ denotes the Kronecker delta
$$\delta^i_j = \begin{cases} 1, & i = j \\ 0, & \text{otherwise.} \end{cases}$$
The following result is important, albeit simple.
A.2.1 Proposition If $\{e_1, \ldots, e_n\}$ is a basis for $V$, then $\{e^1, \ldots, e^n\}$ is a basis for $V^*$, called
the dual basis.
Proof First let us show that the dual vectors $e^1, \ldots, e^n$ are linearly independent. Let
$c_1, \ldots, c_n \in \mathbb{R}$ have the property that
$$c_i e^i = c_1 e^1 + \cdots + c_n e^n = 0.$$
For each $j = 1, \ldots, n$ we must therefore have $c_i e^i(e_j) = c_i \delta^i_j = c_j = 0$. This implies linear
independence. Now let us show that each dual vector $\alpha \in V^*$ can be expressed as a linear
combination of $e^1, \ldots, e^n$. For $\alpha \in V^*$ define $\alpha_1, \ldots, \alpha_n \in \mathbb{R}$ by $\alpha_i = \alpha(e_i)$, $i = 1, \ldots, n$.
We claim that $\alpha = \alpha_i e^i$. To check this, it suffices to check that the two one-forms $\alpha$ and $\alpha_i e^i$
agree when applied to any of the basis vectors $e_1, \ldots, e_n$. However, this is obvious since
for $j = 1, \ldots, n$ we have $\alpha(e_j) = \alpha_j$ and $\alpha_i e^i(e_j) = \alpha_i \delta^i_j = \alpha_j$.
If $\{e_1, \ldots, e_n\}$ is a basis for $V$ with dual basis $\{e^1, \ldots, e^n\}$ then we may write $\alpha \in V^*$ as
$\alpha = \alpha_i e^i$ for some uniquely determined $\alpha_1, \ldots, \alpha_n \in \mathbb{R}$. If $v \in V$ is expressed as $v = v^i e_i$
then we have
$$\alpha(v) = \alpha_i e^i(v^j e_j) = \alpha_i v^j e^i(e_j) = \alpha_i v^j \delta^i_j = \alpha_i v^i.$$
Note that this makes the operation of feeding a vector to a one-form look an awful lot like
taking the dot product, but it is in your best interests to refrain from thinking this way.
One cannot take the dot product of objects in different spaces, and this is the case with
$\alpha(v)$ since $\alpha \in V^*$ and $v \in V$. The proper generalisation of the dot product is given in
Section A.3.
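As a simple numerical illustration, take $V = \mathbb{R}^2$ with $\alpha = 3e^1 + e^2$ and $v = 2e_1 - e_2$. Then $\alpha(v) = \alpha_i v^i = 3 \cdot 2 + 1 \cdot (-1) = 5$.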
A.2.2 More properties of the summation convention When we write a collection of elements
of a vector space, we use subscripts to enumerate them, e.g., $v_1, \ldots, v_k$. For collections of
elements of the dual space, we use superscripts to enumerate them, e.g., $\alpha^1, \ldots, \alpha^k$. The
components of a vector with respect to a basis are written with indices as superscripts. The
components of a dual vector with respect to a basis for the dual space are written with indices
as subscripts.
A.3 Bilinear forms
We have multiple opportunities to define mechanical objects that are quadratic. Thus
the notion of a bilinear form is a useful one in mechanics, although it is unfortunately not
normally part of the background of those who study mechanics. However, the ideas are
straightforward enough.
We let $V$ be a finite-dimensional $\mathbb{R}$-vector space. A bilinear form on $V$ is a map $B\colon V \times V \to \mathbb{R}$
with the property that for each $v_0 \in V$ the maps $v \mapsto B(v, v_0)$ and $v \mapsto B(v_0, v)$ are
linear. Thus $B$ is linear in each entry. A bilinear form $B$ is symmetric if $B(v_1, v_2) =
B(v_2, v_1)$ for all $v_1, v_2 \in V$, and skew-symmetric if $B(v_1, v_2) = -B(v_2, v_1)$ for all $v_1, v_2 \in V$.
If $\{e_1, \ldots, e_n\}$ is a basis for $V$, the matrix for a bilinear form $B$ in this basis is the collection
of $n^2$ numbers $B_{ij} = B(e_i, e_j)$, $i, j = 1, \ldots, n$. $B$ is symmetric if and only if $B_{ij} = B_{ji}$,
$i, j = 1, \ldots, n$, and skew-symmetric if and only if $B_{ij} = -B_{ji}$, $i, j = 1, \ldots, n$.
A.3.1 More properties of the summation convention Note that the indices for the matrix of a
bilinear form are both subscripts. This should help distinguish bilinear forms from linear
maps, since in the latter there is one index up and one index down. If $B$ is a bilinear form
with matrix $B_{ij}$, and if $u$ and $v$ are vectors with components $u^i$, $v^i$, $i = 1, \ldots, n$, then
$$B(u, v) = B_{ij} u^i v^j.$$
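For example, on $\mathbb{R}^2$ the symmetric bilinear form with matrix $B_{11} = 1$, $B_{12} = B_{21} = 2$, $B_{22} = 0$ gives $B(u, v) = u^1 v^1 + 2u^1 v^2 + 2u^2 v^1$.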
An important notion attached to a symmetric or skew-symmetric bilinear form is a map
from $V$ to $V^*$. If $B$ is a bilinear form which is either symmetric or skew-symmetric, we define
a map $B^\flat\colon V \to V^*$ by indicating how $B^\flat(v) \in V^*$ acts on vectors in $V$. That is, for $v \in V$
we define $B^\flat(v) \in V^*$ by
$$\langle B^\flat(v); u\rangle = B(u, v), \qquad u \in V.$$
The rank of $B$ is defined to be $\mathrm{rank}(B) = \dim(\mathrm{image}(B^\flat))$. $B$ is nondegenerate if
$\mathrm{rank}(B) = \dim(V)$. In this case $B^\flat$ is an isomorphism since $\dim(V) = \dim(V^*)$, and we
denote the inverse by $B^\sharp\colon V^* \to V$.
A.3.2 More properties of the summation convention If $\{e_1, \ldots, e_n\}$ is a basis for $V$ with dual
basis $\{e^1, \ldots, e^n\}$, then $B^\flat(v) = B_{ij} v^j e^i$. If $B$ is nondegenerate then $B^\sharp(\alpha) = B^{ij} \alpha_j e_i$, where
$B^{ij}$, $i, j = 1, \ldots, n$, are defined by $B^{ij} B_{jk} = \delta^i_k$.
This statement is the content of Exercise E2.17.
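As a simple illustration, suppose $B$ has matrix $B_{11} = 2$, $B_{22} = 3$, $B_{12} = B_{21} = 0$ in the basis $\{e_1, e_2\}$. Then $B^\flat(v) = 2v^1 e^1 + 3v^2 e^2$, and since $B^{ij}$ is the inverse matrix, $B^{11} = \tfrac{1}{2}$, $B^{22} = \tfrac{1}{3}$, $B^{12} = B^{21} = 0$, so $B^\sharp(\alpha) = \tfrac{1}{2}\alpha_1 e_1 + \tfrac{1}{3}\alpha_2 e_2$.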
For symmetric bilinear forms, there are additional concepts which will be useful for us.
In particular, the following theorem serves as the definition for the index, and this is a useful
notion.
A.3.3 Theorem If $B$ is a symmetric bilinear form on a vector space $V$, then there exists a
basis $\{e_1, \ldots, e_n\}$ for $V$ so that the matrix for $B$ in this basis is given by
$$\mathrm{diag}(-1, \ldots, -1,\ 1, \ldots, 1,\ 0, \ldots, 0).$$
The number of nonzero elements on the diagonal of this matrix is the rank of $B$. The number
of $-1$'s along the diagonal is called the index of $B$.
This theorem is proved by the Gram-Schmidt argument. Note that the number of $+1$'s on
the diagonal of the matrix in the theorem is given by $\mathrm{rank}(B) - \mathrm{ind}(B)$.
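For example, the symmetric bilinear form on $\mathbb{R}^2$ defined by $B(x, y) = -x^1 y^1 + x^2 y^2$ has matrix $\mathrm{diag}(-1, 1)$ in the standard basis, so $\mathrm{rank}(B) = 2$ and $\mathrm{ind}(B) = 1$.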
A.4 Inner products
An important special case is when all elements on the diagonal in Theorem A.3.3 have
the same sign. If all diagonal elements are $+1$ then we say $B$ is positive-definite, and if
all diagonal elements are $-1$ then we say $B$ is negative-definite. Clearly, $B$ is positive-definite
if and only if $B(v, v) > 0$ whenever $v \neq 0$, and $B$ is negative-definite if and only if
$B(v, v) < 0$ whenever $v \neq 0$. A symmetric, positive-definite bilinear form is something with
which you are doubtless familiar: it is an inner product.
The most familiar example of an inner product is the standard inner product on $\mathbb{R}^n$.
We denote this inner product by $g_{\mathrm{can}}$ and recall that it is defined by
$$g_{\mathrm{can}}(x, y) = \sum_{i=1}^n x^i y^i,$$
if $x = (x^1, \ldots, x^n)$ and $y = (y^1, \ldots, y^n)$. If $\{e_1, \ldots, e_n\}$ is the standard basis for $\mathbb{R}^n$, i.e., $e_i$
consists of all zeros, with the exception of a $+1$ in the $i$th entry, then one verifies that the
matrix for $g_{\mathrm{can}}$ in this basis is the $n \times n$ identity matrix, denoted $I_n$.
On a vector space $V$ with an inner product $g$, it is possible to define the notions of
symmetry and skew-symmetry for linear transformations $A\colon V \to V$. To wit, a linear
transformation $A$ is $g$-symmetric if $g(Av_1, v_2) = g(v_1, Av_2)$ for every $v_1, v_2 \in V$, and is
$g$-skew-symmetric if $g(Av_1, v_2) = -g(v_1, Av_2)$ for every $v_1, v_2 \in V$. Often, when the inner
product is understood, we shall just say symmetric or skew-symmetric. However, one
should be sure to understand that an inner product is necessary to make sense of these notions.
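For instance, with $g = g_{\mathrm{can}}$ on $\mathbb{R}^n$, a linear transformation is $g_{\mathrm{can}}$-symmetric exactly when its matrix in the standard basis satisfies $A^T = A$, and $g_{\mathrm{can}}$-skew-symmetric exactly when $A^T = -A$; for a general inner product the corresponding matrix conditions involve the matrix of $g$ as well.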
A.5 Changes of basis
In our presentation of Lagrangian mechanics, often objects are characterised by how they
alter under changes of coordinates. This is reflected on the linear level by changes of basis.
Let us characterise how components of the objects discussed above behave when bases are
changed.
We let $V$ be a finite-dimensional $\mathbb{R}$-vector space with $E = \{e_1, \ldots, e_n\}$ and $F =
\{f_1, \ldots, f_n\}$ bases for $V$. Since both $E$ and $F$ are bases, we may write
$$f_i = P^j_i e_j, \qquad i = 1, \ldots, n$$
and
$$e_i = Q^j_i f_j, \qquad i = 1, \ldots, n,$$
for $2n^2$ constants $P^i_j$ and $Q^i_j$, $i, j = 1, \ldots, n$. Furthermore we have
$$e_i = Q^j_i f_j = Q^j_i P^k_j e_k.$$
Since $E$ is linearly independent, this implies that
$$Q^j_i P^k_j = \delta^k_i, \qquad i, k = 1, \ldots, n.$$
Thus $P^i_j$ and $Q^i_j$, $i, j = 1, \ldots, n$, are the components of matrices which are inverses of one
another.
We may also find relations between the dual bases $E^* = \{e^1, \ldots, e^n\}$ and $F^* =
\{f^1, \ldots, f^n\}$ for $V^*$. We may certainly write
$$f^i = A^i_j e^j, \qquad i = 1, \ldots, n,$$
for some constants $A^i_j$, $i, j = 1, \ldots, n$. We then have
$$\delta^i_j = f^i(f_j) = A^i_k e^k(P^\ell_j e_\ell) = A^i_k P^k_j, \qquad i, j = 1, \ldots, n.$$
Therefore, we conclude that $A^i_j = Q^i_j$, $i, j = 1, \ldots, n$, so that we have
$$f^i = Q^i_j e^j, \qquad i = 1, \ldots, n.$$
In like manner we have
$$e^i = P^i_j f^j, \qquad i = 1, \ldots, n.$$
Let $v \in V$ and write
$$v = v^i e_i = \tilde{v}^i f_i.$$
Using the relation between the basis vectors we have
$$v^i e_i = v^i Q^j_i f_j = \tilde{v}^j f_j.$$
Since $F$ is linearly independent, this allows us to conclude that the components $v^i$, $i =
1, \ldots, n$, and $\tilde{v}^i$, $i = 1, \ldots, n$, are related by
$$\tilde{v}^j = Q^j_i v^i, \qquad j = 1, \ldots, n.$$
Similarly, if $\alpha \in V^*$, then we write
$$\alpha = \alpha_i e^i = \tilde{\alpha}_i f^i.$$
Proceeding as we did for vectors in $V$, we compute
$$\alpha_i e^i = \alpha_i P^i_j f^j = \tilde{\alpha}_j f^j.$$
Since $F^*$ is linearly independent we conclude that the components $\alpha_i$, $i = 1, \ldots, n$, and $\tilde{\alpha}_i$,
$i = 1, \ldots, n$, are related by
$$\tilde{\alpha}_j = P^i_j \alpha_i, \qquad j = 1, \ldots, n.$$
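As a concrete check, take $V = \mathbb{R}^2$ with $f_1 = 2e_1$ and $f_2 = e_2$, so that $P^1_1 = 2$, $P^2_2 = 1$, $P^1_2 = P^2_1 = 0$ and $Q^1_1 = \tfrac{1}{2}$, $Q^2_2 = 1$. Then $\tilde{v}^1 = \tfrac{1}{2} v^1$ and $\tilde{v}^2 = v^2$, while $\tilde{\alpha}_1 = 2\alpha_1$ and $\tilde{\alpha}_2 = \alpha_2$: components of vectors and of one-forms transform in opposite ways under the change of basis.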
Now let $A\colon V \to V$ be a linear map. The matrix of $A$ in the basis $E$, $A^i_j$, $i, j = 1, \ldots, n$,
is defined by
$$A(e_i) = A^j_i e_j, \qquad i = 1, \ldots, n.$$
Similarly, the matrix of $A$ in the basis $F$, $\tilde{A}^i_j$, $i, j = 1, \ldots, n$, is defined by
$$A(f_i) = \tilde{A}^j_i f_j, \qquad i = 1, \ldots, n.$$
We write
$$\tilde{A}^\ell_i f_\ell = A(f_i) = P^j_i A(e_j) = P^j_i A^k_j e_k = P^j_i A^k_j Q^\ell_k f_\ell, \qquad i = 1, \ldots, n.$$
Therefore, since $F$ is linearly independent, we have
$$\tilde{A}^\ell_i = Q^\ell_k A^k_j P^j_i, \qquad i, \ell = 1, \ldots, n.$$
Note that this is the usual similarity transformation.
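Indeed, writing $\mathbf{P}$, $\mathbf{Q}$, $\mathbf{A}$, and $\tilde{\mathbf{A}}$ for the matrices with the up index labelling rows and the down index labelling columns, the relation above reads $\tilde{\mathbf{A}} = \mathbf{Q}\mathbf{A}\mathbf{P} = \mathbf{P}^{-1}\mathbf{A}\mathbf{P}$.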
Finally, let us look at a bilinear map $B\colon V \times V \to \mathbb{R}$. We let the matrix of $B$ in the basis
$E$ be defined by
$$B_{ij} = B(e_i, e_j), \qquad i, j = 1, \ldots, n,$$
and the matrix of $B$ in the basis $F$ be defined by
$$\tilde{B}_{ij} = B(f_i, f_j), \qquad i, j = 1, \ldots, n.$$
Note that we have
$$\tilde{B}_{ij} = B(f_i, f_j) = B(P^k_i e_k, P^\ell_j e_\ell) = P^k_i P^\ell_j B_{k\ell}, \qquad i, j = 1, \ldots, n.$$
This relates for us the matrices of $B$ in the two bases.
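In the same matrix notation as above, this reads $\tilde{\mathbf{B}} = \mathbf{P}^T \mathbf{B}\,\mathbf{P}$, a congruence rather than a similarity: the matrix of a bilinear form, with both indices down, transforms differently under a change of basis than the matrix of a linear map.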