Chapter 1
Topology
We start by defining a topological space.
Chapter 2
Manifolds
Now that we have the notions of open sets and continuity, we are
ready to define the fundamental object that will hold our attention
during this course.
More precisely, a topological space M is a smooth n-dimensional manifold if the following are true:

i) We can cover the space with open sets U_α, i.e. every point of M lies within some U_α.

ii) For each U_α there is a map φ_α : U_α → Rⁿ, where φ_α is one-to-one and onto some open set V_α of Rⁿ, φ_α is continuous, and φ_α⁻¹ is continuous, i.e. φ_α : U_α → V_α ⊂ Rⁿ is a homeomorphism. The pair (U_α, φ_α) is called a chart (U_α is called the domain of the chart). The collection of charts is called an atlas.

iii) In any intersection U_α ∩ U_β, the maps φ_β ∘ φ_α⁻¹, which are called transition functions and take open sets of Rⁿ to open sets of Rⁿ, i.e. φ_β ∘ φ_α⁻¹ : φ_α(U_α ∩ U_β) → φ_β(U_α ∩ U_β), are smooth maps.
Sometimes more is required: that the transition functions are real analytic, i.e. have a Taylor expansion at each point, which converges. Smoothness of a manifold is useful because then we can say unambiguously if a function on the manifold is smooth, as we will see below.

A complex analytic manifold is defined similarly by replacing Rⁿ with Cⁿ and assuming the transition functions φ_β ∘ φ_α⁻¹ to be holomorphic (complex analytic).
The reason that it is not possible to cover the sphere with a single chart is that the sphere is a compact space, and the image of a compact space under a continuous map is compact. Since Rⁿ is non-compact, there cannot be a homeomorphism between Sⁿ and Rⁿ.
The sphere Sⁿ can, however, be covered by two charts, constructed by stereographic projection. Draw a straight line from the north pole N through a given point of the sphere; the point where this line crosses the equatorial plane is the image of the point on the sphere which the line has passed through. For points on the lower hemisphere, the line first passes through the equatorial plane (image point) before reaching the sphere (source point). Then using similarity of triangles we find (Exercise!) that the coordinates on the equatorial plane Rⁿ of the image of a point on Sⁿ \ {N} are given by

σ_N : (x¹, x², ⋯, x^{n+1}) ↦ ( x²/(1 − x¹), ⋯, x^{n+1}/(1 − x¹) ).   (2.2)
Similarly, the stereographic projection from the south pole is
σ_S : Sⁿ \ {S} → Rⁿ,  (x¹, x², ⋯, x^{n+1}) ↦ ( x²/(1 + x¹), ⋯, x^{n+1}/(1 + x¹) ).   (2.3)
If we write

z = ( x²/(1 − x¹), ⋯, x^{n+1}/(1 − x¹) ),   (2.4)

we find that

|z|² = ( x²/(1 − x¹) )² + ⋯ + ( x^{n+1}/(1 − x¹) )² = ( 1 − (x¹)² )/( 1 − x¹ )² = ( 1 + x¹ )/( 1 − x¹ ).   (2.5)
The overlap between the two sets is the sphere without the poles. Then the transition function between the two projections is

σ_S ∘ σ_N⁻¹ : Rⁿ \ {0} → Rⁿ \ {0},  z ↦ z/|z|².   (2.6)
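As a quick consistency check (my own illustration, not part of the original text), the relations (2.5) and (2.6) can be verified symbolically for the 2-sphere, assuming the sympy library is available:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)
zN = sp.Matrix([x2, x3]) / (1 - x1)   # sigma_N, Eq. (2.2), for n = 2
zS = sp.Matrix([x2, x3]) / (1 + x1)   # sigma_S, Eq. (2.3)

# |z|^2 on the unit sphere, where x2^2 + x3^2 = 1 - x1^2: Eq. (2.5)
norm2 = ((x2**2 + x3**2) / (1 - x1)**2).subs(x2**2 + x3**2, 1 - x1**2)
assert sp.simplify(norm2 - (1 + x1) / (1 - x1)) == 0

# transition function sigma_S o sigma_N^{-1} : z -> z/|z|^2, Eq. (2.6)
assert (zN / norm2 - zS).applyfunc(sp.simplify) == sp.zeros(2, 1)
```

The sphere constraint is substituted by hand; the two assertions confirm Eqs. (2.5) and (2.6).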
Then any element of the group can be written as g(a) where a = (a¹, ⋯, aⁿ). Since the composition of two elements of G must be another element of G, we can write g(a)g(b) = g(φ(a, b)) where φ = (φ¹, ⋯, φⁿ) are n functions of a and b. Then for a Lie group, the functions φ are smooth (real analytic) functions of a and b.
These definitions of a Lie group are equivalent, i.e. define the
same objects, if we are talking about finite dimensional Lie groups.
Further, it is sufficient to define them as smooth manifolds if we are
interested only in finite dimensions, because all such groups are also
real analytic manifolds. There is also another definition of a Lie group as a topological group (like an n-parameter continuous group, but without an a priori restriction on n, in which the composition map (x, y) ↦ xy⁻¹ is continuous) in which it is always possible to find an open neighbourhood of the identity which does not contain a nontrivial subgroup.
Any of these definitions makes a Lie group a smooth manifold; an n-dimensional Lie group is an n-dimensional manifold.
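As a concrete illustration (not from the text): for the rotation group SO(2), with g(a) the rotation through angle a, the composition functions are simply φ(a, b) = a + b, which is analytic. A short symbolic check, assuming sympy is available:

```python
import sympy as sp

a, b = sp.symbols('a b', real=True)

def g(theta):
    # rotation through angle theta: an element of SO(2)
    return sp.Matrix([[sp.cos(theta), -sp.sin(theta)],
                      [sp.sin(theta),  sp.cos(theta)]])

# g(a) g(b) = g(phi(a, b)) with phi(a, b) = a + b
product = (g(a) * g(b)).applyfunc(sp.simplify)
assert product == g(a + b)
```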
The phase space of N particles is a 6N-dimensional manifold: 3N coordinates and 3N momenta.

The Möbius strip is a 2-dimensional manifold.

The space of functions with some specified properties is often a manifold. For example, linear combinations of solutions of the Schrödinger equation which vanish outside some region form a manifold.
Finite dimensional vector spaces are manifolds.
Infinite dimensional vector spaces with finite norm (e.g. Hilbert spaces) are manifolds.
Rotations in three dimensions can be represented by 3 × 3 real orthogonal matrices R satisfying Rᵀ R = I. Reflection is represented by the matrix P = −I. The space of 3 × 3 real orthogonal matrices is a manifold, but it is not connected: it has two components, with det R = +1 (rotations) and det R = −1 (rotations combined with the reflection P).

The space of all n × n real non-singular matrices is called GL(n, R). This is an n²-dimensional Lie group. As a manifold it is also not connected: it has two components, on which det > 0 and det < 0.
Chapter 3
Tangent vectors
Vectors on a manifold are to be thought of as tangents to the manifold, generalizing tangents to curves and surfaces; they will be defined shortly. A tangent to a curve is like the velocity of a particle at a point, which comes from motion along the curve, its trajectory. Motion means comparing things at nearby points along the trajectory, and comparing functions at nearby points leads to differentiation. So in order to get to vectors, let us first start with the definitions of these things.

For a map f : M → N between manifolds, with charts φ around P ∈ M and ψ around Q = f(P) ∈ N, we say f is differentiable at P if ψ ∘ f ∘ φ⁻¹ is differentiable at φ(P). In other words, f is differentiable at P if the coordinates yⁱ = fⁱ(xʲ) of Q are differentiable functions of the coordinates xʲ of P.
derivative. The rate of change of f along γ is written as df/dt.

Suppose another curve λ(s) meets γ(t) at some point P, where s = s₀ and t = t₀, such that

d/dt (f ∘ γ)|_P = d/ds (f ∘ λ)|_P   ∀ f ∈ C^∞(M).   (3.2)

That is, we are considering a situation where two curves are tangent to each other in both the geometric and the parametric sense. Let us introduce a convenient notation. In any chart (U, φ) containing the point P, let us write φ(P) = (x¹, ⋯, xⁿ). Let us write f ∘ γ = (f ∘ φ⁻¹) ∘ (φ ∘ γ), so that the maps are

f ∘ φ⁻¹ : Rⁿ → R,  x ↦ f(x) ≡ f(xⁱ),   (3.3)
φ ∘ γ : I → Rⁿ,  t ↦ { xⁱ(γ(t)) }.   (3.4)
Then along λ we can use the chain rule of differentiation,

d/ds (f ∘ λ) = d/ds f(x(λ(s))) = (∂f/∂xⁱ) dxⁱ(λ(s))/ds.   (3.6)

Since f is arbitrary, we can say that two curves γ, λ have the same tangent vector at the point P ∈ M (where t = t₀ and s = s₀) iff

dxⁱ(γ(t))/dt |_{t=t₀} = dxⁱ(λ(s))/ds |_{s=s₀}.   (3.7)

We can say that these numbers completely determine the rate of change of any function along the curve γ at P. So we can define the tangent to the curve at P as the map

γ̇_P : f ↦ γ̇_P(f) ≡ d/dt (f ∘ γ)|_{t=t₀}.
Chapter 4
Tangent Space
The set of all tangent vectors (to all curves) at some point P
M is the tangent space TP M at P.
Proposition: T_P M is a vector space with the same dimensionality n as the manifold M.

Proof: We first need to show that T_P M is closed under vector addition and scalar multiplication, i.e.

X_P + Y_P ∈ T_P M,   (4.1)
aX_P ∈ T_P M,   (4.2)

for all X_P, Y_P ∈ T_P M, a ∈ R.

That is, given curves γ, λ passing through P such that X_P = γ̇_P, Y_P = λ̇_P, we need a curve μ passing through P such that μ̇_P(f) = X_P(f) + Y_P(f) ∀ f ∈ C^∞(M).

Define φ ∘ μ : I → Rⁿ in some chart around P by φ ∘ μ = φ ∘ γ + φ ∘ λ − φ(P). Then φ ∘ μ is a curve in Rⁿ, and

μ = φ⁻¹ ∘ (φ ∘ μ) : I → M   (4.3)

is a curve in M passing through P. Its tangent acts on functions as

μ̇_P(f) = d/dt (f ∘ μ)|_P = ∂(f ∘ φ⁻¹)/∂xᵏ |_{φ(P)} [ dxᵏ(γ(t))/dt + dxᵏ(λ(t))/dt ]_P = X_P(f) + Y_P(f),   (4.5)

where we have used the coordinate basis vectors defined by

∂/∂xᵏ |_P f ≡ ∂(f ∘ φ⁻¹)/∂xᵏ |_{φ(P)}.   (4.7)
Note that φ ∘ γ : I → Rⁿ, t ↦ (x¹(γ(t)), ⋯, xⁿ(γ(t))) are the coordinates of the curve γ, so we can use the chain rule of differentiation to write

v_P(f) = ∂(f ∘ φ⁻¹)/∂xⁱ |_{φ(P)} dxⁱ(γ(t))/dt |_{t=0}   (4.10)
= ∂(f ∘ φ⁻¹)/∂xⁱ |_{φ(P)} v_P(xⁱ).   (4.11)

The first factor is exactly as shown in Eq. (4.7), so we can write

v_P(f) = ∂f/∂xᵏ |_P v_P(xᵏ)   ∀ f ∈ C^∞(M),   (4.12)

i.e., we can write

v_P = v_Pⁱ ∂/∂xⁱ |_P   ∀ v_P ∈ T_P M,   (4.13)

where v_Pⁱ = v_P(xⁱ). Thus the vectors ∂/∂xⁱ|_P span T_P M. These are to be thought of as tangents to the coordinate curves in the chart. These can be shown to be linearly independent as well, so the ∂/∂xⁱ|_P form a basis of T_P M and the v_Pⁱ are the components of v_P in that basis.

A tangent vector thus acts as a derivation: for f, g ∈ C^∞(M) and λ ∈ R,

v_P(f + λg) = v_P(f) + λ v_P(g),   (4.14)
v_P(fg) = f(P) v_P(g) + v_P(f) g(P),   (4.15)

and v_P(c) = 0 for any constant function c.   (4.16)
Chapter 5
Dual space
A covector (cotangent vector) ω_P at P is a linear map ω_P : T_P M → R,

ω_P(u_P + a v_P) = ω_P(u_P) + a ω_P(v_P),   u_P, v_P ∈ T_P M and a ∈ R.   (5.1)

The space of covectors at P is called the dual space T_P*M. The dual space is a vector space under the operations of vector addition and scalar multiplication defined by

a₁ω₁ + a₂ω₂ : v_P ↦ a₁ω₁(v_P) + a₂ω₂(v_P).   (5.2)

Familiar pairs of vectors and dual vectors are:
Vector: column vectors, kets |ψ⟩, functions.
Dual vector: row vectors, bras ⟨ψ|, linear functionals, etc.

To any f ∈ C^∞(M) we can associate a covector df, given by df(v_P) = v_P(f), called the differential or gradient of f. Since v_P is linear, so is df,

df(v_P + a w_P) = df(v_P) + a df(w_P),   v_P, w_P ∈ T_P M, a ∈ R.   (5.3)

Thus df ∈ T_P*M.
Proposition: T_P*M is also n-dimensional.

Proof: Consider a chart with coordinate functions xⁱ. Each xⁱ is a smooth function xⁱ : M → R. Then the differentials dxⁱ satisfy

dxⁱ( ∂/∂xʲ|_P ) = ∂/∂xʲ|_P (xⁱ) = ∂xⁱ/∂xʲ |_{φ(P)} = δⁱ_j.   (5.4)

Now suppose some linear combination vanishes, λ_i dxⁱ = 0. Acting with it on a coordinate basis vector gives

λ_i dxⁱ( ∂/∂xʲ|_P ) = λ_i δⁱ_j = 0,  i.e.  λ_j = 0.   (5.5)

So the dxⁱ are linearly independent.

Finally, given any covector ω, consider the covector ω′ = ω(∂/∂xⁱ|_P) dxⁱ. Then letting ω − ω′ act on a coordinate basis vector, we get

(ω − ω′)( ∂/∂xʲ|_P ) = ω( ∂/∂xʲ|_P ) − ω( ∂/∂xⁱ|_P ) δⁱ_j = 0,   (5.6)

so ω = ω′ and the dxⁱ also span T_P*M. Thus any covector can be written as

ω = ω_i dxⁱ,  where  ω_i = ω( ∂/∂xⁱ|_P ).   (5.7)

In particular, the components of the differential of a function are

(df)_i = df( ∂/∂xⁱ|_P ) = ∂f/∂xⁱ |_{φ(P)}.   (5.8)

Since {∂/∂xⁱ|_P} is the basis in T_P M dual to {dxⁱ}, a change of chart acts on it by the chain rule,

∂/∂yⁱ|_P = ∂xʲ/∂yⁱ|_P ∂/∂xʲ|_P.   (5.10)

These formulae can be generalized to arbitrary bases. Given a vector v, it is not meaningful to talk about its dual, but given a basis {e_a}, we can define its dual basis {ω^a} by ω^a(e_b) = δ^a_b. We can make a change of bases by a linear transformation,

ω^a ↦ ω′^a = A^a_b ω^b,   (5.11)
e_a ↦ e′_a = (A⁻¹)^b_a e_b,   (5.12)

which preserves ω′^a(e′_b) = δ^a_b. Similarly, if v is a vector, we can write

v = v^a e_a = v′^a e′_a = v′^a (A⁻¹)^b_a e_b,   (5.13)

so that v′^a = A^a_b v^b.
Chapter 6
Vector fields
A vector field v is a smooth assignment of a tangent vector v_P at each point P; in a chart,

v_P = vⁱ ∂/∂xⁱ|_P,   (vⁱ)_P = v_P(xⁱ).   (6.2)

An integral curve γ of v is a curve whose tangent is v at every point, γ̇ = v, i.e.

d xⁱ(γ(t))/dt = vⁱ(x(t)).   (6.3)
equations guarantees, at least for small t (i.e. locally), the existence
of exactly one solution. The uniqueness of these solutions implies
that the integral curves of a vector field do not cross.
One use of integral curves is that they can be thought of as coordinate lines. Given a smooth vector field v such that v|_P ≠ 0, it is possible to define a coordinate system {xⁱ} in a neighbourhood of P adapted to v. The integral curve through P satisfies γ̇ = v, i.e.,

d xⁱ(γ(t))/dt = vⁱ(γ(t)) ≡ v(xⁱ)(γ(t)),   γ(0) = P.   (6.4)
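To make the integral-curve equation concrete, here is a small numerical sketch (my own illustration, not from the text): the field v = (−y, x) on R² has circles as integral curves, and a Runge–Kutta integration of the defining ODE preserves x² + y² and returns to its starting point after t = 2π:

```python
import math

def v(p):
    # the vector field v = (-y, x), whose integral curves are circles
    x, y = p
    return (-y, x)

def rk4_step(p, h):
    # one classical Runge-Kutta step for dp/dt = v(p)
    k1 = v(p)
    k2 = v((p[0] + h/2*k1[0], p[1] + h/2*k1[1]))
    k3 = v((p[0] + h/2*k2[0], p[1] + h/2*k2[1]))
    k4 = v((p[0] + h*k3[0], p[1] + h*k3[1]))
    return (p[0] + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            p[1] + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

p = (1.0, 0.0)
for _ in range(1000):                 # integrate up to t = 2*pi
    p = rk4_step(p, 2 * math.pi / 1000)

assert abs(p[0]**2 + p[1]**2 - 1.0) < 1e-9   # stays on the unit circle
assert abs(p[0] - 1.0) < 1e-6 and abs(p[1]) < 1e-6   # closes up
```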
The set of all (real) vector fields V(M) on a manifold M has the structure of a (real) vector space under vector addition and scalar multiplication defined by

(u + v)(f) = u(f) + v(f),  (λu)(f) = λ u(f),   u, v ∈ V(M), λ ∈ R.   (6.10)

It is in fact a module over C^∞(M): for g ∈ C^∞(M) we can define the vector field gv by

(gv)(f)|_P = g(P) v(f)|_P,   f, g ∈ C^∞(M), P ∈ M.   (6.11)
In a chart, a vector field can be expanded as

v = vⁱ ∂/∂xⁱ,  with vⁱ = v(xⁱ).   (6.13)
Chapter 7
While this looks utterly trivial at this point, this concept will become
increasingly useful later on.
The pushforward is linear,

f_*(v₁ + v₂) = f_* v₁ + f_* v₂,   (7.5)
f_*(λv) = λ f_* v,   (7.6)

i.e. f_* : T_P M₁ → T_{f(P)} M₂ is a linear map,  v ∈ T_P M₁.   (7.7)

In charts xⁱ around P ∈ M₁ and y^μ around Q = f(P) ∈ M₂, we can expand the pushforward of a coordinate basis vector in the basis at Q,

f_*( ∂/∂xⁱ|_P ) = [ f_*( ∂/∂xⁱ|_P ) ](y^μ) ∂/∂y^μ|_Q.   (7.9)
But

f_* v(g) = v(g ∘ f),   (7.12)

so

[ f_*( ∂/∂xⁱ|_P ) ](y^μ) = ∂/∂xⁱ|_P (y^μ ∘ f)   (7.13)
= ∂y^μ(x)/∂xⁱ |_P.   (7.14)
Because we are talking about derivatives of coordinates, these are
actually done in charts around P and Q = f (P ) , so the chart maps
are hidden in this equation.
The right hand side is called the Jacobian matrix (of y (x) =
y f with respect to xi ). Note that since m and n may be unequal,
this matrix need not be invertible and a determinant may not be
defined for it.
For the basis vectors, we can then write

f_*( ∂/∂xⁱ|_P ) = ∂y^μ(x)/∂xⁱ|_P ∂/∂y^μ|_{f(P)}.   (7.15)

Since f_* is linear, we can use this to find the components of (f_* v)_Q for any vector v_P,

f_* v_P = f_*( v_Pⁱ ∂/∂xⁱ|_P ) = v_Pⁱ f_*( ∂/∂xⁱ|_P ) = v_Pⁱ ∂y^μ(x)/∂xⁱ|_P ∂/∂y^μ|_{f(P)},   (7.16)

i.e.

(f_* v_P)^μ = v_Pⁱ ∂y^μ(x)/∂xⁱ|_P.   (7.17)

Note that since f_* is linear, we know that the components of f_* v should be linear combinations of the components of v, so we could already have guessed that (f_* v_P)^μ = A^μ_i v_Pⁱ for some matrix A^μ_i. The matrix is made of first derivatives because vectors are first derivatives.
Another example of the pushforward map is the following. Remember that tangent vectors are derivatives along curves. Suppose v_P ∈ T_P M is the derivative along γ. Since γ : I → M is a map, we can consider pushforwards under γ of derivatives on I. Thus for γ : I → M, t ↦ γ(t), with γ(0) = P, and for some g : M → R,

γ_*( d/dt )|_{t=0} g = d/dt (g ∘ γ)|_{t=0} = γ̇_P(g) = v_P(g),   (7.18)

so

γ_*( d/dt )|_{t=0} = v_P,   (7.19)

and more generally

γ_*( d/dt )|_t = v|_{γ(t)}   (7.20)

for all t in some interval around 0.
Even though in order to define the pushforward of a vector v
under a map f , we do not need f to be invertible, the pushforward
of a vector field can be defined only if f is both one-to-one and onto.
If f is not one-to-one, different points P and P′ may have the same image, f(P) = Q = f(P′). Then for the same vector field v we must have

f_* v|_Q = f_*(v_P) = f_*(v_{P′}),   (7.21)

which need not hold for an arbitrary vector field. And if f is not onto, f_* v is not defined at points outside the image of f.
Chapter 8
Lie brackets
A vector field v is a linear map C^∞(M) → C^∞(M), since it is basically a derivation at each point, v : f ↦ v(f). In other words, given a smooth function f, v(f) is a smooth function on M. Suppose we consider two vector fields u, v. Then u(v(f)) is also a smooth function, linear in f. But is the composition uv : f ↦ u(v(f)) a vector field? To find out, we consider

u(v(fg)) = u( f v(g) + v(f) g )
= u(f)v(g) + f u(v(g)) + u(v(f))g + v(f)u(g).   (8.1)

We reorder the terms to write this as

uv(fg) = f uv(g) + uv(f) g + u(f)v(g) + v(f)u(g),   (8.2)

so uv fails to satisfy the Leibniz rule because of the last two terms. But those terms are symmetric under u ↔ v, so they drop out of the combination uv − vu:

(uv − vu)(fg) = f (uv − vu)(g) + (uv − vu)(f) g.   (8.4)

Thus uv − vu is a derivation, hence a vector field, called the commutator (or Lie bracket) of u and v, written [u, v].
In a chart we can write

v(f) = vⁱ ∂f/∂xⁱ,   (8.6)

so that

u(v(f)) = uʲ ∂/∂xʲ ( vⁱ ∂f/∂xⁱ ) = uʲ ∂vⁱ/∂xʲ ∂f/∂xⁱ + uʲ vⁱ ∂²f/∂xʲ∂xⁱ,
v(u(f)) = vʲ ∂uⁱ/∂xʲ ∂f/∂xⁱ + uʲ vⁱ ∂²f/∂xʲ∂xⁱ.   (8.7)

Subtracting, we get

u(v(f)) − v(u(f)) = ( uʲ ∂vⁱ/∂xʲ − vʲ ∂uⁱ/∂xʲ ) ∂f/∂xⁱ,   (8.8)

so the components of the commutator are

[u, v]ⁱ = uʲ ∂vⁱ/∂xʲ − vʲ ∂uⁱ/∂xʲ,   (8.9)

i.e.

[u, v] = [u, v]ⁱ ∂/∂xⁱ.   (8.10)
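These formulas are easy to check symbolically. The following sketch (assuming sympy is available; the particular fields are my own choice) verifies Eq. (8.8) for u = −y ∂/∂x + x ∂/∂y and v = x ∂/∂x acting on an arbitrary function f:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Function('f')(x, y)

u = (-y, x)   # u = -y d/dx + x d/dy
v = (x, 0)    # v = x d/dx

def act(w, g):
    # action of the vector field w on the function g: w(g) = w^i dg/dx^i
    return w[0] * sp.diff(g, x) + w[1] * sp.diff(g, y)

# components of the commutator, Eq. (8.9)
bracket = tuple(act(u, v[i]) - act(v, u[i]) for i in range(2))

lhs = sp.expand(act(u, act(v, f)) - act(v, act(u, f)))
rhs = sp.expand(act(bracket, f))
assert sp.simplify(lhs - rhs) == 0   # second derivatives of f cancel
```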
The commutator is useful for the following reason: once we have a coordinate system, the coordinate basis vector fields commute,

[ ∂/∂xⁱ, ∂/∂xʲ ] = 0,   (8.11)
because partial derivatives commute. So n vector fields will form a
coordinate system only if they commute, i.e., have vanishing commutators with one another. Then the coordinate lines are the integral
Example: In the plane, the radial basis vector field can be written as

e_r = cos θ e_x + sin θ e_y,   (8.12)

with e_x = ∂/∂x and e_y = ∂/∂y being the Cartesian coordinate basis vectors, and

cos θ = x/r,  sin θ = y/r,  r = √(x² + y²).   (8.13)
Chapter 9
Lie algebra
A (real) algebra is a (real) vector space equipped with a bilinear operation (product) ∘ under which the algebra is closed, i.e., for an algebra A,

i) x ∘ y ∈ A   ∀ x, y ∈ A,
ii) (αx + βy) ∘ z = α x ∘ z + β y ∘ z  and  x ∘ (αy + βz) = α x ∘ y + β x ∘ z,   ∀ x, y, z ∈ A, α, β ∈ R.   (9.1)

A Lie algebra is an algebra whose product is antisymmetric and satisfies the Jacobi identity; in particular, the vector fields on a manifold form a Lie algebra under the commutator.
Chapter 10
Local flows
We met local flows and integral curves in Chapter 6. Given a vector field v, write its local flow as φ_t. Recall that under a smooth map φ : M₁ → M₂, a function f ∈ C^∞(M₂) pulls back to φ*f = f ∘ φ ∈ C^∞(M₁) if φ is C^∞. The pushforward of a vector v_P is defined by

(φ_* v_P)(f) = v_P(φ* f) = v_P(f ∘ φ),   v_P ∈ T_P M₁,  φ_* v_P ∈ T_{φ(P)} M₂.   (10.2)

For a diffeomorphism φ, this extends to vector fields:

(φ_* v)(f) = [ v(φ* f) ] ∘ φ⁻¹.   (10.5)

Proposition: the pushforward respects the commutator,

φ_*[u, v] = [φ_* u, φ_* v].   (10.6)
Proof:

(φ_*[u, v])(f) = [u, v](φ* f) ∘ φ⁻¹
= u( v(φ* f) ) ∘ φ⁻¹ − (u ↔ v),   (10.7)

while

[φ_* u, φ_* v](f) = φ_* u( φ_* v(f) ) − (u ↔ v)
= u( φ*( φ_* v(f) ) ) ∘ φ⁻¹ − (u ↔ v)
= u( v(φ* f) ∘ φ⁻¹ ∘ φ ) ∘ φ⁻¹ − (u ↔ v)
= u( v(φ* f) ) ∘ φ⁻¹ − (u ↔ v),   (10.8)

which is the same. A vector field v is said to be invariant under the diffeomorphism φ if

φ_* v = v,   (10.9)

which by Eq. (10.5) is equivalent to

v(φ* f) = φ*( v(f) )   ∀ f ∈ C^∞(M).   (10.10)
This expresses invariance under , and is satisfied by all differential
operators invariant under .
Consider a vector field u , and the local flow (or one-parameter
diffeomorphism group) t corresponding to u ,
φ_t(Q) = γ_Q(t),   (10.11)

where γ_Q is the integral curve of u starting at Q, so that for any function f,

γ̇_Q(f) = d/dt f(γ_Q(t)) = u(f)|_{γ_Q(t)}.   (10.12)

Local flows can be used to solve first-order partial differential equations. Consider an equation of the form

∂f(x, t)/∂t = Σᵢ vⁱ(x) ∂f(x, t)/∂xⁱ,   (10.16)

with given initial data f(x, 0). The right-hand side is the action of the vector field v = vⁱ ∂/∂xⁱ,

∂f(x, t)/∂t = v(f) ≡ vⁱ ∂f/∂xⁱ,   (10.17)

so the solution is obtained by transporting the initial data along the flow of v, f(x, t) = (φ_t* f)(x, 0) = f(φ_t(x), 0).
Example: Consider the equation

∂f/∂t = (x − y) ∂f/∂x + (x + y) ∂f/∂y   (10.18)

with initial condition f(x, 0) = x² + y². The corresponding vector field is v(x) = (x − y, x + y). To first order in t, the integral curve passing through the point P = (x₀, y₀) has the coordinates

γ(t) = ( v_x(P) t + x₀, v_y(P) t + y₀ )   (10.19)
= φ_t(x, y),   (10.20)

the flow of v. So the solution is

f(x, t) = φ_t* f(x, 0) = f(x, 0) ∘ φ_t(x, y)
= [ (x − y)t + x ]² + [ (x + y)t + y ]²
= (x − y)²t² + x² + 2(x − y)xt + (x + y)²t² + y² + 2(x + y)yt
= 2(x − y)²t² + (x² + y²)(1 + 2t) + 4xyt²
= (x² + y²)(1 + 2t + 2t²),   (10.21)

which solves Eq. (10.18) to first order in t.
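We can verify symbolically (with sympy, as a check I have added) that this construction satisfies the equation at t = 0, as expected of a flow kept only to first order in t:

```python
import sympy as sp

x, y, t = sp.symbols('x y t')
# f(x, t) = f(phi_t(x, y), 0) with the linearized flow of Eqs. (10.19)-(10.21)
f = ((x - y)*t + x)**2 + ((x + y)*t + y)**2

residual = sp.diff(f, t) - (x - y)*sp.diff(f, x) - (x + y)*sp.diff(f, y)
# the PDE (10.18) holds at t = 0
assert sp.expand(residual.subs(t, 0)) == 0
```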
Chapter 11
Lie derivative
Given some diffeomorphism φ, we have Eq. (10.5) for pushforwards and pullbacks,

(φ_* v)(f) = [ v(φ* f) ] ∘ φ⁻¹.   (11.1)

We will apply this to the flow φ_t of a vector field u, defined by

d/dt (φ_t* f)|_{t=0} = u(f).   (11.2)
Applying Eq. (11.1) to the diffeomorphism φ_{−t}, we get

(φ_{−t*} v)(f) = v(φ_{−t}* f) ∘ φ_{−t}⁻¹ = φ_t*[ v(φ_{−t}* f) ],   (11.3)

where we have used the relation φ_{−t}⁻¹ = φ_t. Let us differentiate this equation with respect to t,

d/dt (φ_{−t*} v)(f)|_{t=0} = d/dt [ φ_t* v(φ_{−t}* f) ]|_{t=0}.   (11.4)

On the right hand side, the pullback acts linearly and v acts linearly on functions, so we can imagine A_t = φ_t* ∘ v as a kind of linear operator acting on the function f_t = φ_{−t}* f. Then the right hand side is of the form

d/dt (A_t f_t)|_{t=0} = ( d/dt A_t )|_{t=0} f₀ + A₀ ( d/dt f_t )|_{t=0}
= u(v(f)) − v(u(f))
= [u, v](f),   (11.5)

where we used d/dt (φ_t* g)|_{t=0} = u(g) and d/dt (φ_{−t}* f)|_{t=0} = −u(f). Thus the Lie derivative of v along u is

L_u v ≡ lim_{t→0} [ φ_{−t*} v|_{φ_t(P)} − v|_P ] / t = [u, v].   (11.6)
u(f ) = lim
(11.8)
u (f + ag) = u f + au g .
(11.9)
u (f v) = (u f ) v + f u v
f C (M)
(11.10)
.
(11.11)
39
If there are two vector fields u and v which leave f invariant, L_u f = 0 = L_v f. But we know from Eq. (11.8), which defines the Lie derivative of a function, that

L_{u+av} f = L_u f + a L_v f = 0,   a ∈ R,   (11.12)

and

[L_u, L_v] f = L_{[u,v]} f = 0.   (11.13)

Thus the vector fields leaving f invariant form a Lie algebra. Similarly for a vector field w, since

L_u w = [u, w],   (11.14)

if L_u w = 0 = L_v w then L_{[u,v]} w = [L_u, L_v] w = 0. Thus again the vector fields leaving w invariant form a Lie algebra.
The union of the cotangent spaces at all points of M is the cotangent bundle,

T*M = ∪_{P∈M} T_P*M.   (11.15)

A 1-form is a smooth assignment of a covector at each point. For a smooth function f, the differential df is a 1-form, with

df = ∂f/∂xⁱ dxⁱ   (11.19)

in some chart.
For an arbitrary 1-form ω, we can write in a chart and for any vector field v,

ω = ω_i dxⁱ,   v = vⁱ ∂/∂xⁱ,   ω(v) = ω_i vⁱ.   (11.20)

Since ω(v) is independent of the basis, under a change of basis with v′ⁱ = Aⁱ_j vʲ we need

ω′_i Aⁱ_j vʲ = ω_j vʲ for all v,  i.e.  ω′_i Aⁱ_j = ω_j.   (11.22)

Since ω(v) is a function, we can compute its Lie derivative using Eq. (11.8),

L_u( ω(v) ) = u( ω_i vⁱ ) = uʲ ∂ω_i/∂xʲ vⁱ + ω_i uʲ ∂vⁱ/∂xʲ.   (11.26)
But we want to define things such that

L_u( ω(v) ) = (L_u ω)(v) + ω(L_u v).   (11.27)

We already know the left hand side of this equation from Eq. (11.26), and the right hand side can be calculated in a chart as

(L_u ω)(v) + ω(L_u v) = (L_u ω)_i vⁱ + ω_i (L_u v)ⁱ
= (L_u ω)_i vⁱ + ω_i [u, v]ⁱ
= (L_u ω)_i vⁱ + ω_i ( uʲ ∂vⁱ/∂xʲ − vʲ ∂uⁱ/∂xʲ ).   (11.28)

Equating the right hand side of this with the right hand side of Eq. (11.26), we can write

(L_u ω)_i = uʲ ∂ω_i/∂xʲ + ω_j ∂uʲ/∂xⁱ.   (11.29)
For the coordinate basis vector fields, v = ∂/∂xⁱ has the constant components vʲ = δʲ_i. Putting this into the formula for Lie derivatives, we get

L_u( ∂/∂xⁱ ) = [u, ∂/∂xⁱ]ʲ ∂/∂xʲ = ( uᵏ ∂δʲ_i/∂xᵏ − δᵏ_i ∂uʲ/∂xᵏ ) ∂/∂xʲ   (11.30)
= − ∂uʲ/∂xⁱ ∂/∂xʲ.   (11.31)

Similarly, using Eq. (11.29) with ω = dxⁱ, i.e. ω_j = δⁱ_j, we find

L_u dxⁱ = ∂uⁱ/∂xʲ dxʲ.   (11.33)
Chapter 12
Tensors
So far, we have defined tangent vectors, cotangent vectors, and also
vector fields and 1-forms. We will now define tensors. We will do
this by starting with the example of a specific type of tensor.
As an example, a (1, 2) tensor at P takes two vectors and one covector to a number, linearly in each argument. In general, a (p, q) tensor at P is a multilinear map

A_P^{p,q} : T_P*M × ⋯ × T_P*M (p times) × T_P M × ⋯ × T_P M (q times) → R.   (12.5)

Equivalently,

A_P ∈ T_P M ⊗ ⋯ ⊗ T_P M (p times) ⊗ T_P*M ⊗ ⋯ ⊗ T_P*M (q times).   (12.6)

We can define the components of this tensor in the same way that we did for the (1, 2) tensor. Then a (p, q) tensor has components which can be written as A^{a₁⋯a_p}_{b₁⋯b_q}. For the (1, 2) tensor, the components are

Aᵏ_{ij} = A( ∂/∂xⁱ, ∂/∂xʲ ; dxᵏ ).   (12.8)

The components are functions of x in a chart. Thus we can write this tensor field as

A = Aᵏ_{ij} dxⁱ ⊗ dxʲ ⊗ ∂/∂xᵏ,   (12.9)

where the ⊗ indicates a product, in the sense that its action on two vectors and a 1-form is a product of the respective components,

dxⁱ ⊗ dxʲ ⊗ ∂/∂xᵏ (u, v; ω) = uⁱ vʲ ω_k.   (12.10)

Thus we find, in agreement with the earlier definition,

A(u, v; ω) = Aᵏ_{ij} uⁱ vʲ ω_k.   (12.11)
In another chart, the same tensor field is expanded as

A = A′^{k′}_{i′j′} dx′^{i′} ⊗ dx′^{j′} ⊗ ∂/∂x′^{k′}.   (12.12)

Using

dx′^{i′} = ∂x′^{i′}/∂xⁱ dxⁱ,   ∂/∂x′^{i′} = ∂xⁱ/∂x′^{i′} ∂/∂xⁱ   (12.13)

(i and i′ are not equal in general), we get

A = A′^{k′}_{i′j′} ∂x′^{i′}/∂xⁱ ∂x′^{j′}/∂xʲ ∂xᵏ/∂x′^{k′} dxⁱ ⊗ dxʲ ⊗ ∂/∂xᵏ.   (12.14)

Comparing with Eq. (12.9), we find

Aᵏ_{ij} = A′^{k′}_{i′j′} ∂x′^{i′}/∂xⁱ ∂x′^{j′}/∂xʲ ∂xᵏ/∂x′^{k′},   (12.15)

or equivalently

A′^{k′}_{i′j′} = Aᵏ_{ij} ∂xⁱ/∂x′^{i′} ∂xʲ/∂x′^{j′} ∂x′^{k′}/∂xᵏ.   (12.16)
From now on, we will use the notation ∂_i f for ∂f/∂xⁱ and ∂_i for ∂/∂xⁱ, unless there is a possibility of confusion. This will save some space and make the formulae more readable.
We can calculate the Lie derivative of a tensor field (with respect to a vector field u, say) by using the fact that L_u is a derivation on the modules of vector fields and 1-forms, and by assuming the Leibniz rule for tensor products. Consider a tensor field

T = T^{mn}_{ab} ∂_m ⊗ ∂_n ⊗ dx^a ⊗ dx^b.   (12.17)

Then

L_u T = (L_u T^{mn}_{ab}) ∂_m ⊗ ∂_n ⊗ dx^a ⊗ dx^b
+ T^{mn}_{ab} (L_u ∂_m) ⊗ ∂_n ⊗ dx^a ⊗ dx^b + ⋯
+ T^{mn}_{ab} ∂_m ⊗ ∂_n ⊗ (L_u dx^a) ⊗ dx^b + ⋯,   (12.18)

where the dots stand for the terms involving all the remaining upper and lower indices. Since the components of a tensor field are functions on the manifold, we have

L_u T^{mn}_{ab} = uⁱ ∂_i T^{mn}_{ab},   (12.19)

and from the previous chapter,

L_u ∂_m = −(∂_m uⁱ) ∂_i,   L_u dx^a = (∂_i u^a) dxⁱ.   (12.20)

Putting these into the expression for the Lie derivative of T and relabeling the dummy indices, we find the components of the Lie derivative,

(L_u T)^{mn}_{ab} = uⁱ ∂_i T^{mn}_{ab} − T^{in}_{ab} ∂_i u^m − T^{mi}_{ab} ∂_i u^n + T^{mn}_{ib} ∂_a uⁱ + T^{mn}_{ai} ∂_b uⁱ.   (12.21)
Chapter 13
Differential forms
There is a special class of tensor fields which is so useful as to deserve a separate treatment. These are called differential p-forms, or p-forms for short. A p-form is a totally antisymmetric (0, p) tensor field: its components ω_{i₁⋯i_p} change sign whenever any two indices are interchanged. It follows that a p-form has (n choose p) independent components in n dimensions.

Any 1-form produces a function when acting on a vector field. So given a pair of 1-forms A, B, it is possible to construct a 2-form ω by defining

ω(u, v) = A(u)B(v) − B(u)A(v)   ∀ u, v.   (13.3)

This combination is written ω = A ∧ B and is called the wedge (or exterior) product of A and B.
In a chart a 2-form can be expanded as

ω = (1/2!) ω_{ij} dxⁱ ∧ dxʲ,   (13.8)

because then

ω(u, v) = (1/2!) ω_{ij} ( dxⁱ ⊗ dxʲ − dxʲ ⊗ dxⁱ )(u, v)
= (1/2!) ω_{ij} ( uⁱ vʲ − uʲ vⁱ ) = ω_{ij} uⁱ vʲ.   (13.9)
Similarly, a basis for p-forms is given by the antisymmetrized products

dx^{i₁} ∧ ⋯ ∧ dx^{i_p} = dx^{[i₁} ⊗ ⋯ ⊗ dx^{i_p]},   (13.10)

where the brackets denote antisymmetrization over the enclosed indices. For example, a 3-form can be written as

ω = (1/3!) ω_{ijk} dxⁱ ∧ dxʲ ∧ dxᵏ.   (13.12)

Note that there is a sum over indices, so that the factorial goes away if we write each basis 3-form up to permutations, i.e. treating different permutations as equivalent. Thus a p-form can be written in terms of its components as

ω = (1/p!) ω_{i₁⋯i_p} dx^{i₁} ∧ ⋯ ∧ dx^{i_p}.   (13.13)
In three dimensions, consider two 1-forms α = α_i dxⁱ, β = β_i dxⁱ. Then

α ∧ β = (1/2!) (α_i β_j − α_j β_i) dxⁱ ∧ dxʲ
= α_i β_j dxⁱ ∧ dxʲ
= (α₁β₂ − α₂β₁) dx¹ ∧ dx²
+ (α₂β₃ − α₃β₂) dx² ∧ dx³
+ (α₃β₁ − α₁β₃) dx³ ∧ dx¹.   (13.16)

The components are like the cross product of vectors in three dimensions. So we can think of the wedge product as a generalization of the cross product.
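The component identification above can be checked mechanically (a sketch of mine, assuming sympy):

```python
import sympy as sp

a1, a2, a3, b1, b2, b3 = sp.symbols('a1 a2 a3 b1 b2 b3')
a = sp.Matrix([a1, a2, a3])
b = sp.Matrix([b1, b2, b3])

# components of the 2-form a ^ b: (a ^ b)_{ij} = a_i b_j - a_j b_i
wedge = lambda i, j: a[i]*b[j] - a[j]*b[i]

# the (23), (31), (12) components reproduce the cross product a x b
cross = a.cross(b)
assert sp.expand(wedge(1, 2) - cross[0]) == 0
assert sp.expand(wedge(2, 0) - cross[1]) == 0
assert sp.expand(wedge(0, 1) - cross[2]) == 0
```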
Cross products are not associative, so there is a distinction between cross products and wedge products. In fact, for 1-forms in three dimensions, the above equation is analogous to the identity for the triple product of vectors,

a · (b × c) = (a × b) · c.   (13.20)

Proposition: for a p-form α and a q-form β,

α ∧ β = (−1)^{pq} β ∧ α.   (13.21)

Proof: Consider the wedge product written in terms of the components. We can ignore the parentheses separating the basis forms since the wedge product is associative. Then we exchange the basis 1-forms. One exchange gives a factor of −1,

dx^{i_p} ∧ dx^{j₁} = −dx^{j₁} ∧ dx^{i_p}.   (13.22)

Moving each of the q basis 1-forms of β through the p basis 1-forms of α requires pq exchanges in all, producing the factor (−1)^{pq},   (13.24)

as wanted.
Under a map φ : M₁ → M₂, a 1-form ω on M₂ pulls back to the 1-form φ*ω on M₁ defined by

φ*ω(v) = ω(φ_* v).   (13.29)

Then we can consider the pullback φ*dxⁱ of a basis 1-form dxⁱ. For a general 1-form ω = ω_i dxⁱ, we have φ*ω = φ*(ω_i dxⁱ). But

φ*ω(v) = ω(φ_* v) = ω_i dxⁱ(φ_* v).   (13.30)

Now, dxⁱ(φ_* v) = φ*dxⁱ(v), and the component ω_i is a function on M₂ which pulls back to the function φ*ω_i on M₁, so we can write this as

φ*ω(v) = (φ*ω_i) φ*dxⁱ(v).   (13.31)
Since u, v are arbitrary vector fields it follows that

φ*( dxⁱ ∧ dxʲ ) = φ*dxⁱ ∧ φ*dxʲ,   (13.34)

and we can continue this for any number of basis 1-forms. So for any p-form ω, let us define the pullback by

φ*ω( v₁, ⋯, v_p ) = ω( φ_* v₁, ⋯, φ_* v_p ),   (13.36)

which in components reads

φ*ω = (1/p!) (φ*ω_{i₁⋯i_p}) φ*dx^{i₁} ∧ ⋯ ∧ φ*dx^{i_p}.   (13.37)
Chapter 14
Exterior derivative
The exterior derivative is a generalization of the gradient of a function. It is a map from p-forms to (p + 1)-forms. This should be a derivation, so it should be linear,

d(α + β) = dα + dβ   for p-forms α, β.   (14.1)

This should also satisfy a Leibniz rule, but the algebra of p-forms is not a commutative algebra but a graded commutative algebra, i.e., it involves a factor of (−1)^{pq} for exchanges. So for a p-form α and a q-form β we need

d(α ∧ β) = dα ∧ β + (−1)^{pq} dβ ∧ α,   (14.2)

or alternatively,

d(α ∧ β) = dα ∧ β + (−1)^p α ∧ dβ.   (14.3)

This will be the Leibniz rule for wedge products. Note that it gives the correct result when one or both of α, β are 0-forms, i.e., functions. The two formulas are identical by virtue of the fact that dβ is a (q + 1)-form, so that

α ∧ dβ = (−1)^{p(q+1)} dβ ∧ α.   (14.4)
We will try to define the exterior derivative in a way such that it has
these properties.
Let us define the exterior derivative of a p-form ω in a chart as

dω = (1/p!) ∂_i ω_{i₁⋯i_p} dxⁱ ∧ dx^{i₁} ∧ ⋯ ∧ dx^{i_p}.   (14.5)
This clearly has the first property of linearity. To check the (graded) Leibniz rule, let us write α ∧ β in components. Then

d(α ∧ β) = (1/p!q!) ∂_i ( α_{i₁⋯i_p} β_{j₁⋯j_q} ) dxⁱ ∧ dx^{i₁} ∧ ⋯ ∧ dx^{j_q}
= (1/p!q!) ( ∂_i α_{i₁⋯i_p} β_{j₁⋯j_q} + α_{i₁⋯i_p} ∂_i β_{j₁⋯j_q} ) dxⁱ ∧ dx^{i₁} ∧ ⋯ ∧ dx^{j_q}
= (1/p!q!) ∂_i α_{i₁⋯i_p} β_{j₁⋯j_q} dxⁱ ∧ dx^{i₁} ∧ ⋯ ∧ dx^{i_p} ∧ dx^{j₁} ∧ ⋯ ∧ dx^{j_q}
+ (1/p!q!) (−1)^p α_{i₁⋯i_p} ∂_i β_{j₁⋯j_q} dx^{i₁} ∧ ⋯ ∧ dx^{i_p} ∧ dxⁱ ∧ dx^{j₁} ∧ ⋯ ∧ dx^{j_q}
= dα ∧ β + (−1)^p α ∧ dβ.   (14.6)

The exterior derivative also satisfies

d(dω) = 0,   (14.8)

because writing

dω = (1/p!) dω_{i₁⋯i_p} ∧ dx^{i₁} ∧ ⋯ ∧ dx^{i_p},   (14.9)

a second application of d produces ∂_i ∂_j ω_{i₁⋯i_p}, which is symmetric in (i, j), contracted with the antisymmetric dxⁱ ∧ dxʲ, hence zero.

Example: for a 1-form A = A_ν dx^ν,

dA = ∂_μ A_ν dx^μ ∧ dx^ν = (1/2)( ∂_μ A_ν − ∂_ν A_μ ) dx^μ ∧ dx^ν,

so the components are

(dA)_{μν} = ∂_μ A_ν − ∂_ν A_μ.   (14.10)
Example: Consider the 1-form on R² \ {0}

α = (x dy − y dx)/(x² + y²).   (14.14)

Then

dα = [ 1/(x² + y²) − 2x²/(x² + y²)² ] dx ∧ dy − [ 1/(x² + y²) − 2y²/(x² + y²)² ] dy ∧ dx
= [ 2/(x² + y²) − 2(x² + y²)/(x² + y²)² ] dx ∧ dy = 0,   (14.15)

so α is closed. On the other hand, in polar coordinates, using dx = dr cos θ − r sin θ dθ and dy = dr sin θ + r cos θ dθ, we find

α = ( r² cos² θ + r² sin² θ ) dθ / r² = dθ.   (14.16)

Since the angle θ is not a globally defined single-valued function on R² \ {0}, α is closed but not exact there.
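The computation (14.15) is easily confirmed symbolically (my own check, assuming sympy):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
r2 = x**2 + y**2
# alpha = (x dy - y dx)/(x^2 + y^2), Eq. (14.14): components (a_x, a_y)
a_x, a_y = -y/r2, x/r2

# (d alpha)_{xy} = d_x a_y - d_y a_x, cf. Eq. (14.10)
d_alpha_xy = sp.simplify(sp.diff(a_y, x) - sp.diff(a_x, y))
assert d_alpha_xy == 0   # alpha is closed away from the origin
```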
Chapter 15
Volume form
The space of p-forms in n dimensions is (n choose p)-dimensional. So the space of n-forms in n dimensions is 1-dimensional, i.e., there is only one independent component, and all n-forms are scalar multiples of one another.

Choose an n-form field. Call it ω. Suppose ω ≠ 0 at some point P. Then given any basis {e_α} of T_P M, we have ω(e₁, ⋯, e_n) ≠ 0 since ω ≠ 0. Thus all vector bases at P fall into two classes, one for which ω(e₁, ⋯, e_n) > 0 and the other for which it is < 0.

Once we have identified these two classes, they are independent of ω. That is, if ω′ is another n-form which is non-zero at P, there must be some function f ≠ 0 such that ω′ = f ω. Two bases which gave positive numbers under ω will give the same sign, both positive or both negative, under ω′, and therefore will be in the same class.

So every basis (set of n linearly independent vectors) is a member of one of the two classes. These are called right-handed and left-handed.

A manifold is called orientable if it is possible to define a continuous n-form field which is non-zero everywhere on the manifold. Then it is possible to choose a basis with the same handedness everywhere on the manifold continuously.

Euclidean space is orientable; the Möbius band is not.
Any n linearly independent vectors define an n-dimensional parallelepiped. If we define an n-form ω ≠ 0, we can think of its value on these vectors as the volume of this parallelepiped. Such an ω is called a volume form.

Once a volume form has been chosen, any set of n linearly independent vectors will define a positive or negative volume.
The integral of a function f on Rn is the sum of the values of f ,
multiplied by infinitesimal volumes of coordinate elements. Similarly,
we define the integral of a function f on an oriented manifold as the
sum of the values of f , multiplied by infinitesimal volumes. The way
to do that is the following.
Given a function f, define an n-form in a chart by ω = f dx¹ ∧ ⋯ ∧ dxⁿ. To integrate over an open set U, divide it up into infinitesimal cells, spanned by vectors

Δx¹ ∂/∂x¹, Δx² ∂/∂x², ⋯, Δxⁿ ∂/∂xⁿ.   (15.1)

The contribution of one cell is ω evaluated on these vectors, f Δx¹ ⋯ Δxⁿ. Adding up the contributions from all cells and taking the limit of cell size going to zero, we find

∫_U ω = ∫_{φ(U)} f dⁿx.   (15.2)

In an overlapping chart φ′ with coordinates yⁱ the same construction gives (in two dimensions, say)

∫_U ω = ∫_{φ′(U)} f(y¹, y²) J dy¹ dy² = ∫_{φ′(U)} f(y¹, y²) J d²y,   (15.4)

where J is the Jacobian determinant of the coordinate change, so the integral is independent of the chart.
Chapter 16
Metric tensor
A metric tensor g is a (0, 2) tensor field with the following properties:

i) bilinear: g(u, av + bw) = a g(u, v) + b g(u, w),   (16.1)
ii) symmetric: g(v, w) = g(w, v),   (16.2)
iii) non-degenerate:
g(v, w) = 0 ∀ w  ⟺  v = 0.   (16.3)

The metric defines an inner product of vectors at each point, with components in a chart g_{ij} = g(∂_i, ∂_j).
Given a manifold with metric, there is a canonical volume form dV (sometimes written as vol), which in a coordinate chart reads

dV = √|det g_{μν}| dx¹ ∧ ⋯ ∧ dxⁿ.   (16.10)

Note that despite the notation, this is not a 1-form, nor the gradient of some function V. This is clearly a volume form because it is an n-form which is non-zero everywhere, as g is non-degenerate.
We need to show that this definition is independent of the chart. Take an overlapping chart. Then in the new chart, the corresponding volume form is

dV′ = √|det g′_{μν}| dx′¹ ∧ ⋯ ∧ dx′ⁿ.   (16.11)

We wish to show that dV′ = dV. In the overlap,

dx′^μ = ∂x′^μ/∂x^ν dx^ν ≡ A^μ_ν dx^ν (say),   (16.12)

while the components of the metric transform as

g′_{μν} = g( (A⁻¹)^ρ_μ ∂_ρ, (A⁻¹)^σ_ν ∂_σ ) = (A⁻¹)^ρ_μ (A⁻¹)^σ_ν g_{ρσ}.   (16.13)

Thus

det g′_{μν} = (det A)⁻² det g_{μν},   (16.14)

so that

√|det g′_{μν}| = |det A|⁻¹ √|det g_{μν}|.   (16.15)

On the other hand, dx′¹ ∧ ⋯ ∧ dx′ⁿ = (det A) dx¹ ∧ ⋯ ∧ dxⁿ, and so dV′ = dV (for an orientation-preserving change of chart, det A > 0).
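As a sanity check (my own, assuming sympy): in the Euclidean plane, passing from Cartesian to polar coordinates gives g′ = diag(1, r²) and √(det g′) = r, which is exactly the Jacobian determinant, so r dr ∧ dθ and dx ∧ dy define the same volume form:

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
X = sp.Matrix([r*sp.cos(th), r*sp.sin(th)])   # Cartesian from polar
J = X.jacobian([r, th])                       # A^mu_nu = dx^mu/dx'^nu

g_polar = (J.T * sp.eye(2) * J).applyfunc(sp.simplify)  # cf. Eq. (16.13)
assert g_polar == sp.diag(1, r**2)

detJ = sp.simplify(J.det())
assert detJ == r                                         # Jacobian
assert sp.simplify(sp.sqrt(g_polar.det()) - detJ) == 0   # cf. Eq. (16.15)
```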
Chapter 17
Hodge duality
We will next define the Hodge star operator. We will define it in a chart rather than abstractly. For a p-form ω, the Hodge dual ?ω is the (n − p)-form with components

(?ω)_{μ₁⋯μ_{n−p}} = (√|g|/p!) ε_{μ₁⋯μ_n} g^{μ_{n−p+1}ν₁} ⋯ g^{μ_nν_p} ω_{ν₁⋯ν_p},   (17.1)

where ω is a p-form.

The ? operator acts on forms, not on components.

Example: Consider R³ with metric +++, i.e. g_{ij} = diag(1, 1, 1). Then |g| ≡ |det g_{ij}| = 1 and g^{ij} = diag(1, 1, 1). Write the coordinate basis 1-forms as dx, dy, dz. Their components are clearly

(dx)_i = δ_{i1},  (dy)_i = δ_{i2},  (dz)_i = δ_{i3}.   (17.2)

Then (?dx)_{ij} = ε_{ij1}, so

?dx = (1/2!) ε_{ij1} dxⁱ ∧ dxʲ = dx² ∧ dx³ = dy ∧ dz.   (17.3)

Similarly, ?dy = dz ∧ dx and ?dz = dx ∧ dy.
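The Euclidean case of Eq. (17.1) is simple enough to implement directly (a sketch of mine, assuming sympy); for a 1-form in R³ with the identity metric, (?ω)_{ij} = ε_{ijk} ω_k:

```python
import sympy as sp

def hodge_1form(w):
    # Hodge dual in Euclidean R^3 of a 1-form with components w:
    # (*w)_{ij} = eps_{ijk} w_k, a special case of Eq. (17.1)
    return sp.Matrix(3, 3, lambda i, j:
                     sum(sp.LeviCivita(i, j, k) * w[k] for k in range(3)))

# dx has components (1, 0, 0); Eq. (17.3) says *dx = dy ^ dz,
# i.e. the antisymmetric component matrix with entry (y, z) equal to 1
star_dx = hodge_1form([1, 0, 0])
assert star_dx == sp.Matrix([[0, 0, 0], [0, 0, 1], [0, -1, 0]])

# similarly *dy = dz ^ dx and *dz = dx ^ dy
assert hodge_1form([0, 1, 0])[2, 0] == 1
assert hodge_1form([0, 0, 1])[0, 1] == 1
```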
Example: p = 0. Then for the constant function 1,

(?1)_{μ₁⋯μ_n} = √|g| ε_{μ₁⋯μ_n},

so that

?1 = (√|g|/n!) ε_{μ₁⋯μ_n} dx^{μ₁} ∧ ⋯ ∧ dx^{μ_n} = dV.   (17.4)

Example: p = n. Then

?ω = (√|g|/n!) ε_{μ₁⋯μ_n} g^{μ₁ν₁} ⋯ g^{μ_nν_n} ω_{ν₁⋯ν_n}.   (17.5)

Applying ? twice on these examples one finds (?)² = (−1)^s on 0-forms and n-forms, where s is the number of minus signs in the metric. In general, on a p-form in an n-dimensional manifold with signature (s, n − s), it can be shown in the same way that

(?)² = (−1)^{p(n−p)+s}.   (17.9)
It is useful to work out the Hodge dual of basis p-forms. Suppose we have a basis p-form dx^{I₁} ∧ ⋯ ∧ dx^{I_p}, where the indices are arranged in increasing order I_p > ⋯ > I₁. Then its components are p! δ^{I₁}_{[ν₁} ⋯ δ^{I_p}_{ν_p]}. So

( ?( dx^{I₁} ∧ ⋯ ∧ dx^{I_p} ) )_{μ₁⋯μ_{n−p}}
= (√|g|/p!) ε_{μ₁⋯μ_{n−p}ν₁⋯ν_p} g^{ν₁ν′₁} ⋯ g^{ν_pν′_p} p! δ^{I₁}_{[ν′₁} ⋯ δ^{I_p}_{ν′_p]}
= √|g| ε_{μ₁⋯μ_{n−p}ν₁⋯ν_p} g^{ν₁I₁} ⋯ g^{ν_pI_p}.   (17.11)
We will use this to calculate ω ∧ ?ω. For a p-form ω, we have

ω = (1/p!) ω_{ν₁⋯ν_p} dx^{ν₁} ∧ ⋯ ∧ dx^{ν_p} = Σ_I ω_{I₁⋯I_p} dx^{I₁} ∧ ⋯ ∧ dx^{I_p},   (17.12)

where the sum over I means a sum over all possible index sets I = {I₁ < ⋯ < I_p}, but there is no sum over the indices {I₁, ⋯, I_p} themselves; in a given index set the I_k are fixed. Using the dual of basis p-forms, and Eq. (13.13), we get

?ω = Σ_I ω_{I₁⋯I_p} ?( dx^{I₁} ∧ ⋯ ∧ dx^{I_p} )
= Σ_I (√|g|/(n−p)!) ε_{μ₁⋯μ_{n−p}ν₁⋯ν_p} g^{ν₁I₁} ⋯ g^{ν_pI_p} ω_{I₁⋯I_p} dx^{μ₁} ∧ ⋯ ∧ dx^{μ_{n−p}}.   (17.13)
The sum over I is a sum over different index sets as before, and the Greek indices are summed over as usual. Thus we calculate

ω ∧ ?ω = Σ_{I,J} ω_{J₁⋯J_p} dx^{J₁} ∧ ⋯ ∧ dx^{J_p} ∧ (√|g|/(n−p)!) ε_{μ₁⋯μ_{n−p}ν₁⋯ν_p} g^{ν₁I₁} ⋯ g^{ν_pI_p} ω_{I₁⋯I_p} dx^{μ₁} ∧ ⋯ ∧ dx^{μ_{n−p}}.   (17.14)
We see that the set {μ₁, ⋯, μ_{n−p}} cannot have any overlap with the set J = {J₁, ⋯, J_p}, because of the wedge product. On the other hand, {μ₁, ⋯, μ_{n−p}} cannot have any overlap with {ν₁, ⋯, ν_p} because ε is totally antisymmetric in its indices. So the set {ν₁, ⋯, ν_p} must have the same elements as the set J = {J₁, ⋯, J_p}, but they may not be in the same order.

Now consider the case where the basis is orthogonal, i.e. g is diagonal. Then g^{ν_k I_k} vanishes unless ν_k = I_k, and in each surviving term the indices {μ₁, ⋯, μ_{n−p}, I₁, ⋯, I_p} form a permutation of {1, ⋯, n} whose sign relative to the standard order is positive. Thus we get

ω ∧ ?ω = √|g| Σ_I ω^{I₁⋯I_p} ω_{I₁⋯I_p} dx¹ ∧ ⋯ ∧ dxⁿ
= √|g| (1/p!) ω^{μ₁⋯μ_p} ω_{μ₁⋯μ_p} dx¹ ∧ ⋯ ∧ dxⁿ
= (1/p!) ω^{μ₁⋯μ_p} ω_{μ₁⋯μ_p} (vol).   (17.18)

But both factors are invariant under a change of basis. So we can now change back to our earlier basis, and find Eq. (17.18) even when the metric is not diagonal. Note that the metric may not be diagonalizable globally or even in an extended region.
Chapter 18
Maxwell equations
We will now consider a particular example in physics where differential forms are useful. The Maxwell equations of electrodynamics are, with c = 1,

∇·E = ρ,   (18.1)
∇×B − ∂E/∂t = j,   (18.2)
∇·B = 0,   (18.3)
∇×E + ∂B/∂t = 0.   (18.4)

The electric and magnetic fields are vectors in three dimensions, but these equations are Lorentz-invariant. We will write these equations in terms of differential forms.

Consider R⁴ with Minkowski metric g_{μν} = diag(−1, 1, 1, 1). For the magnetic field define a 2-form

B = B_x dy ∧ dz + B_y dz ∧ dx + B_z dx ∧ dy,   (18.5)

and for the electric field a 1-form E = E_x dx + E_y dy + E_z dz; they combine into the field strength 2-form

F = B + E ∧ dt.   (18.6)
In the following we use (1, 2, 3) for the component labels x, y, z. Then

dB = d( B₁ dy ∧ dz + B₂ dz ∧ dx + B₃ dx ∧ dy )
= ∂_t B₁ dt ∧ dy ∧ dz + ∂₁B₁ dx ∧ dy ∧ dz
+ ∂_t B₂ dt ∧ dz ∧ dx + ∂₂B₂ dy ∧ dz ∧ dx
+ ∂_t B₃ dt ∧ dx ∧ dy + ∂₃B₃ dz ∧ dx ∧ dy.   (18.7)

And

d(E ∧ dt) = d( E₁ dx ∧ dt + E₂ dy ∧ dt + E₃ dz ∧ dt )
= ∂₂E₁ dy ∧ dx ∧ dt + ∂₃E₁ dz ∧ dx ∧ dt
+ ∂₁E₂ dx ∧ dy ∧ dt + ∂₃E₂ dz ∧ dy ∧ dt
+ ∂₁E₃ dx ∧ dz ∧ dt + ∂₂E₃ dy ∧ dz ∧ dt.   (18.8)

Thus, remembering that the wedge product changes sign under each exchange, we can combine these two to get

dF = ( ∂_t B₁ + ∂₂E₃ − ∂₃E₂ ) dt ∧ dy ∧ dz
+ ( ∂_t B₂ + ∂₃E₁ − ∂₁E₃ ) dt ∧ dz ∧ dx
+ ( ∂_t B₃ + ∂₁E₂ − ∂₂E₁ ) dt ∧ dx ∧ dy
+ ( ∂₁B₁ + ∂₂B₂ + ∂₃B₃ ) dx ∧ dy ∧ dz
= ( ∂_t B₁ + (∇×E)₁ ) dt ∧ dy ∧ dz
+ ( ∂_t B₂ + (∇×E)₂ ) dt ∧ dz ∧ dx
+ ( ∂_t B₃ + (∇×E)₃ ) dt ∧ dx ∧ dy
+ (∇·B) dx ∧ dy ∧ dz,   (18.9)

so the two homogeneous Maxwell equations (18.3) and (18.4) are together equivalent to dF = 0.
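The coefficient matching in Eq. (18.9) can be verified symbolically. The sketch below (my own, assuming sympy) builds F_{μν} from Eqs. (18.5)-(18.6) and checks two representative components of (dF)_{μνρ} = ∂_μ F_{νρ} + ∂_ν F_{ρμ} + ∂_ρ F_{μν}:

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
X = (t, x, y, z)
E = [sp.Function(f'E{i}')(*X) for i in (1, 2, 3)]
B = [sp.Function(f'B{i}')(*X) for i in (1, 2, 3)]

# F = B + E ^ dt as an antisymmetric component matrix:
# F_{i0} = E_i and F_{23} = B_1, F_{31} = B_2, F_{12} = B_3
F = sp.zeros(4, 4)
for i in range(3):
    F[i + 1, 0] = E[i]
    F[0, i + 1] = -E[i]
F[2, 3], F[3, 2] = B[0], -B[0]
F[3, 1], F[1, 3] = B[1], -B[1]
F[1, 2], F[2, 1] = B[2], -B[2]

def dF(m, n, r):
    # (dF)_{mnr} = d_m F_{nr} + d_n F_{rm} + d_r F_{mn}
    return (sp.diff(F[n, r], X[m]) + sp.diff(F[r, m], X[n])
            + sp.diff(F[m, n], X[r]))

div_B = sum(sp.diff(B[i], X[i + 1]) for i in range(3))
faraday_1 = sp.diff(B[0], t) + sp.diff(E[2], y) - sp.diff(E[1], z)

assert sp.simplify(dF(1, 2, 3) - div_B) == 0      # dx^dy^dz part: div B
assert sp.simplify(dF(0, 2, 3) - faraday_1) == 0  # dt^dy^dz part: Faraday
```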
The Hodge duals of the basis 2-forms in this metric are

?(dy ∧ dz) = dt ∧ dx,  ?(dz ∧ dx) = dt ∧ dy,  ?(dx ∧ dy) = dt ∧ dz,
?(dx ∧ dt) = dy ∧ dz,  ?(dy ∧ dt) = dz ∧ dx,  ?(dz ∧ dt) = dx ∧ dy.   (18.10)
Using these, one computes ?F and then d?F in the same way as dF above. Writing the charge and current densities as a 1-form j built from ρ and j, and comparing the result with the inhomogeneous equations (18.1) and (18.2), one finds that the other two Maxwell equations can be written as

d ? F = ?j.   (18.16)

There is also the duality transformation

F → ?F,   (18.17)

which exchanges the roles of the electric and magnetic fields and leaves the vacuum equations invariant. Finally, consider the 4-form F ∧ ?F. This expression makes sense in both flat and curved spacetimes; for the latter, with local coordinates (t, x, y, z), we find

F ∧ ?F = (B² − E²) √−g dt ∧ dx ∧ dy ∧ dz.   (18.18)
Chapter 19
Stokes theorem
We will next discuss a very beautiful result called Stokes formula.
This is actually a theorem, but we will not prove it, only state the
result and discuss its applications. So for us it is only a formula, but
still deep and beautiful.
Stokes theorem states that for a (p − 1)-form ω on a p-dimensional manifold (or region) M with boundary ∂M,

∫_M dω = ∫_{∂M} ω.   (19.1)

For a function f on a curve M running from a to b, this is the fundamental theorem of calculus,

∫_M df = f|_{∂M} = f(b) − f(a).   (19.2)

For a 1-form A on a two-dimensional region D it contains the classical Stokes (curl) theorem, since in a chart

∫_D dA = ∫_{φ(D)} (∂₁A₂ − ∂₂A₁) dx¹ ∧ dx² = ∫_{φ(D)} (∂₁A₂ − ∂₂A₁) d²x.   (19.5)
Gauss divergence theorem is a special case of Stokes theorem. Before getting to Gauss theorem, we need to make a new definition. Consider an n-form ω ≠ 0 on an n-dimensional manifold. We can write this in a chart as

ω = f dx¹ ∧ ⋯ ∧ dxⁿ = (f/n!) ε_{μ₁⋯μ_n} dx^{μ₁} ∧ ⋯ ∧ dx^{μ_n}.   (19.6)
Given a vector field v , its contraction with ω is
i_v ω = ω(v, ·) = (1/(n−1)!) f ε_{μ_1 μ_2 ⋯ μ_n} v^{μ_1} dx^{μ_2} ∧ ⋯ ∧ dx^{μ_n}
= f v^1 dx^2 ∧ ⋯ ∧ dx^n − f v^2 dx^1 ∧ dx^3 ∧ ⋯ ∧ dx^n + ⋯  (19.7)
Then we can calculate
d(i_v ω) = ∂_1 (f v^1) dx^1 ∧ dx^2 ∧ ⋯ ∧ dx^n
+ ∂_2 (f v^2) dx^1 ∧ dx^2 ∧ ⋯ ∧ dx^n
+ ⋯ + ∂_n (f v^n) dx^1 ∧ dx^2 ∧ ⋯ ∧ dx^n
= ∂_μ (f v^μ) dx^1 ∧ dx^2 ∧ ⋯ ∧ dx^n
= (1/f) ∂_μ (f v^μ) ω .  (19.8)
In particular, if ω is the volume form, we can write
ω = (vol) = (√|g|/n!) ε_{μ_1 ⋯ μ_n} dx^{μ_1} ∧ ⋯ ∧ dx^{μ_n} ,  (19.9)
and then
d(i_v (vol)) = (1/√|g|) ∂_μ (√|g| v^μ) (vol) .  (19.10)
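Formula (19.10) can be checked in flat space in spherical coordinates, where √|g| = r^2 sin θ; the function names below are ours:

```python
import sympy as sp

r, th, ph = sp.symbols('r theta phi', positive=True)
sqrt_g = r**2 * sp.sin(th)       # sqrt|g| for the flat metric in spherical coords

def divergence(vr, vth, vph):
    """(19.10): div v = (1/sqrt|g|) d_mu (sqrt|g| v^mu), coords (r, theta, phi)."""
    return sp.simplify((sp.diff(sqrt_g * vr, r)
                        + sp.diff(sqrt_g * vth, th)
                        + sp.diff(sqrt_g * vph, ph)) / sqrt_g)

# The radial field v = r d/dr is x d/dx + y d/dy + z d/dz in Cartesian
# coordinates, whose divergence is 3; (19.10) reproduces this.
assert divergence(r, 0, 0) == 3

# A general radial field f(r) d/dr gives the familiar (1/r^2) d_r (r^2 f)
f = sp.Function('f')(r)
assert sp.simplify(divergence(f, 0, 0) - sp.diff(r**2 * f, r) / r**2) == 0
```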
Similarly, for a vector field v define the (n−1)-form
⋆v = (√|g|/(n−1)!) ε_{μ_1 ⋯ μ_{n−1} μ} v^μ dx^{μ_1} ∧ ⋯ ∧ dx^{μ_{n−1}} .  (19.11)
Then
d⋆v = ∂_ν ((√|g|/(n−1)!) ε_{μ_1 ⋯ μ_{n−1} μ} v^μ) dx^ν ∧ dx^{μ_1} ∧ ⋯ ∧ dx^{μ_{n−1}}
= ((−1)^{n−1}/(n−1)!) ε_{μ_1 ⋯ μ_{n−1} μ} ∂_ν (√|g| v^μ) dx^{μ_1} ∧ ⋯ ∧ dx^{μ_{n−1}} ∧ dx^ν
= (−1)^{n−1} ∂_μ (√|g| v^μ) dx^1 ∧ ⋯ ∧ dx^n  (19.12)
= ((−1)^{n−1}/√|g|) ∂_μ (√|g| v^μ) (vol) .  (19.13)
Applying ⋆ once more and using ⋆(vol) = (−1)^s , where s is the number of minus signs in the signature of the metric, we find
⋆d⋆v = ((−1)^{n+s−1}/√|g|) ∂_μ (√|g| v^μ) .  (19.14)
Now suppose b is a 1-form normal to ∂U , i.e. b(d/dt) = 0 for any vector d/dt tangent to ∂U , and σ is an (n−1)-form such that b ∧ σ = (vol) . Since all n-forms are proportional, σ always exists. For the same reason, if b ≠ 0 on ∂U , it is unique up to a factor. And b ≠ 0 on ∂U because ∂U is defined as the submanifold where one coordinate is constant, usually set to zero, so that one component of d/dt vanishes at any point on ∂U , and therefore the corresponding component of b can be chosen to be non-zero.
So b is unique up to a rescaling b → b′ = f b for some nonvanishing function f . But then we can always rescale σ → σ′ = (1/f) σ so that b′ ∧ σ′ = b ∧ σ . Further, if we restrict σ to ∂U , i.e. if σ acts only on tangent vectors to ∂U , we find that σ is an (n−1)-form on an (n−1)-dimensional manifold, so it is unique up to scaling. Therefore, σ is unique once b is given. Finally, for any vector v ,
i_v (vol)|_{∂U} = i_v (b ∧ σ)|_{∂U} = b(v) σ|_{∂U}  (19.16)
because all terms of the form b ∧ (i_v σ) give zero for any choice of (n−1) vectors tangent to ∂U .
Then we have
∫_U (1/√|g|) ∂_μ (√|g| v^μ) (vol) = ∫_U d(i_v (vol)) = ∫_{∂U} i_v (vol)
= ∫_{∂U} b(v) σ
= ∫_{∂U} (n_μ v^μ) σ ,  (19.18)
where we used Stokes formula in the second step and wrote b = n_μ dx^μ . This is the Gauss divergence theorem.
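A minimal sympy check of the divergence theorem on the unit cube in flat R^3 (the sample field is our choice):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
v = (x**2, y, sp.Integer(0))        # sample vector field (our choice)

divv = sum(sp.diff(vi, xi) for vi, xi in zip(v, (x, y, z)))
volume_integral = sp.integrate(divv, (x, 0, 1), (y, 0, 1), (z, 0, 1))

# Flux through the six faces of the unit cube, with outward normals
flux = 0
for i, xi in enumerate((x, y, z)):
    others = [c for c in (x, y, z) if c is not xi]
    lims = [(c, 0, 1) for c in others]
    flux += sp.integrate(v[i].subs(xi, 1), *lims)    # face xi = 1, n = +e_i
    flux -= sp.integrate(v[i].subs(xi, 0), *lims)    # face xi = 0, n = -e_i

assert flux == volume_integral == 2
```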
Chapter 20
Lie groups
We start a brief discussion on Lie groups, mainly with an eye to
their structure as manifolds and also application to the theory of
fiber bundles.
Example: R\{0} is a Lie group under multiplication. So is C\{0} .
Example: The symplectic group Sp(2n, R) = {R ∈ GL(2n, R) | R^T J R = J} , where
J = (    0       1_{n×n} )
    ( −1_{n×n}      0    ) .  □
Example: O(p, q) = {R ∈ GL(p+q, R) | R^T η_{p,q} R = η_{p,q}} , where
η_{p,q} = ( 1_{p×p}      0    )
          (    0     −1_{q×q} ) .  □
Example: U(p, q) = {U ∈ GL(p + q, C) | U† η_{p,q} U = η_{p,q}} .
Example: The Special Orthogonal group SO(n) is the subgroup of O(n) with determinant +1. Similarly, the Special Unitary group SU(n) is the subgroup of U(n) with determinant +1, and similarly for SO(p, q) and SU(p, q) .
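As a small numerical illustration of the O(p, q) condition R^T η R = η, a Lorentz boost of rapidity a lies in O(1, 1) (the numbers are our choice):

```python
import numpy as np

eta = np.diag([1.0, -1.0])          # eta_{1,1}
a = 0.7                             # rapidity, arbitrary
R = np.array([[np.cosh(a), np.sinh(a)],
              [np.sinh(a), np.cosh(a)]])
assert np.allclose(R.T @ eta @ R, eta)       # R is in O(1,1)
assert np.isclose(np.linalg.det(R), 1.0)     # in fact R is in SO(1,1)
```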
The group U(1) is the group of phases, U(1) = {e^{iθ} | θ ∈ R} . As a
manifold, this is isomorphic to a circle S 1 .
The group SU (2) is isomorphic as a manifold to a three-sphere
S 3 . These are the only two spheres (other than the point S 0 ) which
admit a Lie group structure.
Left translation l_g : G → G ,  g′ ↦ gg′ ;  □
Right translation r_g : G → G ,  g′ ↦ g′g .  □
These can be defined for any group, but are diffeomorphisms for Lie groups. We see that
l_{g^{-1}} ∘ l_g (g′) = l_{g^{-1}}(gg′) = g^{-1}gg′ = g′  ⇒  (l_g)^{-1} = l_{g^{-1}} ,
r_{g^{-1}} ∘ r_g (g′) = r_{g^{-1}}(g′g) = g′gg^{-1} = g′  ⇒  (r_g)^{-1} = r_{g^{-1}} .  (20.2)
Chapter 21
A vector field X on G is left-invariant if
X = l_{g*}(X)  for all g ∈ G ,  (21.1)
i.e.
l_{g*}(X_{g′}) = X_{gg′}  for all g, g′ ∈ G .  (21.2)
Similarly, X is right-invariant if
r_{g*}(X_{g′}) = X_{g′g} .  (21.3)
A left-invariant vector field is completely determined by its value at the identity e , since
l_{g*}(X_e) = X_g .  (21.4)
We will write L(G) for the space of left-invariant vector fields on G . Recall also that the pushforward by a diffeomorphism φ preserves the commutator of vector fields,
φ_*[X , Y] = [φ_* X , φ_* Y] .  (21.6)
Thus if X, Y L(G) ,
l_{g*}[X , Y] = [l_{g*} X , l_{g*} Y] = [X , Y] ,
(21.8)
so the commutator of two left-invariant vector fields is again left-invariant.
Given X ∈ T_e G , we can define a vector field L^X on G by
L^X_g = l_{g*} X .  (21.10)
Pushforwards compose as
φ_{1*}(φ_{2*} v) = (φ_1 ∘ φ_2)_* v ,  (21.11)
so that
l_{g′*} L^X_g = l_{g′*} l_{g*} X = l_{(g′g)*} X = L^X_{g′g} .  (21.12)
So it follows that L^X is a left-invariant vector field, and we have a map T_e G → L(G) . Since the pushforward is a linear map, so is the map X ↦ L^X . We need to prove that this map is 1-1 and onto.
If L^X = L^Y , we have
L^X_g = L^Y_g  for all g ∈ G ,  (21.13)
so
l_{g^{-1}*} L^X_g = l_{g^{-1}*} L^Y_g  ⇒  X = Y  (X, Y ∈ T_e G) ,  (21.14)
i.e. the map X ↦ L^X is one-to-one. To see that it is also onto, take any left-invariant vector field Y ∈ L(G) and let X = Y_e ∈ T_e G . Then by left-invariance
Y_g = l_{g*} Y_e = l_{g*} X = L^X_g  for any g ∈ G ,  (21.17)
so Y = L^X . Thus X ↦ L^X is an isomorphism of vector spaces.
Note that since commutators are defined for vector fields and not
vectors, the Lie bracket on Te G has to be defined using the commutator of left-invariant vector fields on G and the isomorphism
T_e G ≅ L(G) .
Choosing a basis {t_i} , we can write [t_i , t_j] = Σ_k C^k_{ij} t_k , which defines the structure constants C^k_{ij} . Antisymmetry of the commutator gives
[t_i , t_j] = −[t_j , t_i]
⇒ Σ_k C^k_{ij} t_k = −Σ_k C^k_{ji} t_k
⇒ C^k_{ij} = −C^k_{ji} ,  (21.21)
while the Jacobi identity implies (sum over l)
C^l_{ij} C^m_{kl} + C^l_{jk} C^m_{il} + C^l_{ki} C^m_{jl} = 0 .  (21.22)
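The antisymmetry (21.21) and the Jacobi identity (21.22) can be verified numerically for so(3), where C^k_ij = ε_ijk (the basis (t_i)_jk = −ε_ijk is a standard choice, not fixed by the text):

```python
import numpy as np
from itertools import permutations

eps = np.zeros((3, 3, 3))
for i, j, k in permutations(range(3)):
    eps[i, j, k] = (j - i) * (k - i) * (k - j) / 2   # +1 or -1 on permutations

t = [-eps[i] for i in range(3)]   # so(3) basis, (t_i)_jk = -eps_ijk
C = eps                           # structure constants, C[i, j, k] = C^k_ij

# [t_i, t_j] = sum_k C^k_ij t_k
for i in range(3):
    for j in range(3):
        comm = t[i] @ t[j] - t[j] @ t[i]
        assert np.allclose(comm, sum(C[i, j, k] * t[k] for k in range(3)))

assert np.allclose(C, -C.transpose(1, 0, 2))   # (21.21): C^k_ij = -C^k_ji
J = (np.einsum('ijl,klm->ijkm', C, C)          # C^l_ij C^m_kl
     + np.einsum('jkl,ilm->ijkm', C, C)        # C^l_jk C^m_il
     + np.einsum('kil,jlm->ijkm', C, C))       # C^l_ki C^m_jl
assert np.allclose(J, 0)                       # (21.22): Jacobi identity
```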
The bracket is thus defined through this isomorphism,
[X , Y] := [L^X , L^Y]_e  for X, Y ∈ T_e G .  (21.23)
Chapter 22
A one-parameter (1-p) subgroup of G is a smooth curve γ : R → G with γ(s)γ(t) = γ(s + t) and γ(0) = e .
Example: G = GL(3, R) ,
γ(t) = ( cos t  −sin t   0  )
       ( sin t   cos t   0  )
       (   0       0    e^t ) .
The relation between 1-p subgroups and T_e G is given by the
Theorem: The map γ ↦ γ′(0) is a one-to-one correspondence between one-parameter subgroups of G and elements of T_e G .
Given X ∈ T_e G , let γ(t) be the integral curve through the identity of the left-invariant vector field L^X , i.e. the solution of
γ′(t) = L^X_{γ(t)} = l_{γ(t)*} X ,  γ(0) = e .  (22.1)
Since L^X is left-invariant, l_{g′*} L^X_g = L^X_{g′g} . Consider the curve s ↦ γ(t)γ(s) for fixed t . It satisfies
(d/ds)(γ(t)γ(s)) = l_{γ(t)*} γ′(s) = l_{γ(t)*} L^X_{γ(s)} = L^X_{γ(t)γ(s)} ,  (22.2)
so s ↦ γ(t)γ(s) and s ↦ γ(t + s) obey the same first-order equation with the same value γ(t) at s = 0 . Hence
γ(t)γ(s) = γ(t + s) ,  (22.5)
i.e. γ is a one-parameter subgroup, with γ′(0) = l_{e*} X = X . Conversely, every one-parameter subgroup satisfies this equation with X = γ′(0) . □
In a compact connected Lie group G , every element lies on some
1-p subgroup. This is not true in a non-compact G , i.e. there are
elements in G which do not lie on a 1-p subgroup. However, an
Abelian non-compact group will always have a 1-p subgroup, so this
remark applies only to non-Abelian non-compact groups.
For matrix groups, every 1-p subgroup is of the form
γ(t) = e^{tM} ,  M fixed, t ∈ R .  (22.7)
Let us see why. Suppose {γ(t)} is a 1-p subgroup of the matrix group. Then γ(t) is a matrix for each t , and
γ(s)γ(t) = γ(s + t) .  (22.8)
Differentiating with respect to s and setting s = 0 gives
γ′(0)γ(t) = γ′(t) .  (22.9)
Write γ′(0) = M ; the solution of this equation with γ(0) = I is
γ(t) = e^{tM}  (22.10)
for all the 1-p subgroups γ(t) , so these matrices M are in fact the tangent vectors at the identity. The allowed matrices {M} for a given matrix group G thus form a Lie algebra with the Lie bracket being given by the matrix commutator. This Lie algebra is isomorphic to the Lie algebra of the group G . (We will not give a proof of this here.)
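A numerical sketch of (22.8)–(22.10) for this chapter's GL(3, R) example, using scipy's matrix exponential; the −sin t entry is our reading of the rotation block:

```python
import numpy as np
from scipy.linalg import expm

def gamma(t):
    """The GL(3,R) example: a rotation block plus a dilation e^t."""
    return np.array([[np.cos(t), -np.sin(t), 0.0],
                     [np.sin(t),  np.cos(t), 0.0],
                     [0.0, 0.0, np.exp(t)]])

M = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])        # M = gamma'(0)

s, t = 0.3, 1.1
assert np.allclose(gamma(s) @ gamma(t), gamma(s + t))   # (22.8)
assert np.allclose(expm(t * M), gamma(t))               # (22.7), (22.10)
h = 1e-6                                                # gamma'(0) = M numerically
assert np.allclose((gamma(h) - gamma(-h)) / (2 * h), M, atol=1e-8)
```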
We can find the Lie algebra of a matrix group by considering elements of the form γ(t) = e^{tM} for small t , i.e.,
γ(t) = I + tM + O(t^2) .  (22.11)
For example, for O(n) the defining condition γ(t)^T γ(t) = I gives
(I + tM^T)(I + tM) = I + t(M^T + M) + O(t^2) = I ,  (22.13)
so the Lie algebra of O(n) consists of the antisymmetric matrices,
M^T = −M .  (22.14)
Similarly, for U(n) we find the anti-Hermitian matrices, M† = −M , while for determinant one the identity det e^{tM} = e^{t tr M} gives the additional condition
tr M = 0 .  (22.16)
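Conversely, exponentiating an element of the Lie algebra lands in the group; a quick numerical check for O(4) (the random seed is arbitrary):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 4))
M = X - X.T                     # antisymmetric, i.e. M^T = -M as in (22.14)
R = expm(M)
assert np.allclose(R.T @ R, np.eye(4))       # e^M is orthogonal
assert np.isclose(np.linalg.det(R), 1.0)     # tr M = 0, so det e^M = e^{tr M} = 1
```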
Chapter 23
Fiber bundles
Consider a manifold M with the tangent bundle T M = ∪_{P∈M} T_P M .
There is a projection map π : E → B , and if P ∈ B , the preimage π^{-1}(P) is homeomorphic, i.e. bicontinuously isomorphic, to the standard fiber.  □
E is called the total space, but usually it is also called the bundle, even though the bundle is actually the triple (E, π, B) .
A bundle morphism between bundles (E_1, π_1, B_1) and (E_2, π_2, B_2) is a pair of maps, F : E_1 → E_2 and f : B_1 → B_2 , such that π_2 ∘ F = f ∘ π_1 . (This is of course better understood in terms of a commutative diagram.)
Not all systems of coordinates are appropriate for a bundle. But it
is possible to define a set of fiber coordinates in the following way.
Given a differentiable fiber bundle with n-dimensional base manifold
B and p-dimensional fiber F , the coordinates of the bundle are given
by bundle morphisms onto open sets of R^n × R^p .  □
On the overlap of two such coordinate patches, the fibers are related by the transition functions
φ_i ∘ φ_j^{-1} : (U_i ∩ U_j) × F → (U_i ∩ U_j) × F .  (23.1)
The transition functions take values in a group G of transformations of F , called the structure group of the bundle.
A fiber bundle in which the typical fiber F is identical (or homeomorphic) to the structure group G , and on which G acts by left translation, is called a principal fiber bundle.  □
Example: 1. Typical fiber = S 1 , structure group U (1) .
2. Typical fiber = S 3 , structure group SU (2) .
3. Bundle of frames, for which the typical fiber is GL(n , R) , as
is the structure group.
Given a principal G-bundle and a representation {D(g)} of G on a vector space V , we can construct an associated vector bundle with typical fiber V ; on the fibers E_x of this bundle, the transition functions act by
v ↦ D(g) v ,  v ∈ V .  (23.2)
Gauge transformations are then always linear transformations T_x : E_x → E_x which are members of the representation {D(g)} . Let us write the space of all sections of this bundle as Γ(E) . An element s ∈ Γ(E) is a map from the base space to the bundle with
π ∘ s = id ,  (23.3)
i.e. such a map assigns an element of V to each point of the base space.
In two overlapping local trivializations, the same section is represented by
s(x) = (x, v(x))  and  s(x) = (x, v̄(x)) ,  (23.4)
with
v̄(x) = D(g_{αβ}(x)) v(x) .  (23.5)
In the other notation we have been using, v and v̄ are images of the same vector v_x ∈ V_x , and v̄ = D(g_{αβ}) v . A gauge transformation T acts as
T_x : (x, v) ↦ (x, gv) .  (23.6)
Acting on sections, a gauge transformation is thus specified by a function x ↦ g(x) ∈ G ,
(gφ)(x) = g(x) φ(x) .  (23.7)
The gauge transformations form a group under pointwise multiplication,
(g_1 g_2)(x) = g_1(x) g_2(x) ,  (23.8)
with identity the constant function x ↦ e and inverse
(g^{-1})(x) = (g(x))^{-1} .  (23.10)
Chapter 24
Connections
There is no canonical way to differentiate sections of a fiber bundle.
That is to say, no unique derivative arises from the definition of a
bundle. Let us see why. The usual derivative of a function on R is
of the form
f′(x) = lim_{ε→0} (f(x + ε) − f(x))/ε .  (24.1)
For a section s , the analogous difference would involve s(x + ε) ∈ E_{x+ε} and s(x) ∈ E_x , which live in different fibers and cannot be subtracted without further structure. A connection D provides this structure: it assigns to every vector field v a map D_v : Γ(E) → Γ(E) which is additive,
D_v (s_1 + s_2) = D_v s_1 + D_v s_2 ,  (24.2)
satisfies the Leibniz rule
D_v (f s) = v(f) s + f D_v s ,  (24.3)
and is linear in v , D_{v+w} = D_v + D_w . In a local basis {e_i} of sections we write, with D_μ ≡ D_{∂_μ} ,
D_μ e_i = A_{μi}{}^j e_j ,  (24.4)
which defines the components of the gauge field (connection coefficients) A_μ .
Also,
D_μ s = D_μ (s^i e_i) = (∂_μ s^i) e_i + s^i D_μ e_i
= (∂_μ s^i) e_i + s^i A_{μi}{}^j e_j
= (∂_μ s^i + A_{μj}{}^i s^j) e_i ,  (24.5)
and for a general vector field v ,
D_v s = v^μ D_μ s .  (24.6)
Under a gauge transformation g , we define a new covariant derivative D′ by
D′_v (φ) = g D_v (g^{-1} φ) ,  (24.7)
where v is a vector field on M and φ ∈ Γ(E) (i.e. φ = s ).
Let us first check if the definition makes sense. Since g(x) ∈ G for all x ∈ M , we know that g(x)^{-1} exists for all x . So
D′_v (φ) = g D_v (g^{-1} φ) ,  (24.8)
and thus D′ is defined on all φ for which D_v is defined, i.e. D′ exists
because D does. We have of course assumed that g(x) is differentiable
as a function of x.
Let us now check that D′ is a connection according to our definitions. D′ is additive since
D′_v (φ_1 + φ_2) = g D_v (g^{-1}(φ_1 + φ_2))
= g D_v (g^{-1} φ_1) + g D_v (g^{-1} φ_2)
= D′_v φ_1 + D′_v φ_2 .  (24.9)
The Leibniz rule also holds:
D′_v (f φ) = g D_v (g^{-1} f φ)
= g v(f) g^{-1} φ + g f D_v (g^{-1} φ)
= v(f) φ + f g D_v (g^{-1} φ)
= v(f) φ + f D′_v (φ) .  (24.10)
Similarly,
D′_{v+w} φ = g D_{v+w} (g^{-1} φ)
= g (D_v (g^{-1} φ) + D_w (g^{-1} φ))
= g D_v (g^{-1} φ) + g D_w (g^{-1} φ)
= D′_v φ + D′_w φ .  (24.11)
We can write the connection compactly as
D_μ = ∂_μ + A_μ ,  A_μ = A_{μj}{}^i e_i ⊗ θ^j ,  (24.13)
where θ^i is the dual basis to {e_i} . The gauge transformation is then given by
D′_v = g D_v g^{-1}
D′_μ = g D_μ g^{-1}
∂_μ + A′_μ = g (∂_μ + A_μ) g^{-1} ,  (24.14)
where, as always, the g's are in some appropriate representation of
G. Then we can write the right hand side as
∂_μ + g (∂_μ g^{-1}) + g A_μ g^{-1} .  (24.15)
From this we can read off
A′_μ = g A_μ g^{-1} + g ∂_μ g^{-1} .
(24.16)
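The content of (24.16) is that D′ = ∂ + A′ acting on gφ equals g times (∂ + A)φ. A sympy sketch with G = SO(2) in its defining representation (the gauge parameter a(x) and the matrix entries are arbitrary functions; the setup is ours):

```python
import sympy as sp

x = sp.symbols('x')
a = sp.Function('a')(x)                      # arbitrary gauge parameter (our choice)
g = sp.Matrix([[sp.cos(a), -sp.sin(a)],
               [sp.sin(a),  sp.cos(a)]])     # g(x) in SO(2)
ginv = g.T                                   # = g^{-1} for a rotation

A = sp.Matrix(2, 2, lambda i, j: sp.Function(f'A{i}{j}')(x))   # gauge field A_mu
phi = sp.Matrix(2, 1, lambda i, j: sp.Function(f'phi{i}')(x))  # a section

Aprime = g * A * ginv + g * sp.diff(ginv, x)     # (24.16)

lhs = sp.diff(g * phi, x) + Aprime * (g * phi)   # (d + A')(g phi)
rhs = g * (sp.diff(phi, x) + A * phi)            # g (d + A) phi
assert (lhs - rhs).applyfunc(sp.simplify) == sp.zeros(2, 1)
```

The assertion is the statement that the gauge-transformed covariant derivative of the gauge-transformed section is the gauge transform of the original covariant derivative.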
Chapter 25
Curvature
We start with a connection D , two vector fields v, w on B , and a section s , all on some associated vector bundle of some principal G-bundle E . Then D_v , D_w are both maps Γ(E) → Γ(E) .
We will define the curvature of this connection D as a rule F which, given two vector fields v , w , produces a linear map F(v, w) : Γ(E) → Γ(E) by
F(v, w) s = D_v D_w s − D_w D_v s − D_{[v,w]} s .  (25.1)
Remember that
D_v s = D_{v^μ ∂_μ} s = v^μ D_μ s
= v^μ (∂_μ s^i + A_{μj}{}^i s^j) e_i
= v(s^i) e_i + v^μ A_{μj}{}^i s^j e_i .  (25.2)
Inserting this into the previous equation and writing Dw Dv s similarly, we find
D_v D_w s − D_w D_v s = [v , w](s^i) e_i + [v , w]^μ A_{μj}{}^i s^j e_i
+ (v^μ w^ν − w^μ v^ν) (∂_μ A_{νj}{}^i) s^j e_i
+ v^μ w^ν (A_{μk}{}^i A_{νj}{}^k − A_{νk}{}^i A_{μj}{}^k) s^j e_i .  (25.5)
Also,
D_{[v,w]} s = [v , w](s^i) e_i + [v , w]^μ A_{μj}{}^i s^j e_i ,
(25.6)
so that
F(v, w) s = v^μ w^ν (∂_μ A_{νj}{}^i − ∂_ν A_{μj}{}^i + A_{μk}{}^i A_{νj}{}^k − A_{νk}{}^i A_{μj}{}^k) s^j e_i .
(25.7)
Thus we can define F_{μν} by
F(∂_μ , ∂_ν) s = F_{μν} s = (F_{μν} s)^i e_i = (F_{μν})^i{}_j s^j e_i ,  (25.8)
with components
(F_{μν})^i{}_j = ∂_μ A_{νj}{}^i − ∂_ν A_{μj}{}^i + A_{μk}{}^i A_{νj}{}^k − A_{νk}{}^i A_{μj}{}^k ,  (25.9)
so that
F(v, w) = v^μ w^ν F_{μν} .  (25.10)
In matrix notation, F_{μν} = ∂_μ A_ν − ∂_ν A_μ + [A_μ , A_ν] .
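For coordinate vector fields [∂_0 , ∂_1] = 0, so the D_{[v,w]} term in (25.1) drops out and F_{01} s should equal [D_0 , D_1] s. A sympy check with arbitrary 2×2 matrix functions (the setup is ours):

```python
import sympy as sp

x0, x1 = sp.symbols('x0 x1')

def mfun(name, rows, cols):
    return sp.Matrix(rows, cols,
                     lambda i, j: sp.Function(f'{name}{i}{j}')(x0, x1))

A = [mfun('A0', 2, 2), mfun('A1', 2, 2)]     # gauge field components A_0, A_1
s = mfun('s', 2, 1)                          # a section

def D(mu, sec):
    """Covariant derivative D_mu = d_mu + A_mu on sections, cf. (24.5)."""
    return sp.diff(sec, (x0, x1)[mu]) + A[mu] * sec

# For coordinate vector fields, (25.1) is a plain commutator:
comm = D(0, D(1, s)) - D(1, D(0, s))

F01 = (sp.diff(A[1], x0) - sp.diff(A[0], x1)
       + A[0] * A[1] - A[1] * A[0])          # (25.9) in matrix form
assert (comm - F01 * s).applyfunc(sp.simplify) == sp.zeros(2, 1)
```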
It is not very difficult to work out that the curvature acts linearly
on the module of sections,
F(u, v)(s_1 + f s_2) = F(u, v) s_1 + f F(u, v) s_2 ,  (25.11)
and is antisymmetric and function-linear in its arguments,
F(u, v) = −F(v, u) ,  (25.12)
F(f u, v) = f F(u, v) .  (25.13)
Since F_{μν} s is a section, so is
D_λ (F_{μν} s) = D_λ [D_μ , D_ν] s .  (25.14)
Similarly, since D_λ s is a section, so is
F_{μν} D_λ s = [D_μ , D_ν] D_λ s .  (25.15)
Thus
D_λ (F_{μν} s) − F_{μν} D_λ s = [D_λ , [D_μ , D_ν]] s .  (25.16)
Considering C^∞ sections, and noting that maps are associative under map composition, we find that
[D_λ , [D_μ , D_ν]] s + cyclic = 0 .  (25.17)
Writing, as before,
F_{μν} s = (F_{μν} s)^i e_i = (F_{μν})^i{}_j s^j e_i ,  (25.18)
the commutator of D_λ with F_{μν} acts as the covariant derivative of F ,
[D_λ , F_{μν}] s = (D_λ F_{μν}) s ,  D_λ F_{μν} = ∂_λ F_{μν} + [A_λ , F_{μν}] ,  (25.19)
so that (25.17) becomes
(D_λ F_{μν}) s + cyclic = 0  ⇒  D_λ F_{μν} + cyclic = 0 ,  (25.21)
since s is arbitrary. This is the Bianchi identity.
Then
D′_u D′_v = D′_u (g D_v g^{-1})
= g D_u D_v g^{-1} ,
(25.23)
and thus
F′(u, v) ≡ D′_u D′_v − D′_v D′_u − D′_{[u,v]}
= g (D_u D_v − D_v D_u − D_{[u,v]}) g^{-1}
= g F(u, v) g^{-1} ,
i.e.
F′_{μν} = g F_{μν} g^{-1} .  (25.24)