
Foundations of Mathematical Physics

Paul P. Cook and Neil Lambert


Department of Mathematics, King's College London
The Strand, London WC2R 2LS, UK

email: paul.cook@kcl.ac.uk
email: neil.lambert@kcl.ac.uk

Contents

1 Classical Mechanics
  1.1 Lagrangian Mechanics
      1.1.1 Conserved Quantities
  1.2 Noether's Theorem
  1.3 Hamiltonian Mechanics
      1.3.1 Hamilton's equations
      1.3.2 Poisson Brackets
      1.3.3 Duality and the Harmonic Oscillator
      1.3.4 Noether's theorem in the Hamiltonian formulation

2 Special Relativity and Component Notation
  2.1 The Special Theory of Relativity
      2.1.1 The Lorentz Group and the Minkowski Inner Product
  2.2 Component Notation
      2.2.1 Matrices and Matrix Multiplication
      2.2.2 Common Four-Vectors
      2.2.3 Classical Field Theory
      2.2.4 Maxwell's Equations
      2.2.5 Electromagnetic Duality

3 Quantum Mechanics
  3.1 Canonical Quantisation
      3.1.1 The Hilbert Space and Observables
      3.1.2 Eigenvectors and Eigenvalues
      3.1.3 A Countable Basis
      3.1.4 A Continuous Basis
  3.2 The Schrödinger Equation
      3.2.1 The Heisenberg and Schrödinger Pictures

4 Group Theory
  4.1 The Basics
  4.2 Common Groups
      4.2.1 The Symmetric Group S_n
      4.2.2 Back to Basics
  4.3 Group Homomorphisms
      4.3.1 The First Isomorphism Theorem
  4.4 Some Representation Theory
      4.4.1 Schur's Lemma
      4.4.2 The Direct Sum and Tensor Product
  4.5 Lie Groups
  4.6 Lie Algebras: Infinitesimal Generators
  4.7 Everything you wanted to know about SU(2) and SO(3) but were afraid to ask
      4.7.1 SO(3) = SU(2)/Z_2
      4.7.2 Representations
      4.7.3 Representations Revisited
  4.8 The Invariance of Physical Law
      4.8.1 Translations
      4.8.2 Special Relativity and the Infinitesimal Generators of SO(1,3)
      4.8.3 The Proper Lorentz Group and SL(2,C)
      4.8.4 Representations of the Lorentz Group and Lorentz Tensors

Chapter 1

Classical Mechanics

1.1 Lagrangian Mechanics

Newton's second law of motion states that for a body of constant mass m acted on by a force F,

    F = \frac{d}{dt}(p) = m\ddot{x}     (1.1)

where p is the linear momentum (p \equiv m\dot{x}), x is the position of the body and \dot{x} \equiv \frac{dx}{dt}. Hence if F = 0 then the linear momentum is conserved: \dot{p} = 0.

F is called a conservative force if the following two equivalent statements hold:

(i) The work done under the force is path-independent, and
(ii) The force may be derived from a scalar field: F = -\nabla V.

If so then the energy, defined as E = \frac{1}{2}m|\dot{x}|^2 + V, is constant.
The work done by a mass m subject to a force F moving on a path from x(t_1) to x(t_2) is

    W = \int_{x(t_1)}^{x(t_2)} F \cdot dx
      = \int_{t_1}^{t_2} F \cdot \dot{x} \, dt
      = \int_{t_1}^{t_2} m\ddot{x} \cdot \dot{x} \, dt
      = \int_{t_1}^{t_2} \frac{d}{dt}\Big( \frac{1}{2}m\dot{x}^2 \Big) dt     (1.2)
      = \frac{1}{2}m\dot{x}^2(t_2) - \frac{1}{2}m\dot{x}^2(t_1)
      \equiv \Delta T
where T \equiv \frac{1}{2}m\dot{x}^2 is the kinetic energy. One sees that if F = -\nabla V then we immediately have

    W = \int_{x_1}^{x_2} F \cdot dx
      = -\int_{x_1}^{x_2} \nabla V \cdot dx     (1.3)
      = V(x_1) - V(x_2),

which is path independent.


In general the work done depends on the precise path taken from x(t_1) to x(t_2). It seems common sense that pushing a supermarket trolley from x(t_1) to x(t_2) requires an amount of work that is path-dependent - a path may be short or long, it might traverse a hill or go around it - and one might expect the amount of work to vary for each path. Nevertheless, for many theoretical examples, including those where work is done against and by the force of gravity, the work function is path-independent. An example of a path-dependent work function is the work done against friction.¹

Whenever W is path-independent the force F is called conservative. If the force depends only on positions, and not on velocities, then it can always be derived from a scalar field V, called the potential, as

    F = -\nabla V.     (1.4)

When F is conservative the work function W depends only on the values of V at the endpoints of the path:

    W = -\int_{t_1}^{t_2} \nabla V \cdot \dot{x} \, dt
      = -\int_{t_1}^{t_2} \Big( \frac{\partial V}{\partial x}\frac{dx}{dt} + \frac{\partial V}{\partial y}\frac{dy}{dt} + \frac{\partial V}{\partial z}\frac{dz}{dt} \Big) dt
      = -\int_{t_1}^{t_2} \frac{dV}{dt} \, dt     (1.5)
      = -(V(t_2) - V(t_1)).

In terms of kinetic energy we had W = T(t_2) - T(t_1), hence

    T(t_2) - T(t_1) = V(t_1) - V(t_2)   \Longrightarrow   (T + V)(t_1) = (T + V)(t_2).     (1.6)

Hence a conservative force conserves the energy E \equiv T + V over time.


In terms of the potential V, Newton's second law of motion (for a constant mass) becomes:

    -\partial_i V = m\ddot{x}^i     (1.7)

where x^i are the components of the vector x (i.e. i \in \{1, 2, 3\}) and we have introduced the notation \partial_i for \frac{\partial}{\partial x^i}. This law of motion may be derived from a variational principle on the functional²

    S = \int_{t_1}^{t_2} dt \, L     (1.8)
t1
¹You might consider the work done moving around a closed loop. For a conservative force the work is zero: split the closed loop into two journeys, from A to B and from B to A; as the work done by a conservative force depends only on A and B we have W_{AB} = V_A - V_B = -W_{BA}, hence the total work around the loop equals W_{AB} + W_{BA} = 0. For work against a friction force there is a positive contribution to the work around every leg of the journey, which does not vanish when summed.
²A functional takes a function as its argument and returns a number. The action is a function of the vectors x, \dot{x} as well as the scalar time t, and returns a real-valued number.


called the action, where L is the Lagrangian. To each path the action assigns a number using the Lagrangian.

    [Figure: a family of paths between the fixed endpoints x(t_1) and x(t_2), each assigned a number by the action.]     (1.9)

You may recall from optics the principle of least time, which is used to discover which path a photon travels in moving from A to B. The path a photon takes when it is refracted as it moves between two media is dictated by this principle. The situation for refraction is analogous to the physicist on the beach who observes a drowning swimmer out at sea. The physicist knows that she can travel faster on the sand than she can swim, so her optimal route will not be a straight line towards the swimmer but a line which minimises the journey time to the swimmer. This line will be bent in the middle, composed of two straight segments which change direction at the boundary between the sand and the sea. How does she work out which path she should follow to reach the swimmer in optimal time? Well, she first derives a function which computes, for each path to the swimmer, the time the path takes to travel. Then she considers the infinitude of all possible paths to the swimmer and reads off from her function the time each path will take. The path that takes the shortest time will extremise her function (as will the path of longest time, if it exists), and she can find the quickest path to take in this way. Of course the swimmer may not thank her for taking so long. In a similar manner the action assigns a number to each motion a system may make, and the dynamical motion is determined when the action is extremised. The action contains the Lagrangian, which is defined by

    L(x_i, \dot{x}_i; t) \equiv T - V = \sum_{i=1}^{n} \frac{1}{2}m_i\dot{x}_i^2 - \sum_{i=1}^{n} V_i     (1.10)

for a system of n particles of masses m_i with position vectors x_i and velocities \dot{x}_i. Note that here we are not referring to the i'th component of a vector but rather to the properties of the i'th particle. The equations of motion are found by extremising the action S. For simplicity of notation we will consider only a one-particle system (i.e. n = 1):

    \delta S = \delta \int_{t_1}^{t_2} dt \, L
             = \int_{t_1}^{t_2} dt \, \delta\Big( \frac{1}{2}m\dot{x}^2 - V(x) \Big)
             = \int_{t_1}^{t_2} dt \, \big[ m\dot{x} \cdot \delta\dot{x} - \delta x \cdot \nabla V(x) \big]
             = \int_{t_1}^{t_2} dt \, \Big[ m\dot{x}_i \frac{d}{dt}(\delta x_i) - \partial_i V \, \delta x_i \Big]     (1.11)
             = \int_{t_1}^{t_2} dt \, \Big[ -\frac{d}{dt}(m\dot{x}_i) - \partial_i V \Big] \delta x_i + \Big[ \delta x_i \, m\dot{x}_i \Big]_{t_1}^{t_2}


where we have used integration by parts in the final line. Under the variation the action is expected to change at all orders:

    S(x + \delta x) = S(x) + \frac{\delta S}{\delta x}\delta x + O((\delta x)^2) \equiv S + \delta S + O((\delta x)^2)     (1.12)

When the first-order variation of S vanishes (\delta S = 0) the action is extremised. Each path from x(t_1) to x(t_2) gives a different value of the action, and the extremisation of the action occurs only for certain paths between the fixed points. From above we see that when \delta S = 0 (and noting that the endpoints of the path are fixed, hence \delta x(t_1) = \delta x(t_2) = 0) then
    \delta S = \int_{t_1}^{t_2} dt \, \Big[ -\frac{d}{dt}(m\dot{x}_i) - \partial_i V \Big] \delta x_i = 0     (1.13)

for all \delta x_i, which is satisfied only when Newton's law of motion holds for the path with components x_i (i.e. when -\partial_i V = \frac{d}{dt}(m\dot{x}_i)). This is no coincidence, as Lagrange's equations may be derived from Newton's second law.
More generally, a generic dynamical system may be described by n generalised coordinates q_i and n generalised velocities \dot{q}_i, where i = 1, 2, 3, ..., n and n is the number of independent degrees of freedom of the system. The choice of generalised coordinates is where the art of dynamics resides. Imagine a system of N particles moving in three-dimensional space. There are 2 × 3N Cartesian coordinates and velocities which describe this system. Now suppose further that the particles are all constrained to move on the surface of a sphere of radius R. One could make the change of coordinates to spherical coordinates, but for each particle the radial coordinate would be redundant (since it is fixed to equal the sphere's radius R) and the new coordinates would be awash with trigonometric functions. As the surface of the sphere is two-dimensional, only two coordinates on the surface of the sphere are needed to identify a unique position. One reasonable choice is the pair of angular variables \theta and \phi, defined relative to the z-axis and the x-axis for example. These are independent coordinates and are an example of generalised coordinates. To summarise the example: each particle has three Cartesian coordinates which must satisfy one constraint, the equation x^2 + y^2 + z^2 = R^2, hence there are only two generalised coordinates per particle, which may be chosen as (\theta, \phi).
The Lagrangian function is defined via Cartesian coordinates, but constraint equations allow one to rewrite the Lagrangian in terms of q_i and \dot{q}_i, i.e. L = L(q_i, \dot{q}_i; t). The equations of motion for the system are the (Euler-)Lagrange equations:

    \frac{d}{dt}\Big( \frac{\partial L}{\partial \dot{q}_i} \Big) - \frac{\partial L}{\partial q_i} = 0     (1.14)

Problem 1.1.1. Derive the Lagrange equations for an abstract Lagrangian L(q_i, \dot{q}_i) by extremising the action S.


Example 1: The free particle.

For a single free particle in R^3 we have:

    L = T - V     (1.15)
      = \frac{1}{2}m(\dot{x}^2 + \dot{y}^2 + \dot{z}^2) - V     (1.16)

The generalised coordinates may be picked to be any n quantities which completely parameterise the resulting path of the particle; in this case Cartesian coordinates suffice (i.e. let q_1 \equiv x, q_2 \equiv y, q_3 \equiv z). The particle is not subject to a force, hence V = 0, and hence the Lagrange equations (1.14) give

    \frac{d}{dt}(m\dot{q}_i) = 0     (1.17)

i.e. that linear momentum is conserved.


Example 2: The linear harmonic oscillator.

The system has one coordinate, q, and the potential is V(q) = \frac{1}{2}kq^2 where k > 0 (n.b. F = -kq). The Lagrangian is

    L = \frac{1}{2}m\dot{q}^2 - \frac{1}{2}kq^2     (1.18)

and the equation of motion (1.14) gives

    \frac{d}{dt}(m\dot{q}) + kq = 0   \Longrightarrow   \ddot{q} = -\frac{k}{m}q     (1.19)

Hence we find

    q(t) = A\cos(\omega t) + B\sin(\omega t)     (1.20)

where \omega \equiv \sqrt{\frac{k}{m}} is the frequency of oscillation and A and B are real constants. The energy for these solutions is

    E = \frac{1}{2}m\dot{q}^2 + \frac{1}{2}kq^2 = \frac{1}{2}k(A^2 + B^2)     (1.21)
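As a quick check of (1.21) (a step not spelled out above), substitute (1.20) into the energy, using $\dot{q} = \omega(-A\sin\omega t + B\cos\omega t)$ and $m\omega^2 = k$:

    E = \frac{1}{2}m\omega^2(-A\sin\omega t + B\cos\omega t)^2 + \frac{1}{2}k(A\cos\omega t + B\sin\omega t)^2 = \frac{1}{2}k(A^2 + B^2),

since the cross terms $\mp 2AB\sin\omega t\cos\omega t$ cancel and $\sin^2\omega t + \cos^2\omega t = 1$. In particular E is constant in time, as expected for a conservative force.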

Example 3: Circular motion.

Consider a bead of mass m constrained to move under gravity on a frictionless, circular, immobile, rigid hoop of radius R, such that the hoop lies in a vertical plane. The Lagrangian formulation offers a neat way to ignore the forces of constraint (which keep the bead attached to the hoop) via the use of generalised coordinates. If the hoop rests in the xz-plane and is centred at z = R then the Cartesian coordinates (in terms of a suitably chosen generalised coordinate q \equiv \theta) of the bead are:

    x = R\cos\theta,        \dot{x} = -R\dot{\theta}\sin\theta
    y = 0,                  \dot{y} = 0     (1.22)
    z = R + R\sin\theta,    \dot{z} = R\dot{\theta}\cos\theta


These encode the statement that the bead is constrained to move on the hoop, without our needing to consider any of the forces acting to keep the bead on the hoop. The Lagrangian is

    L = \frac{1}{2}m(\dot{x}^2 + \dot{y}^2 + \dot{z}^2) - V     (1.23)
      = \frac{1}{2}mR^2\dot{\theta}^2 - mg(R\sin\theta + R)     (1.24)

where we have used the gravitational potential V = mgz (so that -\partial_z V = -mg \equiv F_G). The equations of motion (1.14) are

    \frac{d}{dt}(mR^2\dot{\theta}) + mgR\cos\theta = 0
    \Longrightarrow   mR^2\ddot{\theta} = -mgR\cos\theta
    \Longrightarrow   \ddot{\theta} = -\frac{g}{R}\cos\theta     (1.25)
                                    = -\frac{g}{R}\Big( 1 - \frac{\theta^2}{2} + O(\theta^4) \Big)

For \theta \ll 1 we have \theta \approx -\frac{1}{2}\frac{g}{R}t^2 + At + B, where A and B are real constants. Obviously the assumption used for this approximation fails after a short time!

1.1.1 Conserved Quantities

For every ignorable coordinate in the Lagrangian there is an associated conserved quantity. That is, if L(q_i, \dot{q}_i; t) satisfies \frac{\partial L}{\partial q_i} = 0 then, as a consequence of (1.14),

    \frac{d}{dt}\Big( \frac{\partial L}{\partial \dot{q}_i} \Big) = 0     (1.26)

and \frac{\partial L}{\partial \dot{q}_i} is conserved. This quantity is called the generalised momentum p_i associated to the generalised coordinate q_i:

    p_i \equiv \frac{\partial L}{\partial \dot{q}_i}.     (1.27)

For example, consider free circular motion (set V = 0 in the last example), where we have:

    L = \frac{1}{2}mR^2\dot{\theta}^2.     (1.28)

We observe that \theta is an ignorable coordinate, as \frac{\partial L}{\partial \theta} = 0, and hence p_\theta = mR^2\dot{\theta} is conserved. This is the conservation of angular momentum, as |r \times p| = p_\theta, as you may confirm.
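To confirm the last claim (a short check of ours, not worked in the notes): taking the circle to lie in the xy-plane for simplicity, $r = (R\cos\theta, R\sin\theta, 0)$ and $p = m\dot{r} = mR\dot{\theta}(-\sin\theta, \cos\theta, 0)$, so

    r \times p = \big( 0, 0, mR^2\dot{\theta}(\cos^2\theta + \sin^2\theta) \big) = (0, 0, mR^2\dot{\theta})

and hence $|r \times p| = mR^2\dot{\theta} = p_\theta$, the conserved generalised momentum.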

1.2 Noether's Theorem

Theorem 1.2.1. (Noether) To every continuous symmetry of an action there is an associated conserved quantity.

Let us denote the action by S_R[q] where

    S_R[q] \equiv \int_R dt \, L(q, \dot{q})   where   R = [t_1, t_2].     (1.29)

There are two types of symmetry that we would like to consider,


(i.) Spatial: S_R[q'] = S_R[q], and

(ii.) Space-time: S_{R'}[q'] = S_R[q].

These two types foreshadow the symmetries that appear in field theory, where an internal symmetry, such as an SO(n) scalar symmetry, rotates the Lagrangian into itself; other types of symmetry of the action are called external. The spatial symmetries above are a symmetry of the Lagrangian alone and are the prototype of an internal symmetry. We will consider Noether's theorem for a spatial symmetry, case (i), first, and find the associated conserved quantity (also called the conserved charge).

Case (i) occurs if we have a symmetry of the Lagrangian:

    L[q_i', \dot{q}_i'] = L[q_i, \dot{q}_i],     (1.30)

where the symmetry acts as

    q_i \to q_i' = q_i + \epsilon\Delta_i(q) \equiv q_i + \delta q_i     (1.31)

In fact all that is required is a symmetry of the action, so it is possible that L is invariant only up to a boundary term:

    L[q_i', \dot{q}_i'] = L[q_i, \dot{q}_i] + \epsilon\frac{dK}{dt},     (1.32)

for some expression K.


Now

    L(q_i + \delta q_i, \dot{q}_i + \delta\dot{q}_i) = L(q_i, \dot{q}_i) + \sum_i \Big( \frac{\partial L}{\partial q_i}\delta q_i + \frac{\partial L}{\partial \dot{q}_i}\delta\dot{q}_i \Big) + O(\delta q)^2.     (1.33)

If the transformation q_i \to q_i' is a symmetry then by definition \delta L = \epsilon\frac{dK}{dt} up to terms of O(\delta q_i^2), so that

    \sum_i \Big( \frac{\partial L}{\partial q_i}\Delta_i + \frac{\partial L}{\partial \dot{q}_i}\dot{\Delta}_i \Big) = \frac{dK}{dt}     (1.34)

The conserved quantity is explicitly given by

    Q \equiv \sum_i \frac{\partial L}{\partial \dot{q}_i}\Delta_i - K     (1.35)

and all we need to do is compute:

    \frac{dQ}{dt} = \sum_i \frac{d}{dt}\Big( \frac{\partial L}{\partial \dot{q}_i} \Big)\Delta_i + \sum_i \frac{\partial L}{\partial \dot{q}_i}\dot{\Delta}_i - \frac{dK}{dt}
                  = \sum_i \frac{\partial L}{\partial q_i}\Delta_i + \sum_i \frac{\partial L}{\partial \dot{q}_i}\dot{\Delta}_i - \frac{dK}{dt}     (1.36)
                  = 0,

where we have used the equation of motion to get to the second line and (1.34) to get to the third line.


Next we turn to case (ii). In fact this can be treated in the same way by including a correction to K. To see this note that, to lowest order,

    S_{R'}[q'] = \int_{t_1}^{t_2} L + \int_{t_1}^{t_2} \delta L + \int_{t_2}^{t_2+\delta t_2} L + \int_{t_1+\delta t_1}^{t_1} L
               = S_R + \int_{t_1}^{t_2} \delta L + L(t_2)\delta t_2 - L(t_1)\delta t_1     (1.37)
               = S_R + \int_{t_1}^{t_2} \Big( \delta L + \frac{d}{dt}(L\delta t) \Big) dt.

Thus it is just as if K \to K + L\delta t.
Example 1

Suppose that the spatial translation given by

    q_i \to q_i' = q_i + \epsilon a_i     (1.38)

where a_i is a constant shift in the i'th generalised coordinate, is a symmetry of the action. Then we see that the conserved charge is

    Q = \sum_i a_i \frac{\partial L}{\partial \dot{q}_i} = \sum_i a_i p_i     (1.39)

where the p_i are the generalised momenta. The conserved quantity is a linear sum of the generalised momenta, which are all independently conserved.
Example 2

Suppose that temporal translation is a symmetry of the action, i.e. there is no explicit time dependence, so \frac{\partial L}{\partial t} = 0. Let the translation be

    t \to t' = t + \epsilon     (1.40)

where \epsilon is a constant. The coordinates shift as follows:

    q_i \to q_i' = q_i(t + \epsilon) = q_i + \epsilon\dot{q}_i,   i.e.   \Delta_i = \dot{q}_i     (1.41)

and similarly \delta\dot{q}_i = \epsilon\ddot{q}_i. Following the discussion above, the change in the boundary terms means that we need to use the corrected formula for the conserved quantity, with K = L:

    Q = \sum_i \frac{\partial L}{\partial \dot{q}_i}\dot{q}_i - L = \sum_i p_i\dot{q}_i - L = H.     (1.42)

Thus for time translations the Hamiltonian is the conserved quantity.


Problem 1.2.1. The Lagrangian for a two-dimensional harmonic oscillator is

    L = \frac{m}{2}(\dot{x}^2 + \dot{y}^2) - \frac{k}{2}(x^2 + y^2)

where x and y are Cartesian coordinates, \dot{x} and \dot{y} are their time-derivatives, m is the mass of the oscillator and k is a constant.


(a.) Rewrite the Lagrangian in terms of the complex coordinate z = x + iy, its complex conjugate \bar{z} and their time-derivatives.

(b.) Show that

    z \to z' = e^{i\theta}z = z + i\theta z + O(\theta^2)

is a symmetry of the Lagrangian.

(c.) Consider the infinitesimal version of the transformation given in part (b.), so that \delta z = i\theta z. Find the conserved quantity Q associated to this transformation and use the equations of motion to prove directly that its time-derivative \frac{dQ}{dt} is zero.

1.3 Hamiltonian Mechanics

Hamiltonians also encode the dynamics of a physical system. There is an invertible map from a Lagrangian to a Hamiltonian, so no information is lost. The map is the Legendre transform and is used to define the Hamiltonian H:

    H(q_i, p_i; t) = \sum_i \dot{q}_i p_i - L     (1.43)

where

    p_i = \frac{\partial L}{\partial \dot{q}_i}     (1.44)

is the conjugate momentum. N.B. the Hamiltonian is a function of q_i and p_i, and not of q_i and \dot{q}_i. In particular we use equation (1.44) to solve for \dot{q}_i as a function of p_i, and then we do not see \dot{q}_i again (except when we look at the time-evolution equations).
The Hamiltonian is closely related to the energy of the system. While the dynamics of the Lagrangian system are described by a single point (q) in an n-dimensional vector space called configuration space, the equivalent structure for Hamiltonian dynamics is the 2n-dimensional phase space, where a single point is described by the vector (q, p). This is a little more than cosmetics, as the equations of motion describing the two systems differ. The Lagrangian system has n second-order differential equations describing the motion, while the Hamiltonian system has 2n first-order equations of motion. In both cases 2n boundary conditions are required to completely solve the equations of motion.
Example.
Let L = \sum_i \frac{1}{2}m\dot{q}_i^2 - V(q); then p_i = \frac{\partial L}{\partial \dot{q}_i} = m\dot{q}_i, so that

    H = \sum_i \dot{q}_i(m\dot{q}_i) - \sum_i \frac{1}{2}m\dot{q}_i^2 + V(q)
      = \sum_i \frac{1}{2}m\dot{q}_i^2 + V(q)     (1.45)
      = \sum_i \frac{p_i^2}{2m} + V(q).

1.3.1 Hamilton's equations

As H \equiv H(q_i, p_i; t), then

    dH = \sum_i \Big( \frac{\partial H}{\partial q_i}dq_i + \frac{\partial H}{\partial p_i}dp_i \Big) + \frac{\partial H}{\partial t}dt.     (1.46)

While as H = \sum_i \dot{q}_i p_i - L we also have

    dH = \sum_i \Big( \dot{q}_i dp_i + p_i d\dot{q}_i - \frac{\partial L}{\partial q_i}dq_i - \frac{\partial L}{\partial \dot{q}_i}d\dot{q}_i \Big) - \frac{\partial L}{\partial t}dt
       = \sum_i \Big( \dot{q}_i dp_i - \frac{\partial L}{\partial q_i}dq_i \Big) - \frac{\partial L}{\partial t}dt     (1.47)

where we have used the definition of the conjugate momentum p_i = \frac{\partial L}{\partial \dot{q}_i} to eliminate two terms in the final line. By comparing the coefficients of dq_i, dp_i and dt in the two expressions for dH we find

    \dot{q}_i = \frac{\partial H}{\partial p_i},   \dot{p}_i = -\frac{\partial H}{\partial q_i},   \frac{\partial H}{\partial t} = -\frac{\partial L}{\partial t}     (1.48)

where we have used Lagrange's equation (1.14) to observe that \dot{p}_i = \frac{\partial L}{\partial q_i}. The first two of the above equations are usually referred to as Hamilton's equations of motion. Notice that these are 2n first-order differential equations, compared to Lagrange's equations which are n second-order differential equations.

Example.
If

    H = \frac{p^2}{2m} + V(q)     (1.49)

then

    \dot{q} = \frac{\partial H}{\partial p} = \frac{p}{m}   and   \dot{p} = -\frac{\partial H}{\partial q} = -\frac{\partial V}{\partial q}.     (1.50)

In other words we find, for this simple system, p = m\dot{q} (the definition of linear momentum if q is a Cartesian coordinate) and F = -\frac{\partial V}{\partial q} = \dot{p} (Newton's second law).
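The same bookkeeping can be automated. Below is a minimal sketch (in Python with sympy; not part of the original notes) that performs the Legendre transform (1.43)-(1.44) for the harmonic oscillator and reads off Hamilton's equations (1.48); all symbol names are our own choices.

    import sympy as sp

    q, qdot, p = sp.symbols('q qdot p')
    m, k = sp.symbols('m k', positive=True)

    L = sp.Rational(1, 2)*m*qdot**2 - sp.Rational(1, 2)*k*q**2

    # Conjugate momentum (1.44): p = dL/d(qdot); invert to get qdot as a function of p.
    qdot_of_p = sp.solve(sp.Eq(p, sp.diff(L, qdot)), qdot)[0]   # p/m

    # Legendre transform (1.43): H = qdot*p - L, written in terms of (q, p) only.
    H = sp.expand((qdot*p - L).subs(qdot, qdot_of_p))
    print(H)                 # p**2/(2*m) + k*q**2/2

    # Hamilton's equations (1.48).
    print(sp.diff(H, p))     # qdot = p/m
    print(-sp.diff(H, q))    # pdot = -k*q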

1.3.2 Poisson Brackets

The Hamiltonian formulation of mechanics, while equivalent to the Lagrangian formulation, makes manifest a symmetry of the dynamical system. Notice that if we interchange q_i and p_i in Hamilton's equations, the two equations are interchanged up to a minus sign. This kind of skew-symmetry indicates that Hamiltonian dynamical systems possess a symplectic structure, and the phase space is related to the symplectic group Sp(2n) (see the group theory chapter for the definition of the symplectic group). There is, consequently, a useful skew-symmetric structure that exists on the phase space. It is called the Poisson bracket and is defined by

    \{f, g\} \equiv \sum_i \Big( \frac{\partial f}{\partial q_i}\frac{\partial g}{\partial p_i} - \frac{\partial g}{\partial q_i}\frac{\partial f}{\partial p_i} \Big)     (1.51)

where f = f(q_i, p_i) and g = g(q_i, p_i) are arbitrary functions on phase space.


One can write the equations of motion using the Poisson bracket as

    \dot{q}_i = \{q_i, H\} = \frac{\partial H}{\partial p_i}   and   \dot{p}_i = \{p_i, H\} = -\frac{\partial H}{\partial q_i}.     (1.52)

Being curious pattern-spotters we may wonder whether it is generally the case that \dot{f} = \{f, H\} for an arbitrary function f(q_i, p_i) on phase space. It is indeed the case, as

    \{f, H\} = \sum_i \Big( \frac{\partial f}{\partial q_i}\frac{\partial H}{\partial p_i} - \frac{\partial H}{\partial q_i}\frac{\partial f}{\partial p_i} \Big)
             = \sum_i \Big( \frac{\partial f}{\partial q_i}\frac{dq_i}{dt} + \frac{dp_i}{dt}\frac{\partial f}{\partial p_i} \Big)     (1.53)
             = \frac{df}{dt}

if f = f(q_i, p_i). The set of Poisson brackets acting simply on q_i and p_j are known as the fundamental or canonical Poisson brackets. They have a simple form:

    \{q_i, p_j\} = \delta_{ij}
    \{q_i, q_j\} = 0     (1.54)
    \{p_i, p_j\} = 0

which one may confirm by direct computation.
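As a concrete illustration (a minimal sketch in Python with sympy, ours rather than from the notes): we can implement (1.51) for one degree of freedom and confirm both the canonical brackets (1.54) and \dot{f} = \{f, H\} for the oscillator Hamiltonian.

    import sympy as sp

    q, p = sp.symbols('q p')
    m, k = sp.symbols('m k', positive=True)

    def poisson(f, g):
        # {f, g} = (df/dq)(dg/dp) - (dg/dq)(df/dp), i.e. (1.51) with n = 1
        return sp.diff(f, q)*sp.diff(g, p) - sp.diff(g, q)*sp.diff(f, p)

    print(poisson(q, p))                  # 1, i.e. {q, p} = 1
    print(poisson(q, q), poisson(p, p))   # 0 0

    H = p**2/(2*m) + k*q**2/2
    print(poisson(q, H))                  # p/m,  which is qdot as in (1.52)
    print(poisson(p, H))                  # -k*q, which is pdot as in (1.52)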

1.3.3 Duality and the Harmonic Oscillator

In string theory there are a number of surprising transformations called T-dualities which leave the theory unchanged but give a new interpretation to the setting. By T-duality one observes that the theory is unchanged whether the fundamental distance is R or \frac{1}{R}. This is a most unusual statement which you will learn more about elsewhere.³ The prototype for duality transformations in a physical theory is the electromagnetic duality, which we will look at briefly after we have discussed special relativity and tensor notation. The simplest duality transformation is exhibited by the harmonic oscillator. We have seen that the Lagrangian and Hamiltonian of the harmonic oscillator are

    L = \frac{1}{2}m\dot{q}^2 - \frac{1}{2}kq^2
    H = \frac{p^2}{2m} + \frac{kq^2}{2}.     (1.55)

The Hamilton equations are

    \dot{q} = \frac{p}{m}   and   \dot{p} = -kq   \Longrightarrow   \ddot{q} = -\frac{k}{m}q     (1.56)

and these have the solution

    q = A\cos(\omega t) + B\sin(\omega t)   where   \omega \equiv \sqrt{\frac{k}{m}}.     (1.57)

³If we were able to make such a transformation of the world we observe, we would expect it to appear very different - if we survived.


The solution is unchanged under the transformation

    (m, k) \to \Big( \frac{1}{k}, \frac{1}{m} \Big)     (1.58)

as \sqrt{(1/m)/(1/k)} = \sqrt{k/m} = \omega. The transformation, which we call a duality, leaves the solution of the equations of motion unchanged. However the Lagrangian is transformed as

    L \to L' = \frac{\dot{q}^2}{2k} - \frac{q^2}{2m}     (1.59)

and looks rather different. The Hamiltonian is transformed as

    H \to H' = \frac{kp^2}{2} + \frac{q^2}{2m}     (1.60)

which up to a canonical transformation is identical to the original Hamiltonian H. The precise canonical transformation is

    q \to q' = p
    p \to p' = -q     (1.61)

which takes H' \to H. The transformation above is canonical as the Poisson brackets are preserved: \{q', p'\} = \{p, -q\} = 1. The Hamiltonian with dual parameters is canonically equivalent to the original Hamiltonian. Investigation of dualities can be rewarding; for example, it is surprising to realise that the harmonic oscillator with large mass m and large spring constant k is equivalent to the same system with small mass \frac{1}{k} and small spring constant \frac{1}{m}.

1.3.4 Noether's theorem in the Hamiltonian formulation

Canonical transformations (q_i \to q_i', p_i \to p_i') are those transformations which preserve the form of the equations of motion written in the transformed variables, i.e. under a canonical transformation the equations of motion become

    \dot{q}_i' = \frac{\partial H(q_i', p_i')}{\partial p_i'}   and   \dot{p}_i' = -\frac{\partial H(q_i', p_i')}{\partial q_i'}.     (1.62)

A necessary and sufficient condition for a transformation to be canonical is that the fundamental Poisson brackets are preserved under the transformation, i.e.

    \{q_i', p_j'\} = \delta_{ij},   \{q_i', q_j'\} = 0   and   \{p_i', p_j'\} = 0.     (1.63)

In fact a canonical transformation may be generated by an arbitrary function f(q_i, p_i) on phase space via

    q_i \to q_i' = q_i + \epsilon\{q_i, f\} \equiv q_i + \delta q_i
    p_i \to p_i' = p_i + \epsilon\{p_i, f\} \equiv p_i + \delta p_i     (1.64)

Note that

    \delta q_i = \epsilon\{q_i, f\} = \epsilon\frac{\partial f}{\partial p_i}     (1.65)
    \delta p_i = \epsilon\{p_i, f\} = -\epsilon\frac{\partial f}{\partial q_i}     (1.66)


In fact if \epsilon \ll 1 then the transformation is an infinitesimal canonical transformation. It is easy to check that this preserves the fundamental Poisson brackets up to terms of order O(\epsilon^2), e.g.

    \{q_i', p_j'\} = \{q_i + \epsilon\{q_i, f\}, p_j + \epsilon\{p_j, f\}\}
                   = \{q_i, p_j\} + \epsilon\big( \{\{q_i, f\}, p_j\} + \{q_i, \{p_j, f\}\} \big) + O(\epsilon^2)
                   = \{q_i, p_j\} + \epsilon\Big( \Big\{ \frac{\partial f}{\partial p_i}, p_j \Big\} - \Big\{ q_i, \frac{\partial f}{\partial q_j} \Big\} \Big) + O(\epsilon^2)     (1.67)
                   = \delta_{ij} + \epsilon\Big( \frac{\partial^2 f}{\partial q_j \partial p_i} - \frac{\partial^2 f}{\partial p_i \partial q_j} \Big) + O(\epsilon^2)
                   = \delta_{ij} + O(\epsilon^2).

If the infinitesimal canonical transformation generated by f is a symmetry of the Hamiltonian then \delta H = 0 under the transformation. Now,

    \delta H = \sum_i \Big( \frac{\partial H}{\partial q_i}\delta q_i + \frac{\partial H}{\partial p_i}\delta p_i \Big)
             = \epsilon\sum_i \Big( \frac{\partial H}{\partial q_i}\frac{\partial f}{\partial p_i} - \frac{\partial H}{\partial p_i}\frac{\partial f}{\partial q_i} \Big)     (1.68)
             = \epsilon\{H, f\}
             = -\epsilon\frac{df}{dt}

where we have assumed that f is an explicit function of the phase space variables and not time, i.e. \frac{\partial f}{\partial t} = 0. Hence if the transformation is a symmetry, \delta H = 0, then f(q_i, p_i) is a conserved quantity.
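A simple example (ours, in the spirit of Example 1 of section 1.2): take f = p_j for some fixed j. Then (1.64) gives \delta q_i = \epsilon\{q_i, p_j\} = \epsilon\delta_{ij} and \delta p_i = 0, so f generates a translation of the single coordinate q_j. If H is independent of q_j then \delta H = 0 and the generator f = p_j is conserved, recovering the statement that the momentum conjugate to an ignorable coordinate is conserved.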


Chapter 2

Special Relativity and Component Notation
In 1905 Einstein published four papers which each changed the world. In the first he established that energy occurs in discrete quanta, which since the work of Max Planck had been thought to be a property of the energy-transfer mechanism rather than of energy itself - this work really opened the door for the development of quantum mechanics. In his second paper Einstein used an analysis of Brownian motion to establish the physical existence of atoms. In his third and fourth papers he set out the special theory of relativity and derived the most famous equation in physics, if not mathematics, relating energy to rest mass: E = mc^2. Hence 1905 is often referred to as Einstein's annus mirabilis.

At the time Einstein had been refused a number of academic positions and was working in the patent office in Bern. He was living with his wife and two young children while he was writing these historic papers. Not only was he insightful but perhaps, more importantly, he was dedicated and industrious. He must also have been pretty tired too. In 1921 Einstein was awarded the Nobel prize for his work on the photoelectric effect (the work in the first of his four papers that year), but special relativity was overlooked (partly because it was very difficult to verify its predictions accurately at the time). If there is any message to be taken from the decision of the Nobel committee, it is probably that you should keep your own counsel with regard to the quality of your work.

In this chapter we will give a brief description of the special theory of relativity - a more complete description of the theory will require group theory and will be covered again in the group theory chapter. One consequence of relativity is that time and space are put on an equal footing, and we will need to develop the notation we have used for classical mechanics, in which time was a special variable. Consequently we will spend some time developing our notation and will also consider the component notation for tensors. Sometimes a good notation is as good as a new idea.

2.1 The Special Theory of Relativity

The theory was constructed on two simple postulates:

(1.) the laws of physics are independent of the inertial reference frame of the observer, and

(2.) the speed of light is a constant for all observers.


Surprisingly these simple postulates necessitated that the coordinate and time transformations between two different frames F and F', moving at relative speed v in the x-direction, were no longer the Galilean transformations but rather the Lorentz transformations:

    t' = \gamma\Big( t - \frac{xv}{c^2} \Big)
    x' = \gamma(x - vt)     (2.1)
    y' = y
    z' = z

where

    \gamma \equiv \Big( \sqrt{1 - \frac{v^2}{c^2}} \Big)^{-1}.     (2.2)
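Before motivating (2.1), a quick numerical sanity check (a minimal Python sketch of ours, not from the notes): boosting by v and then by -v should undo the transformation.

    import numpy as np

    c = 1.0  # work in units where c = 1

    def boost(t, x, v):
        # Apply the Lorentz transformation (2.1) along the x-axis.
        g = 1.0 / np.sqrt(1.0 - v**2 / c**2)
        return g*(t - v*x/c**2), g*(x - v*t)

    t, x = 3.0, 1.5
    tp, xp = boost(t, x, 0.6)       # frame moving at v = 0.6c
    t2, x2 = boost(tp, xp, -0.6)    # boost back
    assert np.allclose([t2, x2], [t, x])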

Let us consider two thought experiments to motivate these transformations; the first will demonstrate time dilation and the second the shortening of length. Consider a clock formed of two perfect mirrors separated vertically such that a photon bouncing between the mirrors takes one second to travel from the bottom mirror to the top mirror and back again. It is consequently a very tall clock: it has height h = \frac{c}{2} metres, where c is the speed of light (hence h = \frac{299792458}{2} = 149,896,229 metres in a vacuum!). Let us set the clock in motion with a speed v in the +x-direction and consider two observers: one in the rest frame of the clock, F, and a second observer in a frame F' relative to which the clock moves at speed v along the x-axis. Suppose that at time t = 0 the origins of the two frames F and F' coincide. The observer in frame F' sees the ticking of the relatively moving photon clock slow down. Schematically we indicate a view of the moving clock as seen from frame F' below:

    [Figure: the photon's zig-zag path in the clock of height h = c/2, as the clock moves in the x-direction.]

The photon in the moving clock is now seen to move along the hypotenuse of a right-angled triangle as the clock moves horizontally. What are the dimensions of this triangle as seen from frame F'? The height is the same as the clock's, \frac{c}{2}. As viewed from the frame F', where the clock appears to be moving, t' seconds are observed to pass, in which time the clock's base has moved a distance vt'. Now using the Pythagorean formula and the second postulate of special relativity (that the speed of light is a constant) we find that


the photon travels a distance ct' where

    ct' = 2\sqrt{ \frac{c^2}{4} + \frac{v^2 t'^2}{4} } = \sqrt{ c^2 + v^2 t'^2 }.     (2.3)

Rearranging, we find that after one second has passed as measured in the rest frame of the clock, t' seconds have passed as viewed from the frame F' in which the clock is moving, where

    1 = \sqrt{ 1 - \frac{v^2}{c^2} } \, t' = \frac{1}{\gamma}t'.     (2.4)

We deduce that after t oscillations of the moving photon clock

    ct' = \sqrt{ c^2 t^2 + v^2 t'^2 }   \Longrightarrow   t' = \gamma t.     (2.5)

As \gamma \geq 1, the time measured on a moving clock has slowed, because the same physical process, namely the propagation of the light signal, has taken longer. This derivation of time dilation is only a toy model, as we assumed we could instantaneously know when the photon on the moving clock had completed its oscillation. In practice the observer would sit at the origin of frame F' and record measurements from there; information would take time to be transported back to their frame's origin, and a second property of special relativity would need to be considered, that of length contraction.
Let us consider a second toy model that will indicate length contraction as a consequence of the postulates of special relativity. Suppose we construct a contraption, consisting of a straight rigid rod with a perfect mirror attached to one end (as drawn below), whose rest length is l. We will aim to measure its length using a photon, whose arrival and departure times we will suppose we can measure accurately. The experiment will involve the photon traversing the length of the rod, being reflected by the perfect mirror and returning to its starting point. When conducted at rest, the photon returns to its starting point in time t_1 + t_2 = \frac{2l}{c}, where t_1 is the time to go to the mirror and t_2 the time to come back, so in fact t_1 = t_2. Now we will change frames, so that in F' the contraption is seen to be moving with speed v in the positive x-direction (left-to-right horizontally across the page as drawn below), and repeat the experiment.

    [Figure: a photon of speed c incident on the contraption, with the perfect mirror at the far end, all moving at speed v.]

Now we know that on the first leg of the journey the photon will take a longer time to reach the mirror, as the mirror is travelling away from the photon. However on the return leg the photon's starting point at the other end of our contraption is moving towards the photon. So we may wonder whether the total journey time for the photon has changed overall. We compute the time taken for each of the two legs. In the moving


frame

    ct_1' = l' + vt_1'   \Longrightarrow   t_1' = \frac{l'}{c - v},     (2.6)
    ct_2' = l' - vt_2'   \Longrightarrow   t_2' = \frac{l'}{c + v},     (2.7)

where l' is the length that the moving observer sees. So the total time taken for the photon to traverse twice the contraption's length, when it is moving at speed v, is

    t_1' + t_2' = \frac{l'}{c - v} + \frac{l'}{c + v} = \frac{2l'c}{c^2 - v^2} = \frac{2l'}{c}\gamma^2.     (2.8)

On the other hand, using the Lorentz transformations for time between frames, we have that

    l = \frac{c}{2}(t_1 + t_2) = \frac{c}{2}\,\frac{t_1' + t_2'}{\gamma} = \gamma l'.     (2.9)

So the length that the moving observer will see is l' = l/\gamma. As \gamma \geq 1, l' \leq l. Thus the length appears to have contracted in the moving frame.
Let us complete this thought experiment by bringing together time dilation and length contraction to find the Lorentz transformations given in equation (2.1). Consider an event occurring in the stationary frame at the spacetime point (t, x).¹ The event is the arrival of a photon having started at the origin at t = 0, i.e. x = ct. Observing the same motion of the photon in the moving frame we deduce (as for the first leg in the thought experiment used to derive length contraction):

    x' + vt' = ct'   \Longrightarrow   x' = (c - v)t'     (2.10)

Using the time dilation t' = \gamma t gives

    x' = \gamma(ct - vt) = \gamma(x - vt)     (2.11)

since x = ct. As the speed of light is unchanged in either frame we have \frac{x}{t} = \frac{x'}{t'}, and using equation (2.11) we have

    t' = x'\frac{t}{x} = \gamma(x - vt)\frac{t}{x} = \gamma\Big( t - \frac{vt^2}{x} \Big) = \gamma\Big( t - \frac{vx}{c^2} \Big)     (2.12)

where we have used t = \frac{x}{c}, which is valid for photon motion. Thus we have arrived at the Lorentz transformations of equation (2.1).

These simple thought experiments changed the world, and demonstrate the possibility for thought alone to outstrip intuition and experiment.
Problem 2.1.1. The Lagrangian of a relativistic particle with mass m and charge e, coupled to an electromagnetic field, is

    L = -\frac{mc^2}{\gamma} - e\phi(x, t) + \sum_i eA_i(x, t)\dot{x}^i

where x^i are the coordinates of the particle with i = 1, 2, 3, \gamma = (1 - \frac{\dot{x}^2}{c^2})^{-\frac{1}{2}}, \dot{x}^i is the time derivative of the coordinate x^i, \phi(x, t) is the electric scalar potential and A(x, t) is the magnetic vector potential.

¹We suppress the y and z coordinates as they are unchanged for a Lorentz transformation in the x-direction only.


(a.) Show that the equations of motion may be written in vector form as

    \frac{d}{dt}( m\gamma\dot{x} ) = -e\frac{\partial A}{\partial t} - e\nabla\phi + e\,\dot{x} \times (\nabla \times A).

(b.) Find the Hamiltonian of the system.

(c.) Show that the rest energy of the system (i.e. when p = 0) is

    mc^2 + \frac{1}{2m}e^2A^2 + e\phi + O\Big( \frac{1}{c^2} \Big).

2.1.1 The Lorentz Group and the Minkowski Inner Product

As we will see in the chapter on group theory, the Lorentz transformations form a group, denoted O(1,3). The subgroup of proper Lorentz transformations has determinant one and is denoted SO(1,3). When the Lorentz transformations are combined with the translations in space and time, the larger group formed is called the Poincaré group. It is the relativistic analogue of the Galilean group, which maps between inertial frames in Newtonian mechanics.² The Lorentz group O(1,3) is defined by

    O(1,3) \equiv \{ \Lambda \in GL(4, \mathbb{R}) \mid \Lambda^T \eta \Lambda = \eta \},   \eta \equiv \text{diag}(1, -1, -1, -1)

where GL(4, \mathbb{R}) is the set of invertible four-by-four matrices whose entries are elements of \mathbb{R}, \Lambda^T is the transpose of the matrix \Lambda, and \eta, the Minkowski metric, is a four-by-four matrix whose only non-zero elements lie on the diagonal, given in full matrix notation by

    \eta = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix}     (2.13)
It is not yet obvious either that the Lorentz transformations form a group, or that the definition of O(1,3) encodes the Lorentz transformations as given in section 2.1. We will wait until we encounter the definition of a group before checking the first assertion. The group SO(1,3) itself is the rotation group of Minkowski space: the numbers (1,3) indicate the signature of the spacetime and correspond to a spacetime with one timelike coordinate and three spatial coordinates, i.e. R^{1,3}. Rather more mathematically, the matrix \eta defines the signature of the Minkowski metric,³ which is preserved by the Lorentz transformations. It is the insightful observation that the Lorentz transformations leave invariant the Minkowski inner product between two four-vectors that will give the first hint that Lorentz transformations are related to the definition of O(1,3). The equivalent
²The Galilean group consists of 10 transformations: 3 space rotations, 3 space translations, 3 Galilean velocity boosts v \to v + u and one time translation.
³We commence the abuse of our familiar mathematical definitions here, as the Minkowski metric is not positive-definite as is implied by the definition of a metric; similarly the Minkowski inner product is also not positive-definite. But the constructions of both Minkowski inner product and Minkowski metric are close enough to the standard definitions that the misnomers have remained, and the lack of vocabulary will not confuse our work. Properly, Minkowski space is a pseudo-Riemannian manifold, in contrast to Euclidean space equipped with the standard metric, which is a Riemannian manifold.


statement in Euclidean space R^3 is that rotations leave distances unchanged. The inner product on R^{1,3} is defined between any two four-vectors

    v = \begin{pmatrix} v^0 \\ v^1 \\ v^2 \\ v^3 \end{pmatrix}   and   w = \begin{pmatrix} w^0 \\ w^1 \\ w^2 \\ w^3 \end{pmatrix}     (2.14)

in R^{1,3} by

    \langle v, w \rangle \equiv v^T \eta w     (2.15)
    = (v^0, v^1, v^2, v^3) \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix} \begin{pmatrix} w^0 \\ w^1 \\ w^2 \\ w^3 \end{pmatrix}     (2.16)
    = v^0 w^0 - v^1 w^1 - v^2 w^2 - v^3 w^3.     (2.17)

Now we can see clearly that the Minkowski inner product \langle v, w \rangle is not positive for all vectors v and w.

Problem 2.1.2. Show that under a Lorentz transformation x^2 \equiv \langle x, x \rangle is invariant, where x^0 = ct, x^1 = x, x^2 = y and x^3 = z.

It is worthwhile keeping the comparison with R^3 in mind. The equivalent group would be SO(3), whose elements are the rotations in three-dimensional space; the inner product on the space is defined using the identity matrix I, whose diagonal entries are all one and whose off-diagonal entries are zero. The Euclidean inner product on R^3 between two vectors x and y is x^T I y \equiv x^1y^1 + x^2y^2 + x^3y^3. The vector length squared x^2 = x^T I x \equiv x \cdot x is positive-definite when x \neq 0. The rotation of a vector leaves invariant the length of any vector in the space, or in other words leaves the inner product invariant. In the comparison with Lorentz transformations in Minkowski space, the crucial difference is that the metric is no longer positive-definite, and hence four-vectors fall into one of three classes:

    \langle v, v \rangle > 0:   v is called timelike,
    \langle v, v \rangle = 0:   v is called lightlike or null,     (2.18)
    \langle v, v \rangle < 0:   v is called spacelike.

Consider the subspace of R^{1,3} consisting of the x^0 and the x^1 axes. Vectors in this two-dimensional subspace are labelled by points which lie in one of, or at the meeting points of, the four sectors indicated below:

    [Figure: the (x^1, x^0) plane divided into four sectors by the lines x^0 = \pm x^1.]

Let

    v = \begin{pmatrix} v^0 \\ v^1 \\ 0 \\ 0 \end{pmatrix}     (2.19)

be an arbitrary vector in R^{1,3} lying entirely within R^{1,1}, due to the zeroes in the third and fourth components. So

    \langle v, v \rangle = (v^0)^2 - (v^1)^2     (2.20)

and hence if

    |v^0| > |v^1|:   v is timelike,
    |v^0| = |v^1|:   v is lightlike or null,     (2.21)
    |v^0| < |v^1|:   v is spacelike.

In relativity, Minkowski space - R^{1,3} equipped with the Minkowski metric \eta - is used to model spacetime. Spacetime, which we have taken for granted so far, has a local basis of coordinates which are associated with time t and the Cartesian coordinates (x, y, z) by

    x^0 = ct,   x^1 = x,   x^2 = y   and   x^3 = z     (2.22)

where (x^0, x^1, x^2, x^3) are the components of a four-vector x, and c is the speed of light - a useful constant that ensures that the dimensional units of x^0 are metres, the same as x^1, x^2 and x^3.

If we plot the graph of a one-dimensional (here x^1) motion of a particle against x^0 = ct, the resulting curve is called the worldline of the particle. We measure the


position x^1 of the particle at a sequence of times, and on plotting we might find a graph that looks like:

    [Figure: a worldline plotted in the (x^1, ct) plane.]
What is the gradient of the worldline?


Gradient =

(ct)
c
= 1
1
(x )
v

(2.23)

where v 1 is the speed of the particle in the x1 direction. Hence if the particle moves
at the speed of light, c, then the gradient of the worldline is 1. In this case, when
x1 = v 1 t = ct (and recalling the particle is only moving in the x1 direction) then
x2 = (x0 )2 (x1 )2 = (ct)2 (x1 )2 = 0

(2.24)

so x is a lightlike or null vector. If the gradient of the worldline is greater than one then
v 1 < c and x is timelike, otherwise if the gradient is less than one then v 1 > c and x
is a spacelike vector. One of the consequences of the special theory of relativity is that
objects cannot cross the lightspeed barrier and objects with non-zero rest-mass cannot
be accelerated to the speed of light.
Problem 2.1.3. Compute the transformation of the space-time coordinates given by two consecutive Lorentz boosts along the x-axis, the first with speed v and the second with speed u.

Problem 2.1.4. Compare your answer to problem 2.1.3 to the single Lorentz transformation given by \Lambda(u \oplus v), where \oplus denotes the relativistic addition of velocities. Hence show that

    u \oplus v = \frac{u + v}{1 + \frac{uv}{c^2}}.
The spacetime at each point is split into four pieces. In the sketch above, the set of null vectors forms the boundaries of the light-cone at the origin. Given any arbitrary point p in spacetime, the set of vectors x - p are all either timelike, spacelike or null. In the diagram above this would correspond to shifting the origin to the point p, with spacetime again split into four pieces and their boundaries. The points which are connected to p by a timelike vector lie in the future or past lightcone of p, those connected by a null vector lie on the surface of the lightcone of p, and those connected by a spacelike vector to p are outside the lightcone. As nothing may cross the lightspeed barrier, any point in spacetime can only exchange information with other points in spacetime which lie within or on its past or future lightcone.

In the two-dimensional spacetime that we have sketched it would be proper to refer to the forward or past light-triangle. The extension to four-dimensional spacetime is not easy to visualise. First consider extending the picture to a three-dimensional spacetime: add a second spatial axis x^2; as no spatial direction is singled out (there is a symmetry in the two spatial coordinates), the light-triangle of two dimensions extends by rotating the light-triangle around the temporal axis into the x^2 direction.⁴ Rotating the light-triangle through three dimensions gives the light-cone. The full picture for four-dimensional spacetime (being four-dimensional) is not possible to visualise, and we refer still to the light-cone. However it is useful to be cautious when considering a drawing of a light cone and to understand which dimensions (and how many) it really represents; e.g. a light-cone in four dimensions could be indicated by drawing a cone in three dimensions with the implicit understanding that each point in the cone represents a two-dimensional space whose drawing has been suppressed.

In all dimensions the lightcone at a point p is traced out by all the lightlike vectors connected to p. No spacelike-separated points can exchange a signal, since the message would have to travel at a speed exceeding that of light.
We finish this section by making an observation that will make the connection between the definition of O(1,3) and the Lorentz transformations explicit, but which will be most usefully digested a second time after having read through the group theory chapter. Consider again the Lorentz boost transformation shown in equation (2.1). By making the substitution \gamma = \cosh\theta the transformations are re-written in a way that looks a little like a rotation; it is in fact a hyperbolic rotation. We note that \cosh^2\theta - \sinh^2\theta = 1 = \gamma^2 - \sinh^2\theta, i.e. \sinh^2\theta = \gamma^2 - 1, therefore we have the useful relation

    \tanh\theta = \frac{1}{\gamma}(\gamma^2 - 1)^{\frac{1}{2}} = \Big( 1 - \frac{1}{\gamma^2} \Big)^{\frac{1}{2}} = \Big( 1 - \Big( 1 - \frac{v^2}{c^2} \Big) \Big)^{\frac{1}{2}} = \frac{v}{c}.     (2.25)
Hence we can rewrite the Lorentz boost in (2.1) as

    ct' = c\gamma\Big( t - \frac{x}{c}\tanh\theta \Big) = ct\cosh\theta - x\sinh\theta     (2.26)
    x' = \gamma( x - ct\tanh\theta ) = x\cosh\theta - ct\sinh\theta     (2.27)
    y' = y     (2.28)
    z' = z     (2.29)

or in matrix form as

    \begin{pmatrix} ct' \\ x' \\ y' \\ z' \end{pmatrix} = \begin{pmatrix} \cosh\theta & -\sinh\theta & 0 & 0 \\ -\sinh\theta & \cosh\theta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} ct \\ x \\ y \\ z \end{pmatrix} = \Lambda(\theta)x     (2.30)

⁴By taking a slice of the three-dimensional graph through ct and perpendicular to the (x^1, x^2) plane, the two-dimensional light-triangle structure reappears.


where \Lambda(\theta) is the four-by-four matrix indicated above, and is a group element of SO(1,3). The Lorentz boost is a hyperbolic rotation of x into ct and vice-versa.

Problem 2.1.5. Show that \Lambda(\theta) \in SO(1,3).

2.2 Component Notation

We have introduced the concept of the position four-vector implicitly, as the extension of the usual three-vector in Cartesian coordinates to include a temporal coordinate. The position four-vector is a particular four-vector x which specifies a unique position in space-time:

    x = \begin{pmatrix} ct \\ x \\ y \\ z \end{pmatrix}.     (2.31)

The components of the position four-vector are denoted x^\mu, where \mu \in \{0, 1, 2, 3\}, such that

    x^0 = ct,   x^1 = x,   x^2 = y   and   x^3 = z.     (2.32)

It is frequently more useful to work with the components x^\mu of the vector x rather than the abstract vector x or the column vector in full. Consequently we will now develop a formalism for denoting vectors, their transposes, matrices, matrix multiplication and matrix action on vectors, all in terms of component notation.

The notation x^\mu, with a single raised index, we have defined to mean the entries in a single-column vector; hence the raised index denotes a row number (the components of a vector are labelled by their row). We have already met the Minkowski inner product, which may be used to find the length-squared of a four-vector: it maps a pair of vectors to a single scalar. Now a scalar object needs no index notation - it is specified by a single number, i.e.

    \langle x, x \rangle = x^2 = (x^0)^2 - (x^1)^2 - (x^2)^2 - (x^3)^2.     (2.33)

On the right-hand side we see the distribution of the components of the vector. Our aim is to develop a notation that is useful, intuitive and carries some meaning within it. A good notation will improve our computation. We propose to develop the notation so that

    x^2 = x_\mu x^\mu     (2.34)

where x_\mu is a row vector, although not always the simple transpose of x. To do this we will develop matrix multiplication and the Einstein summation convention in the component notation.

2.2.1 Matrices and Matrix Multiplication

Let us think gently about index notation and develop our component notation. Let A be an invertible four-by-four matrix with real entries (i.e. A \in GL(4, \mathbb{R})). The matrix may multiply the four-vector x to give a new four-vector x'. This means that in component notation matrix multiplication takes the component x^\mu to x'^\mu, i.e. x' = Ax. In terms of components we write the matrix entry for the \mu'th row and \nu'th column as A^\mu{}_\nu, and matrix multiplication is written as

    x'^\mu = \sum_\nu A^\mu{}_\nu x^\nu.     (2.35)

This notation for matrix multiplication is consistent with our notation for a column vector x^\mu and row vector x_\mu: raised indices indicate a row number while lowered indices indicate a column number. Hence the summation above is a sum of a product of entries in a row of the matrix and the column of the vector - as the summation index \nu is a column label (the matrix row \mu stays constant in the sum). The special feature we have developed here is to distinguish the meaning of a raised and a lowered index; otherwise the expressions above are very familiar.

In more involved computations it becomes onerous to write out multiple summation symbols. So we adopt in most cases the Einstein summation convention, so called because it was notably adopted by Einstein in a 1916 paper on general relativity. As can be seen above, the summation occurs over a pair of repeated indices, so it is not necessary to use the summation sign. Instead the Einstein summation convention assumes that there is an implicit summation over any pair of repeated indices in an expression. Hence the matrix multiplication written above becomes

    x'^\mu = A^\mu{}_\nu x^\nu     (2.36)

when the Einstein summation convention is assumed. In four dimensions this means explicitly

    x'^\mu = A^\mu{}_\nu x^\nu = A^\mu{}_0 x^0 + A^\mu{}_1 x^1 + A^\mu{}_2 x^2 + A^\mu{}_3 x^3.     (2.37)
The summed-over indices no longer play any role on the right-hand side, and the index structure matches on either side of the expression: on both sides there is one free raised index, indicating that we have the components of a vector on both sides of the equality. The repeated pairs of indices, which are summed over and missing from the final expression, are called dummy indices. It does not matter which symbol is used to denote a pair of indices to be summed over, as they vanish in the final expression; that is,

    x'^\mu = A^\mu{}_\nu x^\nu = A^\mu{}_\rho x^\rho = A^\mu{}_\sigma x^\sigma = A^\mu{}_0 x^0 + A^\mu{}_1 x^1 + A^\mu{}_2 x^2 + A^\mu{}_3 x^3.     (2.38)

The index notation we have adopted is useful, as the free indices are matched on either side, as are the positions of the indices.
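The summation convention is exactly what numpy's einsum implements; the following minimal sketch (ours, not from the notes) evaluates (2.36) numerically and checks it against ordinary matrix multiplication.

    import numpy as np

    A = np.arange(16.0).reshape(4, 4)     # a matrix A^mu_nu (arbitrary example entries)
    x = np.array([1.0, 2.0, 3.0, 4.0])    # components x^nu

    # x'^mu = A^mu_nu x^nu: 'mn,n->m' sums over the repeated dummy index nu.
    x_prime = np.einsum('mn,n->m', A, x)

    assert np.allclose(x_prime, A @ x)    # agrees with matrix multiplication
    print(x_prime)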
So far so good; now we will run into an oddity in our conventions: the Minkowski metric does not have the index structure of a matrix in our conventions, even though we wrote \eta as a matrix previously! Recall that we aimed to be able to write x^2 = x_\mu x^\mu. Now we understand the meaning of the right-hand side; applying the Einstein summation convention we have

    x_\mu x^\mu = x_0 x^0 + x_1 x^1 + x_2 x^2 + x_3 x^3     (2.39)

but we have seen already that the Minkowski inner product is

    \langle x, x \rangle = (x^0)^2 - (x^1)^2 - (x^2)^2 - (x^3)^2     (2.40)

so we gather that x_0 = x^0, x_1 = -x^1, x_2 = -x^2 and x_3 = -x^3, and, as we hinted, x_\mu is not simply the components of the transpose of x. It is the Minkowski metric on Minkowski space that we may use to lower indices on vectors:

    x_\mu \equiv \eta_{\mu\nu} x^\nu.     (2.41)

This is the analogue of the vector transpose in Euclidean space (where the natural inner product is the identity matrix \delta_{ij}, and the transpose does not change the sign of the components, as x_i = \delta_{ij}x^j). Now we note the flaw in our notation: as \eta can lower indices, we could form an object A_{\mu\nu} = \eta_{\mu\rho}A^\rho{}_\nu, which is obviously related to the matrix A^\rho{}_\nu. So when we write \eta as a matrix

    \eta = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix}     (2.42)

we are forced to defy our own conventions and understand \eta_{\mu\nu} to mean the entry in the \mu'th row and \nu'th column of the matrix above.

Now we can write the Minkowski inner product in component notation:

    x_\mu x^\mu = \eta_{\mu\nu} x^\nu x^\mu = (x^0)^2 - (x^1)^2 - (x^2)^2 - (x^3)^2 = \langle x, x \rangle.     (2.43)

The transpose has generalised to the raising and lowering of indices using the Minkowski metric: (x^\mu)^T = x_\mu = \eta_{\mu\nu}x^\nu. To raise indices we use the inverse Minkowski metric, denoted \eta^{\mu\nu} and defined by

    \eta^{\mu\rho}\eta_{\rho\nu} = \delta^\mu{}_\nu     (2.44)

which is the component form of \eta^{-1}\eta = I. From the matrix form of \eta we note that \eta^{-1} = \eta. We can raise indices with the inverse Minkowski metric: x^\mu = \eta^{\mu\nu}x_\nu.

Exercise. Show that the matrix multiplication \Lambda^T\eta\Lambda = \eta, used to define the matrices of O(1,3), may be written in component notation as \Lambda^\rho{}_\mu \eta_{\rho\sigma} \Lambda^\sigma{}_\nu = \eta_{\mu\nu}.

Solution.

    (\Lambda^T\eta\Lambda)_{\mu\nu} = (\Lambda^T)_\mu{}^\rho \eta_{\rho\sigma} \Lambda^\sigma{}_\nu = \Lambda^\rho{}_\mu \eta_{\rho\sigma} \Lambda^\sigma{}_\nu = \eta_{\mu\nu}

where we have used the Minkowski metric to take the matrix transpose.
Since the components of vectors and matrices are numbers, the order of terms in products is irrelevant in component notation, e.g.

    \eta_{\mu\nu} x^\nu = x^\nu \eta_{\mu\nu}

or

    A^\mu{}_\nu x^\nu = x^\nu A^\mu{}_\nu.

We are also free to raise and lower simultaneously pairs of dummy indices:

    x_\mu x^\mu = x^\mu x_\mu = \eta_{\mu\nu} x^\mu x^\nu = \eta^{\mu\nu} x_\mu x_\nu.

So we have many ways to write the same expression, but the key points for us are the things that do not vary: the objects involved in the expression (x and A below) and the free indices (although the dummy indices may be redistributed):

    x^T \eta A x = x^\mu \eta_{\mu\nu} A^\nu{}_\rho x^\rho
                 = x_\nu A^\nu{}_\rho x^\rho
                 = A^\nu{}_\rho x_\nu x^\rho
                 = A_{\mu\rho} x^\mu x^\rho
                 = A^\mu{}_\nu x_\mu x^\nu
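A minimal numerical illustration of raising and lowering indices (ours, not from the notes):

    import numpy as np

    eta = np.diag([1.0, -1.0, -1.0, -1.0])     # eta_{mu nu}, as in (2.42)

    x_up = np.array([2.0, 1.0, 0.0, 0.0])      # components x^mu
    x_down = np.einsum('mn,n->m', eta, x_up)   # x_mu = eta_{mu nu} x^nu, (2.41)

    print(x_down)                              # [ 2. -1. -0. -0.]
    print(np.einsum('m,m->', x_down, x_up))    # x_mu x^mu = 4 - 1 = 3.0

    # The inverse metric raises indices; numerically eta^{-1} equals eta here.
    eta_inv = np.linalg.inv(eta)
    assert np.allclose(np.einsum('mn,n->m', eta_inv, x_down), x_up)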

2.2.2 Common Four-Vectors

We have seen that the Minkowski inner product gives a Lorentz-invariant quantity for any pair of four-vectors. We can make use of this Lorentz invariance to construct new but familiar four-vectors. Consider two events, one occurring at the four-vector x and another at y, where

    x = \begin{pmatrix} ct_1 \\ x_1 \\ y_1 \\ z_1 \end{pmatrix}   and   y = \begin{pmatrix} ct_2 \\ x_2 \\ y_2 \\ z_2 \end{pmatrix}.     (2.45)

In Newtonian physics the difference in time \Delta t \equiv |t_2 - t_1| at which the two events occurred, and the distance in space between the locations of the two events, \Delta r \equiv \sqrt{\sum_{i=1}^{3}|x^i - y^i|^2}, are both invariants of the Galilean transformations. As we have seen, under the Lorentz transformations a new single invariant emerges: |x - y|^2 = c^2\tau_{xy}^2, where \tau_{xy} is called the proper time between the two events x and y, i.e.

    c^2\tau_{xy}^2 = c^2(t_2 - t_1)^2 - (x_2 - x_1)^2 - (y_2 - y_1)^2 - (z_2 - z_1)^2.     (2.46)

Every point x in space-time has a proper time associated to it by

    c^2\tau_x^2 = c^2t_1^2 - x_1^2 - y_1^2 - z_1^2 = x_\mu x^\mu.     (2.47)

We have already shown in problem 2.1.2 that this is invariant under the Lorentz transformations, and one can show that \tau_{xy} is also invariant, as c^2\tau_{xy}^2 = \langle x - y, x - y \rangle = (x - y)_\mu(x - y)^\mu. Now as \langle x - y, x - y \rangle = x^2 - 2\langle x, y \rangle + y^2 is invariant, we can conclude that \langle x, y \rangle is also an invariant, since x^2 and y^2 are invariant under the Lorentz transformations.

Problem 2.2.1. Show explicitly that \langle x, y \rangle = x_\mu y^\mu is invariant under the Lorentz group.


These quantities are all called Lorentz-invariant quantities. You will notice that they do not have any free indices for the Lorentz group to act on.

All four-vectors transform in the same way as the position four-vector x under a Lorentz transformation (just as 3D vectors all transform in the same way under SO(3) rotations). We can find other physically relevant four-vectors by combining the position four-vector x with Lorentz-invariant quantities. For example the Lorentz four-velocity u is defined using the proper time \tau, which is Lorentz invariant, rather than time, which is not:

    u = \frac{dx}{d\tau} = \frac{dt}{d\tau}\frac{dx}{dt} = \frac{dt}{d\tau}\begin{pmatrix} c \\ u^1 \\ u^2 \\ u^3 \end{pmatrix}     (2.48)

where (u^1, u^2, u^3) is the usual Newtonian velocity vector in R^3. Let us compute \frac{d\tau}{dt}, starting
=

1p 2 2
c t x2 y 2 z 2
c

(2.49)

then
1
d
= 2 (2c2 t 2xu1 2yu2 2zu3 )
dt
2c
2
1
3
(t xu
yu
zu
)
c2
c2
c2
=

u2
t(1 c2 )
=

= 2

(2.50)

= 1
with u^2 = (u^1)^2 + (u^2)^2 + (u^3)^2 and \gamma \equiv 1/\sqrt{1 - \frac{u^2}{c^2}}. Hence the four-velocity is given by

    u = \gamma\begin{pmatrix} c \\ u^1 \\ u^2 \\ u^3 \end{pmatrix}.     (2.51)

We can check that u^2 \equiv u_\mu u^\mu is invariant:

    u_\mu u^\mu = \gamma^2(c^2 - u^2) = c^2\gamma^2\Big( 1 - \frac{u^2}{c^2} \Big) = c^2.     (2.52)

The four-momentum is defined as p = mu, where m is the rest mass. The spatial part of the four-momentum is the usual Newtonian momentum p_N multiplied by \gamma, while the zeroth component is proportional to the energy:

    p^0 = \frac{E}{c} = \gamma mc.     (2.53)

The invariant quantity associated to p is

    p_\mu p^\mu = \Big( \frac{E}{c} \Big)^2 - \gamma^2 p_N^2 = m^2c^2.     (2.54)


Rearranging gives

    E = \big( m^2c^4 + \gamma^2 p_N^2 c^2 \big)^{\frac{1}{2}}     (2.55)

which is the relativistic version of E = \frac{1}{2}mu^2, and you could expand the above expression to find the usual kinetic energy term together with other, less familiar terms. For a particle at rest we have \gamma = 1 and p_N = 0, hence we find a particle's rest energy E_0 is

    E_0 = mc^2.     (2.56)
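To exhibit the kinetic energy term (a short expansion, not spelled out above), use \gamma p_N = \gamma mu in (2.55), so that

    E = mc^2\Big( 1 + \frac{\gamma^2u^2}{c^2} \Big)^{\frac{1}{2}} = \gamma mc^2 = mc^2\Big( 1 - \frac{u^2}{c^2} \Big)^{-\frac{1}{2}} = mc^2 + \frac{1}{2}mu^2 + \frac{3}{8}\frac{mu^4}{c^2} + \ldots

i.e. the rest energy, the Newtonian kinetic energy, and an infinite series of relativistic corrections.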

2.2.3 Classical Field Theory

Classical Field Theory

In the first chapter we studied Lagrangians and Hamiltonians of systems with a finite
(or at least discrete number of degrees of freedom) which we labelled by qi (t). But in
modern physics, starting with Maxwell (did we mention yet that he was at Kings probably), one thinks that space is filled with fields that the move in time. A field is
a function (x, y, z, t) that takes values in some space (usually a real or complex vector
space). It may also carry a Lorentz index. The field is all around us and is allowed to
fluctuate according some dynamical rule. The prime example is the electromagnetic field
A that we will discuss in detail next. One can think of a field a continuous collection
of degrees of freedom qi (t) - one at each spacetime point. Then roughly speaking
Z
X
d3 x
(2.57)
i

The action principle based on a Lagrangian is now lifted to one based on a Lagrangiandensity:
Z
S = d4 xL(I , I )
(2.58)
which depends on the fields I and their first derivatives along any of the spacetime
dimensions. Here I is an index like i was that allows us to consider theories with more
than one field In a relativistic theory we require that L is Lorentz invariant. If so the
equation of motion that come from extemizing the action will be Lorentz covariant.
Problem 2.2.2. Show that the principle of least action leads to the Euler-Lagrange
equations


L
L

= 0.
(2.59)

I
I
To do this one must assume that the fields all vanish sufficiently quickly at spatial infinity.
We can again consider infinitessimal symmetries of the form
I 0I = I + I
I 0I = I +  I

(2.60)

where I is allowed to depend on the fields. A Lagrangian density is invariant if


L(0I , 0I ) = L(I , I ) + K

(2.61)

where K is some expression involving the fields. In this case the conserved Noether
charge becomes a conserved current J defined by
X L
J =
I K
(2.62)
I
I

34

CHAPTER 2. SPECIAL RELATIVITY AND COMPONENT NOTATION

Problem 2.2.3. Show that, if I 0I is a symmetry and the equation of motion are
satisfied then J is conserved in the sense that
J = 0

Given a conserved current we can construct a conserved charge by taking


Z
Q = d3 xJ 0

(2.63)

(2.64)

It then follows that


Z

d3 x0 J 0

d3 x J

d2 xJ dS

0 Q =
=
=
=0

(2.65)

where a bold face indicates the spatial components of a vector and dS is the volume
element of the 2-sphere at spatial infinity. To obtain the final line we assume that the
fields all vanish at infinity.
One can think of the Lagrangian as
Z
L = d3 xL
(2.66)
And similarly one can consider a Hamiltonian density
X
H=
I 0 I L

(2.67)

where
I =
so that the Hamiltonian is

Z
H=

L
0 I
d3 xH

(2.68)

(2.69)

Problem 2.2.4. Consider the action for a massless, real scalar field with a quartic
potential in Minkowksi space-time:


Z
Z
1
4
4

4
S = d xL = d x
2
where R is a constant. Under a conformal transformation the field transforms as
0 +x + where is the infinitesimal parameter for the transformation.
(d.) Show that the variatation of the Lagrangian under the conformal transformation
is given by (upto order 2 ):
L L + (x L).
(e.) Hence show that there is an associated conserved quantity
j (x + ) x L.
(f.) Find the equation of motion for and use this to show explicitly that j = 0.

2.2. COMPONENT NOTATION.

2.2.4

35

Maxwells Equations.

The first clue that there was a democracy between time and space came with the discovery of Maxwells equations. James Clerk Maxwells work that led to his equations began
in his 1861 paper On lines of physical force which was written while he was at Kings
College London (1860-1865). The equations include an invariant speed of propagation
for electromagnetic waves c, the speed of light, which is one of the two assumptions in
Einsteins special theory of relativity. Consequently they have an elegant formulation
when written in terms of Lorentz tensors.
Half of Maxwells equations can be solved by introducing an electrostatic potential
and vector magnetic potential A, both of which depend on space and time. One then
writes the electric and magnetic fields as:

E=A
B=A .

(2.70)

Note that and A are not uniquely determined by E and B. Given any pair and A
we can also take
0 =
A0 = A .

(2.71)

and one finds the same E and B. Here is any function of space and time. Such a
symmetry is called a gauge symmetry. We can put these together to form a 4-vector:
A = (, A) .

(2.72)

A0 = A .

(2.73)

In this case the gauge symmetry is

The fact that one may arbitrarily shift the potential A in this way without changing L
is an example of a gauge symmetry. These symmetries are a pivotal part of the standard
model of particle physics and this U (1) gauge symmetry of electromagnetism is the
prototypical example of gauge symmetry.
We want to derive Maxwells theory of electromagnetism from a relativistic invariant
action S given by
Z
S=

d4 x L

(2.74)

where L is call a Lagrangian density. We have two requirements on L. Firstly it needs to


be a Lorentz scalar. This means that all , indices must be appropriately contracted.
Secondly it should be invariant under (2.73).
To start we note that
F = A A
(2.75)
is invariant under (2.73).
Problem 2.2.5. Show that the transformation
A A
where is an arbitrary function of x leaves the F invariant.

(2.76)

36

CHAPTER 2. SPECIAL RELATIVITY AND COMPONENT NOTATION

Thus we can construct our action using Lorentz invariant combinations of F and
. Let us expand in powers of F :
1
L = F F F + . . .
4

(2.77)

The first term is zero since is symmetric but F is anti-symmetric. So we take


1
L = F F
4

(2.78)

We would like to use the action above to find the equations of motion but we are
immediately at a loss if we attempt to write Lagranges equations. The problem is we
have put space and time on an equal footing in relativity, and in the above action, while
in Lagrangian mechanics the temporal derivative plays a special role and is distinguished
from the spatial derivative. Lagranges equations are not covariant. We will return to
this problem and address how to upgrade Lagranges equations to space-time. Here we
will vary the fields A in the action directly and read off the equation of motion. To
simplify the expressions we begin by writing the variation of the Lagrangian:
1
1
A L = A (F )F F A (F )
4
4
1

= A (F )F
2

(2.79)
(2.80)

Now under a variation of A the field strength F transforms as


F (A + A ) (A + A ) F + A (F )

(2.81)

A (F ) = (A ) (A ).

(2.82)

so we read off

So from the variation of the Lagrangian we have:


1
1
A L = A (F )F F A (F )
4
4

1
= (A ) (A ) F
2
= (A )F

(2.83)
(2.84)
(2.85)

where we have used the antisymmetry of F = F and a relabelling of the dummy


indices in the second term of the second line to arrive at the final expression. To take
the derivative off of A we use the same technique as when one integrates by parts
(although here there is no integral, but when we put the Lagrangian variation back into
the action there will be) namely we rewrite the expression using the observation that
(A F ) = (A )F + A (F )

(2.86)

A L = (A F ) + A (F ).

(2.87)

to give

Returning to the action we have




Z
A S = d4 x (A F ) + A (F ) .

(2.88)

2.2. COMPONENT NOTATION.

37

The first term we can integrate diretl - it is called a boundary term as it is a total
derivative - but it vanishes as the term A vanishes at the fixed points of the path (in
field space) we are varying leaving us with
Z
0 = A S = d4 xA (F ).
(2.89)
Hence the field equation is
F = 0.

(2.90)

We could consider adding in a source term. Suppose that we have some background
electromagnetic current j . Then we could add to the Lagrangian the term
Lsource = j A .

(2.91)

Note that this is not gauge invariant in general but one has, under (2.73),
L0source = Lsource j
= Lsource + j (j ) .

(2.92)

The last term is a total derivative and can be dropped. Therefore the source term leads
to a gauge invariant action if j is a conserved current:
j = 0 .

(2.93)

Taking the variation of the source term in action with respect to A is easy any simply
changes the equation of motion to
F = j .

(2.94)

Note that the conservation equation also follows from the equation of motion since
j = F = 0, where again weve used the fact that the derivatives are symmetric
but F is anti-symmetric.
This is a space-time equation. If we split it up into spatial and temporal components
we can reconstruct Maxwells equations in their familiar form. To do this we introduce
the electric E and magnetic B fields in terms of components of the field strength:
F 0i = E i

and F ij = ijk B k

(2.95)

where E i and B i are the components of E and B respectively, i, j, k {1, 2, 3} and ijk
is the Levi-Civita symbol normalised such that 123 = 1. We will meet the Levi-Civita
symbol when we study tensor representations in group theory, at this point it is sufficient
to know that it has six components which take the values:
123 = 1,
213

= 1,

231 = 1,
132

312 = 1

= 1,

321

(2.96)

= 1

note that swapping of any neighbouring indices changes the sign of the Levi-Civita
symbol - the Levi-Civita symbol is an antisymmetric tensor. We will split the equation

38

CHAPTER 2. SPECIAL RELATIVITY AND COMPONENT NOTATION

of motion in equation (2.90) into its temporal part = 0 and its spatial part = i
where i {1, 2, 3}. Taking = 0 we have
0 F 00 + i F i0 = i E i = j 0

(2.97)

E = j0

(2.98)

that is

From the spatial equations ( = i) we have


1
0 F 0i + j F ji = 0 E i + j (jik B k ) = t E i ijk j (B k ) = j i
c

(2.99)

i.e.

1 E
j.
(2.100)
c t
That is all we obtain from the equation of motion, so we seem to be two equations short!
However there is an identity that is valid on the field strength simply due to its definition.
Formerly F is an exact form as it is the exterior derivative of the one-form A 5 .
Exact forms vanish when their exterior derivative, which is the antisymmetrised partial
derivative, is taken.
B=

Problem 2.2.6. Show that


3[ F] F + F + F = 0

(2.101)

The identity [ F] = 0 is called the Bianchi identity for the field strength and is a
consequence of its antisymmetric construction. However it is non-trivial and it is from
the Bianchi identity for F that the remaining two Maxwell equations emerge.
Let us consider all the non-trivial spatial and temporal components of [ F] =
0. We note that we cannot have more than one temporal index before the identity
trivialises, e.g. let = = 0 and = i then we have
0 F0i + 0 Fi0 + i F00 = 0 F0i 0 F0i = 0

(2.102)

from which we learn nothing. When we take = 0, = i and = j we have


0 Fij + i Fj0 + j F0i = 0

(2.103)

We must use the Minkowski metric to find the components F of the field strength in
terms of E and B:
Fij = i j F = ik jl F kl = F ij = ijk B k

(2.104)

F0i = 0 i F = ik F 0k = F 0i = E i .

(2.105)

Substituting these expressions into equation (2.103) gives


0 (ijk B k ) + i E j j E i = 0.

(2.106)

To reformulate this in a more familiar way we can make use of an identity on the
Levi-Civita symbol:
k
ijm ijk = 2m
.
(2.107)
5

Differential forms are a subset of the tensors whose indices are antisymmetric. They are introduced
and studied in depth in the Manifolds course.

2.2. COMPONENT NOTATION.

39

k.
Problem 2.2.7. Prove that ijm ijk = 2m

Contracting ijm with equation (2.106) gives


ijm 0 (ijk B k ) + ijm i E j ijm j E i = 20 (B m ) + ijm i E j ijm j E i
m

(2.108)

= 20 (B ) + 2ijm i E = 0
which we recognise as
1 B
.
(2.109)
c t
The final Maxwell equation comes from setting = i, = j and = k in equation
(2.101):
E=

i Fjk + j Fki + k Fij = i (jkl B l ) + j (kil B l ) + k (ijl B l ) = 0

(2.110)

Contracting this with ijk gives




jkl l
kil l
ijl l
ijk i ( B ) + j ( B ) + k ( B ) = i (2il B l ) + j (2jl B l ) + k (2kl B l )
(2.111)
= 6i B i
=0
That is,
B = 0.

(2.112)

Indeed the whole point of introducing A = (, A) was to ensure that (2.109) and
(2.112) were automatically solved. So thats it, we have recovered Maxwells theory of
electromagnetism from simple symmetry reasoning and Lorentz invariance.

2.2.5

Electromagnetic Duality

The action for electromagnetism can be rewritten in terms of E and B where it has a
very simple form. Now
F F = F0 F 0 + Fi F i

(2.113)

= F00 F 00 + F0i F 0i + Fi0 F i0 + Fij F ij

(2.114)

= 2E i E i + ijk B k ijl B l

(2.115)

= 2E i E i + 2B i B i

(2.116)

= 2E2 + 2B2 .

(2.117)

Hence,
1
(2.118)
L = (E2 B2 )
2
Some symmetry is apparent in the form of the Lagrangian and the equations of motion.
We notice (after some reflection) that if we interchange E B and B E that while
the Lagrangian changes sign, the equations of motion are unaltered. This is electromagnetic duality: an ability to swap electric fields for magnetic fields while preserving
Maxwells equations6 .
6

The eagle-eyed reader will notice that the electromagnetic duality transformation exchanges equations of motion for Bianhci identities.

40

CHAPTER 2. SPECIAL RELATIVITY AND COMPONENT NOTATION

As with the harmonic oscillator, electromagnetic duality is much more apparent in


the associated Hamiltonian which takes the form
1
H = (E2 + B2 )
2
which is itself invariant under (E, B) (B, E).

(2.119)

Chapter 3

Quantum Mechanics
Historically quantum mechanics was constructed rather than logically developed. The
mathematical procedure of quantisation was later rigorously developed by mathematicians and physicists, for example by Weyl; Kohn and Nirenberg; Becchi, Rouet, Stora
and Tyutin (BRST quantisation for quantising a field theory); Batalin and Vilkovisky
(BV field-antifield formalism) as well as many other significant contributions and research into quantisation methods continues to this day. The original development of
quantum mechanics due to Heisenberg is called the canonical quantisation and it is the
approach we will follow here.
Atomic spectra are particular to specific elements, they are the fingerprints of atomic
forensics. An atomic spectrum is produced by bathing atoms in a continuous spectrum
of electromagnetic radiation. The electrons in the atom make only discrete jumps as
the electromagnetic energy is absorbed. This can be seen in the atomic spectra by the
absence of specific frequencies in the outgoing radiation and by recalling that E = h
where E is energy, h is Plancks constant and is the frequency.
In 1925 Heisenberg was working with Born in Gottingen. He was contemplating the
atomic spectra of hydrogen but not making much headway and he developed the most
famous bout of hayfever in theoretical physics. Complaining to Born he was granted
a two-week holiday and escaped the pollen-filled inland air for the island of Helgoland.
He continued to work and there in a systematic fashion. He arranged all the known
frequencies for the spectral lines of hydrogen into an array, or matrix, of frequencies ij .
He was also able to write out matrices of numbers corresponding to the transition rates
between energy levels. Armed with this organisation of the data, but with no knowledge
of matrices, Heisenberg developed a correspondence between the harmonic oscillator
and the idea of an electron orbitting in an extremely eccentric orbit. Having arrived
at a consistent theory of observable quanitites, Heisenberg climbed a rock overlooking
the sea and watched the sun rise in a moment of triumph. Heisenbergs triumph was
short-lived as he quickly realised that his theory was based around non-commuting
variables. One can imagine his shock realising that everything worked so long as the
multiplication was non-Abelian, nevertheless Heisenberg persisted with his ideas. It was
soon pointed out to him by Born that the theory would be consistent if the variables
were matrices, to which Heisenberg replied that I do not even know what a matrix
is. The oddity that matrices were seen as an unusual mathematical formalism and not
41

42

CHAPTER 3. QUANTUM MECHANICS

a natural setting for physics played an important part in the development of quantum
mechanics. As we will see a wave equation describing the quantum theory was developed
by Schrodinger in apparent competition to Heisenbergs formulation. This was, in part,
a reaction to the appearance of matrices in the fundamental theory as well as a rejection
of the discontinuities inherent in Heisenbergs quantum mechanics. Physicists much
more readily adopted Schr
odingers wave equation which was written in the language
of differential operators with which physicists were much more familiar. In this chapter
we will consider both the Heisenberg and Schrodinger pictures and we will see the
equivalence of the two approaches.

3.1

Canonical Quantisation

We commence by recalling the structures used in classical mechanics. Consider a classical


system described by n generalised coordinates qi of mass mi subject to a potential V (qi )
and described by the Lagrangian
L=

n
X
1
i=1

mi qi2

n
X

V (qi )

(3.1)

i=1

where V (q) = V (q1 , q2 , . . . qn ). The equations of motion are:


mi qi +
The Hamiltonian is
H=

V
=0
qi
n
X

pi qi L =

i=1

Fi = mi qi .

p2i
+ V (q)
2mi

(3.2)

(3.3)

and the Hamiltonian equations make explicit that there exists a natural antisymmetric
(symplectic) structure on the phase space, the Poisson brackets:
{qi , pj } = ij

(3.4)

with all other brackets being trivial.


Canonical quantisation is the promotion of the positions qi and momenta pi to operators (which we denote with a hat):
(qi , pi ) (
qi , pi )

(3.5)

together with the promotion of the Poisson bracket to the commutator by


{A, B}

1
[A, B]
i~

(3.6)

are operators.
where A and B indicate arbitrary functions on phase space, while A and B
For example we have
[
qi , pj ] = i~ ij
(3.7)
h
where ~ 2
and h is Plancks constant. In particular the classical Hamiltonian becomes
under this promotion
n
X
X
p2i
=
H H
+
V (
qi ).
(3.8)
2mi
i=1

3.1. CANONICAL QUANTISATION

43

While the classical qi and pi collect to form vectors in phase space, the quantum operators qi and pi belong to a Hilbert space. In quantum mechanics physical observables
are represented by operators which act on the Hilbert space of quantum states. The
states include eigenstates for the operators and the corresponding eigenvalue represents
the value of a measurement. For example we might denote a position eigenstate with
eigenvalue q for the position operator q by |qi so that:
q|qi = q|qi

(3.9)

we will meet the bra-ket notation more formally later on, but it is customary to label
an eigenstate by its eigenvalue hence the eigenstate is denoted |qi here. More general
states are formed from superpositions of eigenstates e.g.
Z
X
|i = dx(x)|xi
or
|i =
i |qi i
(3.10)
i

where we have taken |xi as a continuous basis for the Hilbert space while |qi i is a discrete
basis.
If we work using the eigenfunctions of the positon operator as a basis for the Hilbert
space it is customary to refer to states in the position space. By expressing states as a
superposition of position eigenfunctions we determine an expression for the momentum
operator in the position space. For simplicity, consider a single particle state described
by a single coordinate given by = c(q)|qi, where |qi is the eigenstate of the position
operator q and q = q. The commutator relation [
q , p] = i~ fixes the momentum
operator to be

(3.11)
p = i~
q
as
[
q , p] = (
q p pq)c|qi

(3.12)

= qpc|qi pqc|qi
= i~
q

c
(qc)
|qi + i~
|qi
q
q

= i~
For many-particle systems we may take the position eigenstates as a basis for the Hilbert
space and the state and momentum operator generalise to

ci (q)|qi i

pi i~

and

.
qi

(3.13)

Note that the Hamiltonian operator in the position space becomes


=
H

X
i

3.1.1

X
~2 2
+
V (
qi ).
2mi qi2

(3.14)

The Hilbert Space and Observables.

Definition A Hilbert space H is a complex vector space equipped with an inner product
< , > satisfying:

44

CHAPTER 3. QUANTUM MECHANICS

(i.) < , >= < , >


(ii.) < , a1 1 + a2 2 >= a1 < , 1 > +a2 < , 2 >
(iii.) < , > 0

H where equality holds only if = 0.

where indicates the complex conjugate of


Note that as the inner product is linear in its second entry, it is conjugate linear in its
first entry as
< a1 1 + a2 2 , > = < , a1 1 + a2 2 >

(3.15)

= a1 < , 1 > + a2 < , 2 >


= a1 < 1 , > +a2 < 2 , >
where we have used a1 to indicate the complex-conjugate of a1 . The physical states in a
system are described by normalised vectors in the Hilbert space, i.e. those H such
that < , >= 1.
Observables are represented by Hermitian operators in H. Hermitian operators are
self-adjoint.
Definition An operator A is the adjoint operator of A if
>.
< A , >=< , A

(3.16)

From the definition it is rapidly observed that


A = A
= A + B

(A + B)
= K A
(K A)
=B
A
(AB)
If A1 exists then (A1 ) = (A )1 .
A self-adjoint operator satisfies A = A. The prototype for the adjoint is the Hermitian
conjugate of a matrix M (M T ) .
Example 1:Cn as a Hilbert Space
In a sense a Hilbert space is a generalization to infinite dimensions of simple Cn (if we
ignore lots of subtle mathematical details). The natural inner product is
< x, y > x y.

(3.17)

Let A denote a self-adjoint matrix and we will show that A = A :


>= x Ay
= (A x) y =< A x, y > .
< x, Ay

(3.18)

3.1. CANONICAL QUANTISATION

45

Example 2: L2 as a Hilbert Space


Let H = L2 (R) i.e. H < , >< and the inner product is
Z
dq (q)(q).
< , >

(3.19)

Using this inner product the momentum operator is a self-adjoint operator as




Z

dq (q) i~ (q)
< , p > =
q


ZR

dq i~
=
(q) (q)
q
R


Z

dq i~ (q) (q)
=
q
R

(3.20)

=< p , >
N.B. we have assumed that 0 and 0 at q = such that the boundary term
from the integration by parts vanishes.

3.1.2

Eigenvectors and Eigenvalues

In this section we will prove some simple properties of eigenvalues of self-adjoint operators.
Let u H be an eigenvector for the operator A with eigenvalue C such that
= u.
Au

(3.21)

The eigenvalues of a self-adjoint operator are real:


>=< u, u >= < u, u >
< u, Au

(3.22)

u >=< u, u >= < u, u >


=< Au,
hence = and R.
Eignevectors which have different eigenvalues for a self-adjoint operator are orthogonal. Let
= u
0 = 0 u0
Au
and
Au
(3.23)
where A is a self-adjoint operator and so , 0 R. Then we have
0 >=< u, 0 u0 >= 0 < u, u0 >
< u, Au
0

u >=< u, u >= < u, u >


=< Au,

(3.24)
(3.25)

Therefore,
(0 ) < u, u0 >= 0

< u, u0 >= 0

if 6= 0 .

(3.26)

Theorem 3.1.1. For every self-adjoint operator there exists a complete set of eigenvectors (i.e. a basis of the Hilbert space H).
The basis may be countable1 or continuous.
1

Countable means it can be put in one=to-one correspondence with the natural numbers.

46

CHAPTER 3. QUANTUM MECHANICS

3.1.3

A Countable Basis.

i.e.
Let {un } denote the eigenvectors of a self-adjoint operator A,
n = n un .
Au

(3.27)

By the theorem above {un } form a basis of H, let us suppose that it is a countable basis.
Let {un } be an orthonormal set such that
< un , um >= nm .

(3.28)

Any state may be written as a linear superposition of eigenvectors


=

n un

(3.29)

so that
< um , >=< um ,

n un >= m .

(3.30)

Let us now adopt the useful bra-ket notation of Dirac where the inner product is denoted
by
< un , > hun |i

(3.31)

so that, for example in Cn , vectors are denoted by kets e.g.


un |un i

and

|i

(3.32)

h|.

(3.33)

while adjoint vectors become bras:


un hun |

and

One advantage of this notation is that, being based around the Hilbert space inner
product, it is universal for all explicit realisations of the Hilbert space. However its
main advantage is how simple it is to use.
Using equation (3.30) we can rewrite equation (3.29) in the bra-ket notation as
|i =

hun |i|un i =

|un ihun |i

(3.34)

|un ihun | = IH

where IH is known as the completenes operator. It is worth comparing with Rn where


P
the identity matrix can be written n en eTn = I where en are the usual orthonormal
basis vectors for Rn with zeroes in all compenents except the nth which is one.
Using the properties of the Hilbert space inner product we observe that
= hun |i = h|un i

(3.35)

and further note that this is consistent with the insertion of the completeness operator
between two states
X
X
h|i =
h|un ihun |i =
n n .
(3.36)
n

3.1. CANONICAL QUANTISATION

47

between two states:


We may insert a general operator B
X
X
>= h|B|i

m ihum |i =
< , B
=
h|un ihun |B|u
n B m n m
n,m

where B m n are the matrix components of


example as un are eigenvectors of A with
Am n are

1 0 . . . 0

0 2 . . . 0
A =
.. . .
..
. 0
.
.
0

(3.37)

n,m

written in the un basis. For


the operator B
eigenvalues n then the matrix components

i.e.

Am n = n nm .

(3.38)

. . . n

one can
Theorem 3.1.2. Given any two commuting self-adjoint operators A and B
are simultaneously diagonalisable.
find a basis un such that A and B
Proof. As A is self-adjoint one can find a basis un such that
n = n un .
Au

(3.39)

Now
n=B
Au
n = n Bu
n
ABu
B]
= 0 and hence Bu
n is in the eigenspace of A (i.e. Bu
n=
as [A,
eigenvalue n hence
n = n un .
Bu

(3.40)
P

m m um )

and has
(3.41)

Example: Position operators in R3 .


Let (
x, y, z) be the position operators of a particle moving in R3 then
[
x, y] = 0,

[
x, z] = 0

and [
y , z] = 0

(3.42)

using the canonical quantum commutation rules and hence are simultaneously diagonalisable. One can say the same for px , py and pz .
The Probabilistic Interpretation in a Countable Basis.
The Born rule gives the probability that a measurement of a quantum system will yield
a particular result. It was first evoked by Max Born in 1926 and it was principally for
this work that in 1954 he was awarded the Nobel prize. It states that if an observable
associated with a self-adjoint operator A then the measured result will be one of the
Further it states that the probability that the measurement of |i
eigenvalues n of A.
will be n is given by
h|Pn |i
P (, un )
(3.43)
h|i
where Pn is a projection onto the eigenspace spanned by the normalised eigenvector un
i.e. Pn = |un ihun | giving
of A,
P (, un )

h|un ihun |i
|h|un i|2
=
.
h|i
h|i

(3.44)

48

CHAPTER 3. QUANTUM MECHANICS

Note that if the state was an eigenstate of A (i.e. = n un ) then P (, un ) = 1.


Following a measurement of a state the wavefunction collapses to the eigenstate that
was measured. Given the probability of measuring a system in a particular eigenstate
one can evaluate the expected value when measuring an observable. The expected
value is a weighted average of the measurements (eigenvalues) where the weighting is
in proportion to the probability of observing each eigenvalue. That is we may measure
the observable associated with the operator A of a state and find that n occurs with
probability P (, un ) then the expected value for measuring A is
=
hAi

n P (, un )

(3.45)

n i = n |un i we have that the expectation value of a measurement


Now given that A|u
of the observable associated to A is
=
hAi

X
n

m ihum |i

|h|un i|2 X h|un ihun |A|u


h|A|i
=
=
h|i
h|i
h|i
n,m

(3.46)

= h|A|i.

where we have used hun |um i = nm . If is a normalised state then hAi


The next most reasonable question we should ask ourselves at this point is what is the
which does not share
probability of measuring the observable of a self-adjoint operator B
i.e. what does the Born rule say about measuring observables
the eigenvectors of A,
of operators which do not commute? The answer will lead to Heisenbergs uncertainty
principle, which we relegate to a (rather long) problem.
Problem 3.1.1. The expectation (or average) value of a self-adjoint operator A acting
on a normalised state |i is defined by
h|A|i.

Aavg = hAi

(3.47)

The uncertainty in the measurement of A on the state |i is the average value of its
deviation from the mean and is defined by
q
q
(3.48)
A h(A Aavg )2 i = h|(A AavgI)2 |i
where I is the completeness operator.

(a.) Show that for any two self-adjoint operators A and B


2

2 |i.
|h|AB|i|
h|A2 |ih|B

(3.49)

Hint: Use the Schwarz inequality: | < x, y > |2 < x, x >< y, y > where x, y are
vectors in a space with inner product <, >.
are
(b.) Show that hAB + BAi is real and hAB BAi is imaginary when A and B
self-adjoint operators.
(c.) Prove the triangle inequality for two complex numbers z1 and z2 :
|z1 + z2 |2 (|z1 | + |z2 |)2 .

(3.50)

3.1. CANONICAL QUANTISATION

49

(d.) Use the triangle inequality and the inequality from part (a.) to show that
2
B]|i|

2 |i.
|h|[A,
4h|A2 |ih|B

(3.51)

0 B
I where , R. Show that A0
(e.) Define the operators A0 A I and B
0 ] = [A, B].
0 are self-adjoint and that [A0 , B
and B
(f.) Use the results to show the uncertainty relation:
1
B]|i|

(A)(B) |h|[A,
2

(3.52)

= p?
What does this give when A = q and B

3.1.4

A Continuous Basis.

If an operator A has eigenstates u where the eigenvalue is a continuous variable then


an arbitrary state in the Hilbert space is
Z
|i d |u i.
(3.53)
Then

Z
hu |i =

dhu |u i = .

(3.54)

The mathematical object that satisfies the above statement is the Dirac delta function:
hu |u i ( ).

(3.55)

Formally the Dirac delta function is a distributon or measure that is equal to zero
everywhere apart from 0 when (0) = . Its defining property is that its integral over
R is one. One may regard it as the limit of a sequence of Gaussian functions of width a
having a maximum at the origin, i.e.
x2
1
a (x) exp ( 2 )
a
a

(3.56)

so that as a 0 the limit of the Gaussians is the Dirac delta function as


Z
Z
1
x2
1
exp ( 2 )dx = ( ) a = 1
a (x)dx =
a
a

(3.57)

which is unchanged when we take the limit a 0 and so in the limit has the properties
of the Dirac delta function. We recall that the Gaussian integral
Z
x2
I
dx exp ( 2 )
(3.58)
a

gives
I2

dxdy exp (

x2 + y 2
)=
a2

r2
)
a2
0
0

Z 2 
a2
r2
=
d
exp ( 2 )
2
a 0
0
Z 2
2
a
=
d
2
0
Z

= a2

rdrd exp (

(3.59)
(3.60)
(3.61)
(3.62)

50

CHAPTER 3. QUANTUM MECHANICS

hence

I = a .

(3.63)

As a consequence the eigenstate |u i on its own is not correctly normalised to be a


vector in the Hilbert space as
hu |u i = ( ) hu |u i =

(3.64)

however used within an integral it is a normalised eigenvector for A in the Hilbert space:
Z
d hu |u i = 1.

(3.65)

We can show that the continuous eigenvectors form a complete basis for the Hilbert
space as
Z Z
h|i =
d d hu | |u i
(3.66)
Z Z
=
d d hu |h|u ihu |i|u i
Z Z
=
d d hu |u ih|u ihu |i
Z Z
=
d d ( )h|u ihu |i
Z
= dh|u ihu |i
hence we find the completeness relation for a continuous basis:
Z
d|u ihu | = IH

(3.67)

The Probabilistic Interpretation in a Continuous Basis.


The formulation of Borns rule is only slightly changed in a continuous basis. It now is
stated as the probability of finding a system described by a state |i to lie in the range
of eigenstates between |u i and |u+ i is
Z +
Z +
h|u ihu |i
| |2
P (, u ) =
d
=
d
(3.68)
h|i
h|i

Transformations between Different Bases


We finish this section by demonstrating how a state |i H may be expressed using
different bases for H by using the completeness relation. In particular we show how one
may relate a discrete basis of eigenstates to a continuous basis of eigenstates.
Let {|un i} be a countable basis for H and let {|v i} be a continuous basis, then:
hun |i = n

and

hv |i = .

(3.69)

Hence we may expand each expression using the completeness operator for the alternative basis to find:
= hv |i
X
=
hv |un ihun |i
n

X
n

un ()n

(3.70)


3.2. THE SCHRODINGER
EQUATION.

51

where un () hv |un i, and similarly,


n = hun |i
Z
= d hun |v ihv |i
Z
= d un () .

3.2

(3.71)

The Schr
odinger Equation.

Schrodinger developed a wave equation for quantum mechanics by building upon de


Broglies wave-particle duality. Just as the (dynamical) time-evolution of a system
represented in phase space is given by Hamiltons equations, so the time evolution of a
quantum system is described by Schrodingers equation:
i~

= H
t

(3.72)

A typical Hamiltonian in position space has the form


2

= ~
H
2

n
n
X
X
1 2
+
Vi (q)
mi qi2
i=1

(3.73)

i=1

where V (q) = V (q1 , q2 , . . . qn ) and is Hermitian2 . We will make use of the Hamiltonian
in this form in the following.
Theorem 3.2.1. The inner product on the Hilbert space is time-indpendent.
given
Proof. We will prove this for the L2 norm and use the form of the Hamiltonian H
above. As
Z
h|i =
dk q q q
(3.74)
Rk

we have

q
q
q + q
t
t
k


ZR
i
i
k
=
d q
(H q )q q (Hq )
~
~
Rk

h|i =
t

dk q

(3.75)


where we have used Schr
odingers equation and its complex conjugate: i~
t = H .

This guarantees that the energy eigenstates have real eigenvalues and form a basis of the Hilbert
space. We will only consider Hermitian Hamiltonians in this course. However while it is conventional to
consider only Hermitian Hamiltonians it is by no means a logical consequence of canonical quantisation
and one should be aware that non-Hermitian Hamiltonians are discussed occasionally at research level
see for example the recent work of Professor Carl Bender.

52

CHAPTER 3. QUANTUM MECHANICS

is Hermitian we have H
= H
and so,
As H

Z
n
n

i ~2 X 1 2 q X
k
h|i =
d q
(
+
Vi (q)q )q
t
~
2
mi qi2
Rk
i=1
i=1

n
n
2
2
X
1 q X
i ~
+
Vi (q)q )
q (
~
2
mi qi2
i=1
i=1


Z
n
2
X
1 2 q
i~
k
q
d q
q q
=
2 Rk
mi qi2
qi2
i=1


Z
n
X
q q
q q
i~
1
k
=
d q

+
2 Rk
mi
qi qi
qi qi
i=1
X


n
i~
1 q
q
q q

2
mi qi
qi
Rk
i=1
 n


q
i~ X 1 q
=
q q
2
mi qi
qi
Rk

(3.76)

i=1

=0
if the boundary term vanishes: typically well-behaved wavefunctions which have compact
support and will vanish at . So to complete the proof we have assumed that both
the wavefunctions go to zero while their first-derivatives remain finite at infinity.
From the calculation above we see that the probability density (N.B. just
the integrand above) for a wavefuntion , which was used to normalise the probability
expressed by Borns rule, is conserved, up to a probability current J i corresponding to
the boundary term above:



n
n
X

i~ X 1 q
J i
q
=

q q

(3.77)
t
qi
2
mi qi
qi
qi
i=1

i=1

where J i is called the probability current and is defined by




i~ q
i
q
J
q q
.
2mi qi
qi

(3.78)

Consequently we arrive at the continuity equation for quantum mechanics

+J=0
t

(3.79)

where J is the vector whose components are J i .


While the setting was different, we note the similarity in the construction of the
equations to the derivation of a conserved charge in Noethers theorem as presented
above.

3.2.1

The Heisenberg and Schr


odinger Pictures.

Initially the two formulations of quantum mechanics were not understood to be identical.
The matrix mechanics of Heisenberg was widely thought to be mathematically abstract
while the formulation of a wave equation by Schrodinger although it appeared later was
much more quickly accepted as the community of physicists were much more familiar


3.2. THE SCHRODINGER
EQUATION.

53

with wave equations than non-commuting matrix variables. However both formulations
were shown to be identical. Here we will discuss the two pictures and show the
transformations which transform them into each other.
The Schr
odinger Picture
In the Schr
odinger picture the states are time-dependent = (q, t) but the operators

dA
are not dt = 0. One can find the time-evolution of the states from the Schrodinger
equation:

i~ |(t)iS = H|(t)i
(3.80)
S
t
which has a formal solution


iHt
iHt
|(t)iS = e ~ |(t)iS
(3.81)
= e ~ |(0)iS
t=0

Using the energy eigenvectors (the eigenvectors of the Hamiltonian) as a countable basis
for the Hilber space we have
|(t)iS =

|En ihEn |(0)iS e

iEt
~

(3.82)

i.e. we have taken E to be the eigenvalue for the Hamiltonian of (0)S : H|(0)i
S =
P 0
iEt
~ |(0)iS .
n n En |En i E|(0)iS so that (t) = e
The Heisenberg Picture
In the Heisenberg picture the states are time-independent but the operators are timedependent:
|iH = e

iHt
~

|(t)iS = |(0)iS

(3.83)

while

iHt
iHt
AH (t) = e ~ AS e ~ .

(3.84)

Note that the dynamics in the Heisenberg picture is described by

iH
iH
i

AH (t) =
AH (t) AH (t)
= [H,
AH (t)]
t
~
~
~
and we note the parallel with the statement from Hamiltonian mechanics that
{f, H} for a function f (q, p) on phase space.

(3.85)
df
dt

Theorem 3.2.2. The picture changing transformations leave the inner product invariant.
Proof.
H h|iH

=S h|e

iHt
~

iHt
~

|iS =S h|iS

(3.86)

Theorem 3.2.3. The operator matrix elements are also invariant under teh picturechanging transformations.

54

CHAPTER 3. QUANTUM MECHANICS

Proof.

H h|AH (t)|iH

=S h|e

iHt
~

iHt
AH (t)e ~ |iS

=S h|e

iHt
~

iHt
~

(3.87)

iHt iHt
AS e ~ e ~ |iS

=S h|AS |iS

Example The Quantum Harmonic Oscillator. The Lagrangian for the harmonic oscillator is
1
1
L = mq2 kq 2
(3.88)
2
2
The equation of motion is
k
q = q
(3.89)
m
whose solution is
q = A cos (t) + B sin (t)
(3.90)
q
k
. The Legendre transform give the Hamiltonian:
where = m
H=

k
1
p2
p2
+ q 2 = m 2 q 2 +
.
2m 2
2
2m

(3.91)

The canoonical quantisation procedure gives the quantum hamiltonian for the harmonic
oscillator:
2
= 1 m 2 q2 + p .
H
(3.92)
2
2m
Let us first deal with this by directly trying to solve the Schrodinger equation.
Following the quantization prescription above the Schrodinger equation is
i~

~2 2 1 2
=
+ kq .
t
2m q 2
2

(3.93)

First we look for energy eigenstates:

~2 2 n 1 2
+ kq n = En n ,
2m q 2
2

(3.94)

so that the general solution is


(t) =

eiEn t/~ n .

(3.95)

To continue we write (q)n = f (q)eq


function. We find

2 b2

where b is a constant and f an unknown


2 n
2 2
= f 00 4f 0 b2 qf 0 2b2 f + 4f b4 q 2 f eq b
2
q

(3.96)

and hence

 1
~2
(3.97)
f 00 4b2 qf 0 2b2 f + 4f b4 q 2 f + kq 2 f = En f .
2m
2
So far f was arbitrary so we can choose b4 = km/4~2 so that the terms involving q 2 f
are cancelled. This in turn means that a constant f = C0 provides one solution:

0 = C0 ekmq

2 /2~

E0 =

~2 b2
1
= ~
m
2

(3.98)


3.2. THE SCHRODINGER
EQUATION.
We can fix C0 be demanding that
Z
1=

55

dq|0 (q)|2

Z
2
2
dqekmq /~
= |C0 |

= |C0 |

= |C0 |2

~
km
~
km

1 Z
2

dxex

1

(3.99)

Thus we can take C0 = (km/~)1/4 .


To find other solutions we note that the general equation for f is
f 00 4b2 qf 0 2b2 f =

2m
En f .
~2

(3.100)

It is not hard to convince yourself that polynomials of degree n in q will solve this
equation. One can then work out the En for low values of n. And although 0 is indeed
the ground state this is not obvious.
However there is a famous and very important algebraic way to solve the harmonic
oscilator. Let us make an inspired change of variables and reqrite the Hamiltonian in
terms of
r


i
m
q +
p
(3.101)

=
2~
m
r


m
i

=
q
p
2~
m
so that
r
q =



~

2m

r
p = i

and



~m

.
2

(3.102)

Therefore,






1
1 ~m
2 ~

H = m

2
2m
2m 2


~
=

+
+

+

+
+


4


~

+

2

(3.103)

Problem 3.2.1. Show that [


,
] = 1.
Using [
,
] = 1 we find that



1

= ~
H
+
.
2

(3.104)

The Hilbert space of states may be constructed as follows. Let |ni be an orthonormal
is diagonalised - i.e. these are the energy eignestates:
basis such H

H|ni
En |ni.

(3.105)

56

CHAPTER 3. QUANTUM MECHANICS

Now we note that


1
1

[H,
] = ~
+ ~

~
~

2
2
= ~
[,
]

(3.106)

= ~

and, similarly,

[H,
] = ~
.

(3.107)

Consequently we may deduce that alpha raises the eignevalue of the energy eigenstate,
while
lowers the energy eigenstates:

+ ~
H
|ni = (
H
)|ni = (En + ~)
|ni

(3.108)

~
H
|ni = (
H
)|ni = (En ~)
|ni
consequently is called the creation operator while
is called the annihilation operator.

Together and
are sometimes called the ladder operators.
It would appear that given a single eigenstate the ladder operators create an infinite
set of eigenstates, however due to the postive definitieness of the Hilbert space inner
product we see that the infinite tower of states must terminate at some point. Consider
the length squared of the state |ni:


En 1
1 1

(3.109)
0 hn|

|ni = hn| H |ni =


~
2
~ 2
hence En 12 ~. However the energy eigenvalues of the states
k |ni are
k |ni = (En k~)
H
k |ni

(3.110)

where k Z and k > 0. We see that the eigenvalues of the states are continually
reduced, but we know that a minimum energy exists ( 12 ~) beyond which the eigenstates
will have negative length squared. Consequently we conclude there must exist a ground
state eigenfunction |0i such that |0i = 0. In fact if
|0i = 0 then
h0|
|0i = 0

1
E0 = ~.
2

(3.111)

Finally we comment on the normalisation of the energy eigenstates. Our aim is to find
the normalising constant where
|n 1i =
|ni.

(3.112)

Then as both |n 1i and |ni are normalised we have:


1 = hn 1|n 1i = ||2 hn|

|ni = ||2 nhn|ni = ||2 n

(3.113)

where we have used the observation that

is the number operator.


Problem 3.2.2. Let the state |ni be interpreted as an n-particle eigenstate with energy

satisfies:
En = 12 ~ + n~. Show that the number operator N
|ni = n
hn|N

(3.114)


3.2. THE SCHRODINGER
EQUATION.
Hence =

1
n

and
|ni =

57

n|n 1i.

Problem 3.2.3. Show that


|ni =

n + 1|n + 1i.

Thus we see that the spectrum of the harmonic oscilator is




1
,
En = ~ n +
2

(3.115)

with n = 0, 1, 2, 3.... So indeed 0 found above is the ground state. We could have easily
found it from this discussion as
|0i = 0 becomes the differential equation


i
~ 0
0 = q +
p 0 = q0 +
.
(3.116)
m
m q
Integrating this immediately gives the 0 (q) that we found above. Furthermore the
higher eigenstates can be found by acting with powers of
:
r


1
1
m
~ n

n+1 =

n =
qn
.
(3.117)
m q
n+1
n + 1 2~
These will be normalized and will clearly take the form of a polynomial of degree n
times 0 .
Compare this spectrum to the classical answer we had before:
1
E = k(A2 + B 2 )
2

(3.118)

This depends on the amplitude of the wave and k (not ) and takes any non-negative
value. Whereas in the quantum theory there is a non-zero ground state energy 12 ~ with
a discrete spacing above that. The ground state energy can in fact be measured in what
is known as the Casimir effect. It also plays an important role in string theory leading
to the need to have 10 (or 26) dimensions.

58

CHAPTER 3. QUANTUM MECHANICS

Chapter 4

Group Theory
The first investigations of groups are credited to the famously dead-at-twenty Evariste
Galois, who was killed in a duel in 1832. Groups were first used to map solutions of
polynomial equations into each other. For example the quadratic equation
y = ax2 + bx + c

(4.1)

p
1
(b b2 4ac).
2a

(4.2)

is solved when y = 0 by
x=

It has two solutions () which may be mapped into each other by a Z2 reflection which
swaps the + solution for the solution. The Z2 is the cyclic group of order two
(which is sometimes denoted C2 and similarly there exist groups which map the roots of
a more general polynomial equation into each other. Groups have a geometrical meaning
too. The symmetries which leave unchanged the n-polygons under rotation are also the
cyclic groups, Zn (or Cn ). For example Z3 rotates an equilateral triangle into itself using
4
6
rotations of 2
3 , 3 and 3 = 2 about the centre of the triangle and Z4 is the group of
rotations of the square onto itself.
The cyclic groups are examples of discrete symmetry groups. The action of the
discrete group takes a system (e.g. the square in R2 ) and rotates it onto itself without
passing through any of the suspected intervening orientations. The Z4 group includes
the rotation by 2 but it does not include any of the rotations through angles less than

2 and greater than 0. One may imagine that under the action of Z4 the square jumps
between orientations:
D

(4.3)

On the other hand continuous groups (such as the rotation group in R2 move the
square continuously about the centre of rotation. The rotation is parameterised by a
continuous angle variable, often denoted . The Norwegian Sophus Lie began the study
of continuous groups, also known as Lie groups, in the second half of the 19th century.
Rather than thinking about geometry Sophus Lie was interested in whether there were
some groups equivalent to Galois groups which mapped solutions of differential equations
59

60

CHAPTER 4. GROUP THEORY

into each other1 . Such groups were identified, classified and named Lie groups. The
rotation group SO(n) is a Lie group.
In the wider context groups may act on more than algebraic equations or geometric
shapes in the plane and the action of the group may be encoded in different ways. The
study of the ways groups may be represented is aptly named representation theory.
It is believed and successfully tested (at the present energies of expereiments) that
the constiuent objects in the universe are invariant under certain symmetries. The
standard model of particle physics holds that all known particles are representations
of SU (3) SU (2) U (1). More simply, Einsteins special theory of relativity may be
studied as the theory of Lornetz groups.
We will make contact with most of these topics in this chapter and we begin with
the preliminaries of group theory: just what is a group?

4.1

The Basics

Definition A group G is a set of elements {g1 , g2 , g3 . . .} with a composition law ()


which maps G G G by (g1 , g2 ) g1 g2 such that:
(i) g1 (g2 g3 ) = (g1 g2 ) g3

g1 , g2 , g3 G

(ii) e G such that e g = g e = g

gG

(iii) g 1 G such that g g 1 = g 1 g = e

ASSOCIATIVE
IDENTITY

gG

INVERSES

Consequently the most trivial group consists of just the identity element e. Within the
definition above, together with the associative proprty of the group multiplication, the
existence of an identity element and an inverse element g 1 for each g, there is what
we might call the zeroth property of a group. namely the closure of the group (that
g1 g2 G.
Let us now define some of the most fundamental ideas in group theory.
Definition A group G is called commutative or ableian if g1 g2 = g2 g1 g1 , g2 G.
Definition The centre Z(G) of a group is:
Z(G) {g1 G | g1 g2 = g2 g1 g2 G}

(4.4)

The centre of a group is the subset of elements in the group which commute with all
other elements in G. Trivially e G as e g = g e g G.
Definition The order |G| of a group G is the number of elements in the set {g1 , g2 , . . .}.
For example the order of the group Z2 is |Z2 | = 2, we have also seen |Z3 | = 3, |Z4 | = 4
and in general |Zn | = n, where the elements are the rotations m2
n where m Z mod n.
Definition For each g G the conjugacy class Cg is the subset
Cg {h g h1 | h G} G.
1

(4.5)

Very loosely, as each solution to a differential equation is correct up to a constant, the solutions
contain a continuous parameter: the constant.

4.2. COMMON GROUPS

61

Exercise Show that the identity element of a group G is unique.


Solution Suppose e and f are two distinct identity elements in G. Then eg = f g
e (g g 1 ) = f (g g 1 ) e = f . Contrary to the supposition.

4.2

Common Groups

A list of groups is shown in table 4.2.1, where the set and the group multiplication law
have been highlighted.
A few remarks are in order.
(1,6-10) are finite groups satisfying |G| < .
(14-20) are called the classical groups.
Groups can be represented by giving their multiplication table. For example consider Z3 :
e
g g2
e

g2

g2

g2

g2

Arbitrary combinations of group elements are sometimes called words.

4.2.1

The Symmetric Group Sn

The Symmetric group Sn is the group of permutations of n elements. For example S2


has order |S2 | = 2! and acts on the two elements ((1, 2), (2, 1)). The group action is
defined element by element and may be written as a two-row matrix with n columns,
where the permutation is defined per column with the label in row one being substituted
for the label in row two. For S2 consider the group element
!
1 2
g1
.
(4.6)
2 1
This acts on the elements as
g1 (1, 2) = (2, 1)

g1 (2, 1) = (1, 2)

(4.7)

g12 (1, 2) = (1, 2)

g12 (2, 1) = (2, 1)

(4.8)

hence g1 = g11 and g12 = e and S2 {e, g1 }. It is identical to Z2 .


More generally for the group Sn having n! elements it is denoted by a permutation
P such as:
!
1 2 3 ... n
P
(4.9)
p1 p2 p3 . . . pn
where p1 , p2 , p3 , . . . pn {1, 2, 3, . . . n}. The permutation P
(p1 , p2 , p3 , . . . , pn ). In general successive permutations do not
consider S3 and let
!
1 2 3
1 2
P
and
Q
2 3 1
1 3

takes (1, 2, 3, . . . , n) to
commute. For example

3
2

!
.

(4.10)

62

CHAPTER 4. GROUP THEORY

1
2
3
4
5
6
7
8

G = {e}
{F} where F = Z, Q, R, C

{F F\0} where F = Q, R, C
{F>0 } where F = Q, R
{0, n, 2n, 3n, . . .} nZ where n Z.
{0, 1, 2, 3, . . . , (n 1)}.
{1, 1}.
2 3
{e, g, g , g , . . . g n1 }.

Sn the symmetric group or


permutation group of n elements.
Dn the dihedral group.
The group of rotations and reflections
of an n-sided polygon with undirected edges.
Bijections f : X X where X is a set.
GL(V ) {f : V V | f is linear and invertible}.
V is a vector space.
A vector space, V .
GL(n, F) {M n n matrices | M is invertible.}
The general linear group, with matrix entries in F.
SL(n, F) {M GL(n, F) | det M = 1}
The special linear group.
O(n) {M GL(n, R) | M T M = In }
The orthogonal group.
SO(n) {M GL(n, R) | det M = 1}
The special orthogonal group.
U (n) {M GL(n, C) | M M = In }
The unitary group.
SU (n) {M U (n) | det M = 1}
The special unitary group.
Sp(2n) {M GL(2n, R) | M T !
JM = J}

10

11
12
13
14
15
16
17
18
19
20

Where J

21

0n
In

In
0n

Under multiplication.
Under addition.
Under multiplication.
An abelian group under multiplication.
An abelian group under addition.
Addition mod (n), e.g. a + b = c mod n.
Under multiplication.
With g k g l = g (k+l) mod n .
This is the cyclic group of order n, Zn .
Under the composition of permutations.
Under the composition of permutations.
Composition of transformations.
Composition of maps.
Composition of maps.
An abelian group under vector addition.
Matrix multiplication.
Matrix multiplication.
Matrix multiplication.
Matrix multiplication.
Matrix multiplication.
Matrix multiplication.
Matrix multiplication.

The symplectic group.


O(p, q) {M GL(p + q, R) | M T p,q!
M = p,q }

Matrix multiplication.

Ip
0pq
Where p,q
.
0pq Iq
!
a b
22 SL(2, Z) {
| a, b, c, d Z, ad bc = 1} Matrix multiplication.
c d
The modular group.

Table 4.2.1: A list of commonly occurring groups.

4.2. COMMON GROUPS

63

Then,
!

P Q=

1 2 3
1 3 2

QP =

1 2 3
2 3 1

while

1 2 3
2 3 1

1 2 3
1 3 2

1 2 3
3 2 1

1 2 3
2 1 3

(4.11)

(4.12)

Hence P Q 6= Q P and S3 is non-abelian. So it also follows that Sn is non-abelian


for all n > 2.
Alternatively one may denotes each permutation by its disjoint cycles of labels formed
by multiple actions of that permutation. For example consider P S3 as defined above.
Under successive actions of P we see that the label 1 is mapped as:
P

1 2 3 1.

(4.13)

We may denote this cycle as (1, 2, 3) and it defines P entirely. On the other hand Q, as
defined above, may be described by two disjoint cycles:
Q

1 1
Q

(4.14)
Q

2 3 2.

(4.15)

We may write Q as two disjoint cycles (1), (2, 3). In this notation S3 is written
{(), (1, 2), (1, 3), (2, 3), (1, 2, 3), (1, 3, 2)}

(4.16)

where () denotes the trivial identity permutation. S3 is identical to the dihedral group
D3 . The dihedral group Dn is sometimes defined as the symmetry group of rotations
of an n-sided polygon with undirected edges - this definition requires a bit of thought,
as the rotations may be about an axis through the plane of the polygon and so are
reflections. The dihedral group should be compared with cyclic groups Zn which are
the rotation symmetries of an n-polygon with directed edges, while Dn includes the
reflections in the plane as well. For example if we label the vertices of an equilateral
triangle by 1, 2 and 3 we could denote D3 as the following permutations of the vertices
!
!
!
1 2 3
1 2 3
1 2 3
{
,
,
,
(4.17)
1 2 3
2 1 3
3 2 1
!
!
!
1 2 3
1 2 3
1 2 3
,
,
}
1 3 2
3 1 2
3 1 2
= {(), (1, 2), (1, 3), (2, 3), (1, 2, 3), (1, 3, 2)}.
So we see that D3 is identical to S3 . We see that there are three reflections and three
rotations within D3 (the identity element is counted as a rotation for this purpose). In
general Dn contains the n rotations of Zn as well as reflections. For even n there is an
axis in which the reflection is a symmetry which passes through each pair of opposing
vertices ( n2 and also reflections in the line through the centre of each opposing edge n2 .
For odd n there are again n lines about which reflection is a symmetry, however these
lines now join a vertex to the middle of an opposing edge. In both even and odd cases
there are therefore n rotations and n reflections. Hence |Dn | = 2n.

64

CHAPTER 4. GROUP THEORY

We may wonder if all dihedral groups Dn are identical to the permutation groups
Sn . The answer is no, it was a coincidence that S3
= D3 . We can convince ourselves
of these by considering the order of Sn and Dn . As we have already observed |Sn | = n!
while |Dn | = 2n. For the groups to be identical we at least require their orders to match
and we note that we can only satisfy n! = 2n for n = 3.
Returning to the symmetric group we will mention a third important notation for
permutations which is used to define symmetric and anti-symmetric tensors. Each permutation P can be written as combinations of elements called transpositions ij which
swap elements i and j but leave the remainder untouched. Consequently each transposition may be written as a 2-cycle i,j = (i, j). For example,
!
1 2 3
P
= 1,3 2,3 .
(4.18)
2 3 1
If there are N transpositions required to replicate a permutation P Sn then the sign
of the permuation is defined by
Sign(P ) (1)N .

(4.19)

You should convince yourself that this operation is well-defined and that each permutation P has a unique value of Sign(P ) - this is not obvious as there are many different
combinations of the transpositions which give the same overall permutation. The canonical way to decompose permutations into transpositions is to consider only transpositions
which interchange consecutive labels, e.g 1,2 , 2,3 , . . . n1,n . A general r-cycle may be
decomposed (not in the canonical way) into r 1 transpositions:
(n1 , n2 , n3 , . . . nr ) = (n1 , n2 )(n2 , n3 ) . . . (nr1 , nr ) = n1 ,n2 n2 ,n3 . . . nr1 ,nr .

(4.20)

Consequently an r-cycle corresponds to a permutation R such that Sign(R) = (1)(r1) .


Therefore the elements of S3
= D3 may be partitioned into those elements of sign 1
(), (1, 2, 3), (1, 3, 2), which geometrically correspond to the rotations of the equilateral
triangle in the plane, and those of sign -1 (1, 2), (2, 3), (1, 3) which are the reflections in
the plane. The subset of permutations P Sn which have Sign(P )=1 form a sub-group
of Sn which is called the alternating group and denoted An .
We finish our discussion of the symmetric group by mentioning Cayleys theorem. It
states that every finite group of order n can be considered as a subgroup of Sn . Since
Sn contains all possible permutations of n labels it is not a surprising theorem.
Problem 4.2.1. Dn is the dihedral group the set of rotation symmetries of an n-polygon
with undirected edges.
(i.) Write down the multiplication table for D3 defined on the elements {e,a,b} by a2 =
b3 = (ab)2 = e. Give a geometrical interpretation in terms of the transformations
of an equilateral triangle for a and b.
(ii.) Rewrite the group multiplication table of D3 in terms of six disjoint cycles given
by repeated action of the basis elements on the identity until they return to the
identity, e.g. e e under the action of e, e a e under the action of a.

4.2. COMMON GROUPS

65

(iii.) Label the vertices of the equilateral triangle by (1, 2, 3). Denote the vertices of the
triangle by (1, 2, 3) and give permutations of {1, 2, 3} for e, a and b which match
the defining relations of D3 .
(iv.) Rewrite each of the cycles of part (b.) in cyclic notation on the vertices (1, 2, 3) to
show this gives all the permutations of S3 .

4.2.2

Back to Basics

Definition A subgroup H of a group G is a subset of G such that e H, if g1 , g2 H


then g1 g2 H and if g H g 1 H where g, g1 , g2 , g 1 G.
The identity element {e} and G itself are called the trivial subgroups of G. If a subgroup
H is not one of these two trivial cases then it is called a proper subgroup and this is
denoted H < G. For example S2 < S3 as:
S2 = {(), (1, 2)}

and

(4.21)

S3 = {(), (1, 2), (1, 3), (2, 3), (1, 2, 3), (1, 3, 2)}.
Definition Let H < G. The subsets g H {g h G | h H} are called left-cosets
while the subsets H g {h g G | h H} are called right-cosets.
A more formal way to define a left coset is to consider and equivalence relation
g1 g2 iff g11 g2 H. Equivalence relations satisfy three properties
gg
if g1 g2 then g2 g1
if g1 g2 and g2 g3 then g1 g3
It is easy to check these for our case. The left coset g H is then defined as H/ .
Similarly a right coset is defined by the equivalence relation g1 g2 iff g1 g21 H.
The left-coset g H where g G contains the elements
{g h1 , g h2 , . . . , g hr }

(4.22)

where r |H| and {h1 , h2 , . . . , hr } are the distinct elements of H. One might suppose
that r < |H| which could occur if two or more elements of g H were identical, but if
that were the case we would have
g h1 = g h2

h1 = h2

(4.23)

but h1 and h2 are defined to be distinct. Hence all cosets of G have the same number
of elements which is |H|, the order of H.
Consequently any two cosets are either disjoint or coincide. For example, consider
the two left-cosets g1 H and g2 H and suppose that there existed some element g
in the intersection of both cosets, i.e. g g1 H g2 H. In this case we would have
g = g1 h1 = g2 h2 for some h1 , h2 H. Then,
1
g1 H = (g h1
1 ) H = g H = g (h2 H) = g2 H.

(4.24)

66

CHAPTER 4. GROUP THEORY

Hence either the cosets are disjoint or if they do have a non-zero intersection they are
in fact coincident. This means that the cosets provide a disjoint partition of G

g 1H
g 2H

gn H

g 3H

(4.25)

hence
|G| = n|H|

(4.26)

for some n Z. This statement is known as Lagranges theorem which states that the
order of any subgroup of G must be a divisor of |G|.
A corollary of Lagranges theorem is that groups of prime order have no proper
subgroups (e.g. Zn where n is prime).
Definition H < G is called a normal subgroup of G if
gH =H g

(4.27)

g G. This is denoted H C G.
The definition of a normal subgroup is equivalent to saying that g H g 1 = H.
Definition G is called a simple group is it has no non-trivial normal subgroups (i.e.
besides {e} and G itself).
Theorem 4.2.1. If H C G then the set of cosets
law
(g1 H) (g2 H) = (g1 g2 ) H

G
H

is itself a group with composition


g1 , g2 G.

(4.28)

G
.
This group is called the quotient group, or factor group, and denoted H
Note that the normal condition is needed to ensure that this product is well defined,
i.e. independent of the choice of coset representative. To see this suppose that we
choose g1 G and g2 G as the coset representatives so that the coset representative
of (g1 g2 ) H is g1 g2 . But we could also have chosen g10 = g1 h1 and g20 = g2 h2
(here we are talking about left cosets). In this case the coset representative of the
product is h1 g1 h2 g2 and we require that this is equivalent to g1 g2 . This means that
g21 g11 h1 g1 h2 g2 H. If H is normal then g21 g11 h1 g1 g2 = h00 H and g21 h2 g2 = h000
H so that g21 g11 h1 g1 h2 g2 = h00 g21 h2 g2 = h00 h000 H.

Proof. Evidently it is closed as the group action takes g H g H g H. Let us


check the three axioms that define a group.

4.2. COMMON GROUPS

67

(i.) Associativity:
(g1 H) ((g2 H) (g3 H)) = (g1 H) (g2 g3 ) H

(4.29)

= (g1 (g2 g3 )) H
= ((g1 g2 ) g3 ) H
= ((g1 g2 ) H) (g3 H)
= ((g1 H) (g2 H)) (g3 H)
(ii.) Identity. The coset e H acts as the identity element:
(e H) (g H) = (e g) H = g H
(g H) (e H) = (g e) H = g H

(4.30)

(iii.) Inverse. The inverse of the coset g H is the coset g 1 H as:


(g H) (g 1 H) = e H = H

(4.31)

N.B. that the group composition law arises as H C G so g1 H g2 H = g1 g2 H.


Let us give a simple example: modular arithmetic. We start with Z as an additive
group. Let fix an integer p and let H = pZ = {kp|k Z}. It is easy to see that pZ
is a subgroup of Z with the standard definition of addition. Since Z is abelian pZ is a
normal subgroup. Thus the coset Z/pZ is a group. In particular the cosets are
n H = {n + kp|k Z}

(4.32)

There are p disjoint choices:


0H ,

1H ,

2H ,

...

(p 1) H .

(4.33)

since p H = 0 H, (p + 1) H = 1 H etc.. The group product is just addition modulo


p:
(n1 H) (n2 H) = (n1 + n2 ) H = {n1 + n2 + kp|k Z}
= ((n1 + n2 ) mod p) H .
Let us look at another example where the subgroup
S3 which has elements
!
!
1 2 3
1 2 3
S3 = {
,
,
1 2 3
2 1 3
!
!
1 2 3
1 2 3
,
,
1 3 2
3 1 2

(4.34)

H is not normal. Se consider

1 2 3
3 2 1

1 2 3
3 1 2

(4.35)

Let us take the subgroup H to be


H={

1 2 3
1 2 3

!
,

1 2 3
2 1 3

!
}.

(4.36)

68

CHAPTER 4. GROUP THEORY

This is clear a subgroup since it simply consists of two elements e and g with g 2 = e. In
fact H = S2 since it is just permuting the first two elements. One can explicitly check
that
!
!
1 2 3
1 2 3
H =H
=H
(4.37)
1 2 3
1 2 3
as expected. And also that
1 2 3
2 1 3

!
H =H

1 2 3
2 1 3

!
=H

(4.38)

as expected. But lets look at a non-trivial coset:


!
!
!
!
1 2 3
1 2 3
1 2 3
1 2 3
H ={
,
1 3 2
1 3 2
1 2 3
1 3 2
!
!
1 2 3
1 2 3
={
,
}
1 3 2
3 1 2

1 2 3
2 1 3

!
}

(4.39)
But the right coset is
!
1 2 3
={
H
1 3 2
={

1 2 3
1 2 3

1 2 3
1 3 2

1 2 3
1 3 2
,

1 2 3
2 3 1

!
,

1 2 3
2 1 3

1 2 3
1 3 2

!
}

!
}
(4.40)

and this is not the same as the left coset. So although S2 is a subgroup of S3 it is not a
normal subgroup.

4.3

Group Homomorphisms

Maps between groups are incredibly useful in recognising similar groups and constructing
new groups.
Definition A group homomorphism is a map f : G G0 between two groups (G, )
and (G0 , 0 ) such that
f (g1 g2 ) = f (g1 ) 0 f (g2 )

g1 , g2 G

(4.41)

Definition A group isomorphism is an invertible group homomorphism.


If an isomorphism exists between G and G0 we write G
= G0 and say that G is isomorphic
to G0 .
Definition A group automorphism is an isomorphism f : G G.
Problem 4.3.1. If f : G G0 is a group homomorphism between the groups G and
G0 , show that

4.3. GROUP HOMOMORPHISMS

69

(i.) f (e) = e0 , where e and e0 are the identity elements of G and G0 respectively, and
(ii.) f (g 1 ) = (f (g))1 .
Theorem 4.3.1. If f : G G0 is a group homomorphism then the kernel of f , defined
as Ker(f ) {g G|f (g) = e0 } is a normal subgroup of G.
Problem 4.3.2. Prove Theorem 4.3.1.
G
0
The theorem above can be used to prove that Ker(f
) = G for a given group homoG
0
morphism f : G G0 , or conversely given an isomorphism between Ker(f
) and G to
identify the group homomorphism f (see section 4.3.1). A corollary of the theorem
above is that simple groups, having no non-trivial normal subgroups, admit only trivial
homomorphisms, i.e. those for which Ker(f ) = G or Ker(f ) = {e}.

Comments
(nZ, +) are abelian groups and hence normal subgroups of Z: nZ C Z.
(F>0 , ) C (F , ).
Group 6 in table 4.2.1 ({0, 1, 2, 3, . . . , (n 1)}, + mod (n)) is isomorphic to group
8 ({e, g, g 2 , g 3 , . . . g n1 }, g k g l = g (k+l) mod n ), with the group isomorphism being
f (1) = g.
Dn < Sn and Dn is not a normal subgroup in general.
Sign(P Sn ) Z2 is a group homomorphism. Consequently the alternating
group An (P Sn , Sign(P ) = 1) is a normal subgroup of Sn as An Ker(Sign).
The determinant, Det is a group homomorphism: Det(GL(n, F)) (F , ).
Hence:
- SL(n, F) C GL(n, F) as SL(n, F) Ker(Det),
- SO(n) C O(n) and
- SU (n) C U (n).
And so

= (F , ),

GL(n,F)
SL(n,F)

O(n)
SO(n)

= Z2 and

U (n)
SU (n)

= U (1) {z C, |z| = 1}.

The centre of SU (2) denoted Z(SU (2)) = Z2 and one can show that the coset
group SUZ2(2)
= SO(3).
There are a number of simple ways to create new groups from known groups for example:
(1.) Given a group G, identify a subgroup H. If these are normal H C G then
group.

G
H

is a

70

CHAPTER 4. GROUP THEORY

(2.) Given two groups G and G0 , find a group homomorphism F : G G0 such that
G
0
Ker(f )CG then Ker(f
) = G and we observe as a corollary that Ker(f ) is a group.
(3.) One can form the direct product of groups to create more complicated groups.
The direct product of two groups G and H is denoted G H and has composition
law:
(g1 , h1 ) 0 (g2 , h2 ) (g1 G g2 , h1 H h2 )
(4.42)
where g1 , g2 G, h1 , h2 H, G is the composition law on G and H is the
composition law on H. E.g. the direct product R R has the compsition law
corresponding to two-dimensional real vector addition, i.e. (x1 , y1 ) + (x2 , y2 ) =
(x1 + x2 , y1 + y2 ). The direct product of a group G with itself G G has a natural
subgroup (G) called the diagonal and defined by (G) {(g, g) GG|g G}.
(4.) If X is a set and G a group such that there exists a map f : X G then the
functions f with the composition law
f1 0 f2 (x) f1 (x) G f2 (x)

(4.43)

where x X form a group. For example if X = S 1 the set of maps of X into G


form the loop group of G.
There are only a finite number of finite simple groups. The quest to identify them all
is universally accepted as having been completed in the 1980s. In addition to groups
such as the cyclic groups Zn , the symmetric group Sn , the dihedral group Dn and
the alternating group An there are fourteen other infinite series and twenty-six other
sporadic groups. These include:
The Matthieu groups (e.g. |M24 | = 21 0.33 .5.7.11.23 = 244, 823, 040),
the Janko groups (e.g. |J4 | 8.67 1019 ),
the Conway groups (e.g. |Co1 | 4.16 1018 ),
the Fischer groups (e.g. |F i24 | 1.26 1024 ) and
the Monster group (|M | 8.08 1053 ).
Definition Let G be a group and X be a set. The (left) action of G on X is a map
taking G X X and denoted2
(g, x) g x Tg (x)

(4.44)

that satisfies
(i.) (g1 g2 ) x = g1 (g2 x) g1 , g2 G, x X
(ii.) e x = x

x X where e is the identity element in G.

The set X is called a (left) G-set.


2

Here we use Tg to denote the left-translation by g, but we could similarly define the right-translation
with the group element acting on the set from the right-hand-side.

4.3. GROUP HOMOMORPHISMS

71

Definition The orbit of x X under the G-action is


G x {x0 X|x0 = g x

g G}.

(4.45)

Definition The stabiliser subgroup of x X is the group of all g G such that gx = x,


i.e.
Gx {g G|g x = x}.

(4.46)

Definition The fundamental domain is the subset XF X such that


(i.) x XF

gx
/ XF

g G\{e} and

(ii.) X = gG g XF .
Examples
(1.) Sn acts on the set {1, 2, 3, . . . n}.
(2.) A group G can act on itself in three canonical ways:
(L)

(i.) left translation: Tg1 (g2 ) = g1 g2 ,


(R)

(ii.) right translation: Tg1 (g2 ) = g2 g1 and


(R)

(L)

(iii.) by conjugation3 : Tg1 Tg1 (g2 ) = g1 g2 g11 Adg1 (g2 ).


1

(3.) SL(2, Z) acts on the set of points in the upper half-plane H {z C|Im(z) > 0}
by the M
obius transformations:


a b
c d

 
az + b
H
,z
cz + d

(4.47)

Problem 4.3.3. Consider the Klein four-group, V4 , (named after Felix Klein) consisting
of the four elements {e, a, b, c} and defined by the relations:
a2 = b2 = c2 = e,

ab = c,

bc = a

and

ac = b

(i.) Show that V4 is abelian.


(ii.) Show that V4 is isomorphic to the direct product of cyclic groups Z2 Z2 . To do
this choose a suitable basis of Z2 Z2 and group composition rule and use it to
show that the basis elements of Z2 Z2 have the same relations as those of V4 .

4.3.1

The First Isomomorphism Theorem

The first isomomorphism theorem combines many of the observations we have made in
the preceeding section.
Theorem 4.3.2. (The First Isomorphism Theorem) Let G and G0 be groups and let
f : G G0 be a group homomorphism. Then the image of f is isomorphic to the coset
G
G
0
group Ker(f
) . If f is a surjective map then G = Ker(f ) .
3

The conjugate action is also called the group adjoint action

72

CHAPTER 4. GROUP THEORY

Proof. Let K denote the kernel of f and H denote the image of f . Define a map
G
: K
H by
(g K) = f (g)
(4.48)
where g G. Let us check that is well-defined in that it maps different elements in a
coset gK to the same image f (g). Suppose that g1 K = g2 K then g11 g2 K and
(g1 K) = f (g1 )

(4.49)

= f (g1 ) 0 e0
= f (g1 ) 0 f (g11 g2 )
= f (g1 g11 g2 )
= f (g2 )
= (g2 K).
is a group homomorphism as
(g1 K) 0 (g2 K) = f (g1 ) 0 f (g2 )

(4.50)

= f (g1 g2 )
= ((g1 g2 ) K)
= ((g1 K) (g2 K))
as K C G. To prove that is an isomorphism we must show it is surjective (onto)
and injective (one-to-one). For any h H we have by the definition of H that there
exists g G such that f (g) = h, hence h = f (g) = (g K) and is surjective. To
show that is injective let us assume the contrary statement that two distinct cosets
(g1 K 6= g2 K) are mapped to the same element f (g1 ) = f (g2 ). As f is a homorphism
f (g11 g2 ) = e0 , hence g11 g2 K and so g1 K = g1 (g11 g2 K) = g2 K
contradicting our assumption that g1 K 6= g2 K. Hence is injective. As is both
surjective and injective it is a bijection. The inverse map 1 (f (g)) = g K is also a
homomorphism:
1 (f (g1 ) 0 f (g2 )) = 1 (f (g1 g2 ))

(4.51)

= (g1 g2 ) K
= (g1 K) (g2 K)
= 1 (g1 K) 0 1 (g2 K))
as well as a bijection. Hence is a group isomorphism and
G
0
onto G0 then H = G0 and Ker(f
) =G.

4.4

G
Ker(f )

= H. If f is surjective

Some Representation Theory

Definition A representation of a group on a vector space V is a group homomorphism


: G GL(V ).
In other words a representation is a way to write the group G as matrices acting on
a vector space which preserves the group composition law. Many groups are naturally

4.4. SOME REPRESENTATION THEORY

73

written as matrices e.g. GL(n, F), SL(n, F), SO(n), O(n), U (n), SU (n) etc. (where
F stands for Z, R, Q, C . . .) however there may be numerous ways to write the group
elements as matrices. In addition not all groups can be represented as matrices e.g. S
(the infinite symmetric group) - try writing out an matrix! Similarly GL(, F),
SL(, F), . . . for that matter. Here V is called the representation space and the dimension of the representation is the dimension of the vector space V , i.e. Dim(V ).
Definition If a representation is such that Ker() = e where e is the identity element
of G then is a faithful representation.
That Ker is trivial indicates that is injective (one-to-one), as suppose was not injective so that (g1 ) = (g2 ) where g1 6= g2 for g1 , g2 G then as is a homomorphism
(g21 g1 ) = I

(4.52)

where I is the identity matrix acting on V . Hence g21 g1 Ker() and the kernel
would be non-trivial.
Definition A representation 1 (G) GL(V1 ) is equivalent to a second representation
2 (G) GL(V2 ) if there exists an invertible linear map T : V1 V2 such that
T 1 (g) = 2 (g)T

gG

(4.53)

The map T is called the intertwiner of the representations P i1 and P i2 .


Definition W V is an invariant subspace of a representation : G GL(V ) if
(g)W W for all g G.
W is called a subrepresentation space and if such an invariant subspace exists evidently
one can trivially construct a representation of G whose dimension is smaller than that
of (as Dim(W ) < Dim(V )) by restricting the action of to its action on W . The
representations which possess no invariant subspaces are special.
Definition An irreducible representation : G GL(V ) contains no non-trivial invariant sub-spaces in V .
That is there do not exist any subspaces W V such that (g)W W g G
except W = V or W = {e}. The irreducible represesntations are often referred to by
the shorthand irrep and they are the basic building blocks of all the other reducible
representations of G. They are the prime numbers of representation theory.

4.4.1

Schurs Lemma

Theorem 4.4.1. (Schurs lemma first form) Let 1 : G GL(V ) and 2 : G


GL(W ) be irreducible representations of G and let T : V W be an intertwining map
between 1 and 2 . Then either T = 0 (the zero map) or T is an isomorphism.
Proof. T is an intertwining map so T 1 (g) = 2 (g)T for all g G. First we show that
Ker(T ) is an invariant subspace of V as if v Ker(T ) then T v = 0 (as the identity
element on the vector space is the zero vector under vector addition), therefore
T 1 (g)v = 2 (g)T (v) = 0

1 (g)v Ker(T ) v Ker(T ).

(4.54)

74

CHAPTER 4. GROUP THEORY

Hence Ker(T ) is an invariant subspace of V under the action of 1 (G). As 1 (G) is an


ireducible representation of G then Ker(T ) = {0} or V . If Ker(T ) = V then T is a map
sending all v V to 0 W (the zero map) and T = 0. If Ker(T ) = 0 V then T is an
injective map. If T is injective and in addition surjective then it is an isomorphism, so
it remains for us to show that if T is not the zero map it is a surjective map. We will
do this by proving that the image of T is an invariant subspace of W . Let the image of
a vector v V be denoted w W , i.e. T (v) = w then
2 (g)w = 2 (g)T (v) = T (1 (g)v) Im(T )

gG

(4.55)

and so the image of T is an invariant subspace of W . As 2 is an irreducible representation then it has no non-trivial invariant subspaces, hence Im(T ) = {0} or W . If the
image of T is the zero vector then T is the zero map, otherwise if the image of T is W
then T is a surjective map. Consequently either T = 0 or T is an isomorphism between
V and W .
Theorem 4.4.2. (Schurs lemma second form) If T : V V is an intertwiner from an
irreducible representation to itself and V is a finite-dimensional complex vector space
then T = I for some C.
Proof. We have T (g) = (g)T and as V is a complex vector space then one can always
solve the equation det(T I) = 0 to find a complex eigenvalue 4 . Hence T v = v
where v is an eigenvector of T and
T (g)v = (g)T v = (g)v

gG

(4.56)

So (g)v is another eigenvector for T with eigenvalue . Hence the -eigenspace of T


is an invariant subspace of (G). As is an irreducible representation then the eigenspace of T is either {0} or V itself. If we assume V to be non-trivial then at least
one eigenvalue exists and so the -eigenspace of T is V itself. Therefore
T v = v

vV

T = I.

(4.57)

A corollary of Schurs lemma is that if there exist a pair of intertwining maps T1 :


V W and T2 : V W which are both non-zero then T1 = T2 for some C. For if
T2 is non-zero then it is an isomorphism of V and W and its inverse map T21 : W V
is also an interwtwiner. Now
T1 T21 2 (g) = T1 1 (g)T21 = 2 (g)T1 T21

(4.58)

hence T1 T21 : W W and by Schurs lemma (second form) we have T1 T21 = I and
so T1 = T2 for some C.
Problem 4.4.1. If (G) is a finite-dimensional representation of a group G, show that
the matrices (g) also form a representation, where (g) is the complex-conjugate of
(g).
4

This gives a polynomial in which always has a solution over C, or indeed over any algebraically
closed field.

4.4. SOME REPRESENTATION THEORY

75

Problem 4.4.2. The representation (g) may or may not be equivalent to (g). If
they are equivalent then there exists an intertwining map, T , such that:
(g) = T 1 (g)T
Show that if (g) is irreducible then T T = I
Problem 4.4.3. If (g) is a unitary representation on Cn show that T T = I. (Hint:
Make use of the fact that the inner product on Cn is < v, w >= v w where v, w Cn
to find a relation between and .) Show that T may be redefined so that = 1 and
that T is either symmetric or antisymmetric.
Problem 4.4.4. Let G be an abelian group. Show that
(g2 ) = (g1 )1 (g2 )(g1 )
where g1 , g2 G and is an irreducible representation of G. Hence show that every
complex irreducible representation of an abelian group is one-dimensional by proving
that (g) = I for all g G where C.
Problem 4.4.5. Prove that a representation of G of dimension n + m having the form:
!
A(g) C(g)
(g) =
gG
0
B(g)
is reducible. Here A(g) is an n n matrix, B(g) is an m m matrix, C(g) is an n m
matrix and 0 is an empty m n matrix where n and m are integers and n > 0.
Problem 4.4.6. The affine group consists of affine transformations (A, b) which act on
a D-dimensional vector x as:
(A, b)x = Ax + b
Find, with justification, a (D + 1)-dimensional reducible representation of the affine
group of transformations.
Definition Let V be a vector space endowed with an inner product < , >. A representation : G GL(V ) is called unitary if (g) are unitary operators i.e.
< (g)v, (g)w >=< v, w >

g G,

v, w V.

(4.59)

Definition Let : G GL(V ) be a representation on a finite-dimensional vector


space V , then the character of is the function : G C defined by
(g) = T r((g))

(4.60)

where T r is the trace.


Notice that (e) = T r((e)) = T r(I) = Dim(V ) is the dimension of the representation.
The character is constant on the conjugacy classes of a group G as
(g h g 1 ) = T r((g h g 1 ))
= T r((g)(h)(g 1 ))
= T r((h))
= (h).

(4.61)

76

CHAPTER 4. GROUP THEORY

where we have used the cyclicty of the trace. Any function which is invariant over the
conjugacy class is called a class function. If is a unitary representation then
(g 1 ) = T r((g 1 )) = T r((g)1 ) = T r((g) ) = (g) = (g).

(4.62)

If 1 and 2 are equivalent representations (with intertwinging map T ) then they have
the same characters as
1 (g) = T r(1 (g))

(4.63)

= T r(T 1 2 (g)T )
= T r(2 (g))
= 2 (g)
and conversely if two representations of G have the same characters for all g G then
they are equivalent representations.

4.4.2

The Direct Sum and Tensor Product

Given two representations 1 : G GL(V1 ) and 2 : G GL(V2 ) of a group G one


can form two important representations:
1. The direct sum, 1 2 : G GL(V1 V2 ) such that (1 2 )(g) = 1 (g)
2 (g). This is a homomorphism as
(1 2 )(g1 g2 ) =
=
=

1 (g1 g2 )
0
0
2 (g1 g2 )

!
(4.64)

!
1 (g1 )1 (g2 )
0
0
2 (g1 )2 (g2 )
!
!
1 (g1 )
0
1 (g2 )
0
0
2 (g1 )
0
2 (g2 )

= (1 2 )(g1 )(1 2 )(g2 )


If V1 is the vector space with basis {e1 , e2 , . . . en } and V2 is the vector space with
basis {f1 , f2 , . . . fm } then V1 V2 has the basis {e1 , e2 , . . . en , f1 , f2 , . . . fm }, i.e. we
can write this using the direct product as V1 V2 {(v1 , v2 ) V1 V2 |v1 V1 , v2
V2 } with vector addition and scalar mulitplication acting as
(v1 , v2 ) + (v10 , v20 ) = (v1 + v10 , v2 + v20 )

(4.65)

a(v1 , v2 ) = (av1 , av2 )


where v1 , v10 V1 , v2 , v20 V2 and a is a constant. In this notation the basis of
V1 V2 is
{(e1 , 0), (e2 , 0), . . . (en , 0), (0, f1 ), (0, f2 ), . . . (0, fm )}
= {e1 , e2 , . . . en , f1 , f2 , . . . fm }.
Hence Dim(V1 V2 ) = Dim(V1 ) + Dim(V2 ) = n + m.

4.4. SOME REPRESENTATION THEORY

77

Example Let G be Z2 {e, g|e = Id, g 2 = e} with V1 = R1 and V2 = R2 so that


1 (g) = 1

1 (e) = 1,
1 0
0 1

2 (e) =
now V1 V3 = R3 with

!
,

1 0
0 1

2 (g) =

1 0 0

(1 2 )(e) = 0 1 0 ,
0 0 1

(4.66)
!

1 0
0

2 (g) = 0 1 0 .
0
0 1

(4.67)

2. The tensor product, 1 2 : G GL(V1 V2 ) such that (1 2 )(g) =


1 (g) 2 (g). The tensor product is the most general blinear product and so its
defintion may seem obscure at first sight. This is a homomorphism as
(1 2 )(g1 g2 ) = 1 (g1 g2 ) 2 (g1 g2 )

(4.68)

= 1 (g1 )1 (g2 ) 2 (g1 )2 (g2 )


= (1 2 )(g1 )(1 (g2 ) 2 (g2 ))
= (1 2 )(g1 )(1 2 )(g2 )
If V1 is the vector space with basis {e1 , e2 , . . . en } and V2 is the vector space with
basis {f1 , f2 , . . . fm } then V1 V2 has the basis
{e1 f1 , e1 f2 , . . . e1 fm , e2 f1 , e2 f2 , . . . e2 fm , . . . , en f1 , en f2 , . . . en fm }
i.e. the basis is {ei ej |i = 1, 2, . . . Dim(V1 ), j = 1, 2, . . . Dim(V2 )}. Hence
Dim(V1 V2 ) = Dim(V1 ) Dim(V2 ) = nm. The tensor product of two vector spaces V and W satisfies
(v1 + v2 ) w1 = v1 w1 + v2 w1

(4.69)

v1 (w1 + w2 ) = v1 w1 + v1 w2
av w = v aw = a(v w)
where v, v1 , v2 V , w, w1 , w2 W and a is a constant.
Example As for the direct sum consider the example where G is Z2 and 1 and
2 are the representations given explicitly in equation (4.66) above. Then the
basis elements for V1 V2 are {e1 f1 , e1 f2 } where e1 is the basis vector for R
and {f1 , f2 } are the basis vectors for R2 and the tensor product representation is
!
!
1 0
1 0
(1 2 )(e) = 1
,
(1 2 )(g) = 1
.
0 1
0 1
These act on R R2 by
(1 2 )(e)(v1 v2 ) = v1 v2 ,
(1 2 )(g)(v1 v2 ) = v1

(4.70)
1 0
0 1

!
v2 = v1 v2

78

CHAPTER 4. GROUP THEORY


which is the trivial representation acting on the two-dimensional vector space
R R2
= R2 . A slightly less trivial example involves the representation 3 of Z2
on R2 given by
!
!
1 0
1 0
3 (e) =
,
3 (g) =
.
(4.71)
0 1
0 1
The tensor product representation 1 3 acts on R2 as
!
1 0
(1 3 )(e) = 1
,
(1 3 )(g) = 1
0 1

1 0
0 1

these act on R R2 by
(1 3 )(e)(v1 v2 ) = v1 v2 ,
(1 3 )(g)(v1 v2 ) = v1

(4.72)
1 0
0 1

!
v2 = v1

1 0
0 1

!
v2

which is non-trivial.
One may introduce scalar products on the direct sum and tensor product spaces:
< v1 w1 , v2 w2 >V W < v1 , v2 >V + < w1 , w2 >W

(4.73)

< v1 w1 , v2 w2 >V W < v1 , v2 >V < w1 , w2 >W


as well as the character function:
1 2 (g) = T r(1 (g)) + T r(2 (g))

(4.74)

1 2 (g) = T rV (1 (g))T rW (2 (g)).


One might think that all the information about these product representations is contained already in V and W . However consider the endomorphisms (the homomorphisms
from a vector space to itself5 ) of V W , denoted End(V W ). Any A End(V W )
may be written
!
AV V AV W
A=
(4.75)
AW V AW W
where AV V : V V , AV W : V W etc. that is AV V End(V ) and AW W
EndW do not generate all the endomorphisms of V W (note that if Dim(V ) = n
and Dim(W ) = m then Dim(End(V W )) = (n + m)2 n2 + m2 = Dim(End(V )) +
Dim(End(W )). On the other hand the endomorphisms of V and W do generate all the
endomorphisms of the tensor product space V W as Dim(End(V W )) = n2 m2 =
Dim(End(V ))Dim(End(W )).
The direct sum never gives an irreducible representation, having two non-trivial
subspaces V 0
= V and 0 W
= W . It is less straightforward with the tensor
product to discover whether or not it gives an irreducible representation. Frequently
one is interested in decomposing the tensor product into direct sums of irreducible subrepresentations:
V W = U1 U2 . . . Un .
(4.76)
5

If an endomorphism is invertible then the map is an automorphism.

4.4. SOME REPRESENTATION THEORY

79

To do this one must find an endomorphism (a change of basis) of V W such that


1 (g)
2 (g) . . .
n (g)
T (1 2 (g))T 1 =

(4.77)

where T End(V W ). The decomposition


(G) (G) =

i (G)
ai

(4.78)

is called the Clebsch-Gordan decomposition. This is not always possible. One can
achieve this decomposition for one example central to quantum mechanics G = SU (2).
It is a fact (which we will not prove here) that SU (2) has only one unitary irreducible representation for each vector space of dimension Dim(V ) n + 1. This n + 1-dimensional
representation is isomorphic to a representation of the irreducible representations of
SO(3) associated to angular momentum in quantum mechanics due to the group isomor(2)
phism SU
Z)2 = SO(3) which will be shown explicitly later in this chapter. In summary
representations of SU (2) may be labelled by Dim(V ) = n + 1 and the equivalent SO(3)
representation is labelled by spin j. In fact j = n2 hence as n Z+ then j may take
half-integer (fermions) as well as integer (bosons) values. When j = 0 then n = 0 so
Dim(V ) = 1 is the trivial representation of SU (2); j = 12 then n = 1 and Dim(V ) = 2
giving the fundamental or standard representation of SU (2) as a two-by-two matrix;
and when j = 1 then n = 2 giving Dim(V ) = 3 is called the adjoint representation of SU (2). The Clebsch-Gordan decomposition rewrites the tensor product of two
SU (2) irreducible representations [j1 ] and [j2 ], labelled using the spin, as a direct sum
of irreducible representations:
[j1 ] [j2 ] = [j1 + j2 ] [j1 + j2 1] . . . [|j1 j2 |].

(4.79)

Some simple examples are


[0] [j] = [j]

(4.80)

One can quickly check that the tensor product has the same dimension as the direct sum.
Note that Dim[j] = Dim(V ) = n + 1 = 2j + 1 so that Dim([0] [j]) = 1 (2j + 1) =
Dim[j]. Another example short example is
1
1
1
[ ] [j] = [ + j] [ + j]
2
2
2

(4.81)

where we have Dim([ 21 ] [j]) = (2 12 + 1)(2j + 1) = 4j + 2 while the direct sum of


representations has Dim([ 21 +j][ 21 +j]) = (2( 12 +j)+1)+(2( 21 +j)+1) = 4j+2. Notice
that the tensor products of the fundamental representation [ 21 ] with itself generates
all the other irreducible representations of SU (2) that is

Dimensions:

Dimensions:

1
1
[ ] [ ] = [1] [0]
2
2
2 2 = 3 + 1
1
3
1
[1] [ ] = [ ] [ ]
2
2
2
3 2 = 4 + 2.

(4.82)

80

CHAPTER 4. GROUP THEORY

For other groups the decomposition theory is more involved. To work out the ClebschGordan coefficients one must know the inequivalent irreducible representations of the
group, its conjugacy classes and its character table. If a representation of a group
itself may be rewritten as a sum of representations it is by definition not an irreducible
representation - it is called a reducible representation.
Definition A representation : G GL(Vn Vm ) on a vector space of dimension
n + m is reducible if (g) has the form
(g) =

A(g) C(g)
0
B(g)

!
gG

(4.83)

where A is an n n matrix, B is an m m matrix, C is an n m matrix and 0 is the


empty m n matrix.
Notice that
A(g) C(g)
0
B(g)

vn
0m

!
=

A(g)vn
0m

!
(4.84)

where 0m Vm is the m-dimensional zero vector and vn Vn is an n-dimensional vector.


So we see that Vn is an invariant subspace of and so is reducible. Furthermore if
we multiply two such matrices together we have
!
!
A(g1 ) C(g1 )
A(g2 ) C(g2 )
(g1 )(g2 ) =
(4.85)
0
B(g1 )
0
B(g2 )
!
A(g1 )A(g2 ) A(g1 )C(g2 ) + C(g1 )B(g2 )
=
0
B(g1 )B(g2 )
= (g1 g2 )
=

A(g1 g2 ) C(g1 g2 )
0
B(g1 g2 )

hence we see that A(g1 g2 ) = A(g1 )A(g2 ) and A(g) is representation of G on the
invariant subspace Vn . For finite groups the matrix C is equivalent to the null matrix
(by Maschkes theorem all reducible representations of a finite group are completely
reducible). In this case the representation is said to be completely reducible:
(g) = A(g) B(g).

(4.86)

It does not follow that A(G) and B(G) are themselves irreducible, but if they are not
then the process may be repeated until (G) is expressed as a direct sum of irreducible
representations.

4.5

Lie Groups

Many of the groups we have met so far have been parameterised by discrete variables
e.g. {e, g, g 2 } for Z3 but frequently a number of group actions we have met, e.g. So(n),
SU (n), U (n), Sp(n), have been described by continuous parameters. For example SO(2)

4.5. LIE GROUPS

81

describing rotations of S 1 is parameterised by which takes values in the continuous


set [0, 2) and for each value of we find an element of SO(2):
!
cos() sin()
R() =
(4.87)
sin() cos()
(one may check that R()RT () = I and Det(R()) = 1). R() is a two-dimensional
representation of the abstract group SO(2). We may check that is a faithful representation of SO(2): R(0) = I and the kernel of the representation is trivial for [0, 2).
Incidentally the two-dimensional representation is
reducible
! irreducible over
! R but it is !
i
z
x + iy
re
over C. Over C we take as column vector
=
=
and an

z
x iy
rei
SO(2) rotation takes
!
!
!
z
z0
rei(+)

=
(4.88)
z
z 0
rei(+)
that is
R(, C) =

ei
0
i
0 e

!
(4.89)

There is a qualitative difference when we move from R to C as this matrix is block diagonal and hence reducible into two one-dimensional complex representations of U (1)
=
SO(2). Geometrically the parameter defining the rotation parameterises the circle S 1 .
For other continuous groups we may also make an identification with a geometry e.g.
R\0 under multiplication is associated with two open half-lines
(the real line with zero
!

|||2 + ||2 = 1} which as a
removed), a second example is SU (2) = {

set parameterises S 3 . The proper notion for the geometric setting is the manifold and
each group discussed above is a manifold. Any geometri space one can imagine can be
embedded in some Euclidean Rn as a surface of some dimensions less than or equal to n.
For example the circle S 1 R2 and in general S n1 Rn . No matter how extraordinary
the curvature of the surface (so long as it remains well-defined) a manifold will have the
appearance of being a Euclidean space at a sufficiently local scale. Consider S 1 R2
sufficiently close to a point on S 1 , the segment of S 1 appears identical to R1 . The geometry of a manifold is found by piecing together these open and locally-Euclidean stes.
Each open neighbourhood is called a chart and is equipped with a map that converts
points p M , where M is the manifold, to local Euclidean coordinates. Using these local coordinates one can carry out all the usual mathematics in Rn . The global structure
of a manifold is defined by how these open sets are glued together. Since a manifold is a
very well-defined structure these transition functions, encoding the gluing, are smooth.
The study of manifolds is the beginning of learning about differential geometry.
Definition A Lie group is a differentiable manifold G which is also a group such that
the group product G G G and the inverse map g g 1 are differentiable.
We will restrict our interest to matrix Lie groups in this foundational course, these are
those Lie groups which are written as matrices e.g. SL(n, F), SO(n), SU (n), Sp(n).

82

CHAPTER 4. GROUP THEORY

Definition A matrix Lie group G is connected if given any two matrices A and B in G,
there exists a continuous path A(t) with 0 t 1 such that A(0) = A and A(1) = B.
A matrix Lie group which is not connected can be decomposed into several connected
pieces.
Theorem 4.5.1. If G is a matrix Lie group then the component of G connected to the
identity is a subgroup of G. It is denoted G0 .
Proof. Let A(t), B(t) G0 such that A(0) = I, A(1) = A, B(0) = I and B(1) = B are
continuous paths. Then A(t)B(t) is a continuous path from I to AB. Hence G0 is closed
and evidently I G0 . Also A1 (t) = A(t) is a continuous path from I to A1 G0
defined by A(t)A(t) = I.
The groups GL(n, C), SL(nC, SL(n, R), SO(n), U (n) and SU (n) are connected
groups. While GL(n, R and O(n) are not connected. For example one can convince
oneself that O(n) is not connected by supposing that A, B O(n) such that Det(A) =
+1 and Det(B) = 1. Then any path A(t) such that A(0) = A and A(1) = B would
give a continuous function Det(A(t)) passing from 1 to 1. Since A O(n) satisfy
Det(A) = 1 then no such set of matrices forming a continuous path from A to B exist.
A similar argument can be made for GL(n, R) splitting it into components with Det > 0
and Det < 0.

4.6

Lie Algebras: Infinitesimal Generators

Let us now return to thinking like physicists. From this perspective we would like
to think of Lie groups as continuous actions that can be realized by an infinitesimal
transformation
g = 1 + iT + . . . ,

(4.90)

where the ellipsis denotes higher order terms in  << 1. The factor of i is for later
convenience. Here we think of g in terms of some representation. Thus we really should
write
(g) = 1 + iT + . . . ,

(4.91)

so that T is a matrix and 1 is the identity matrix. However as physicists we will forget
that we are talking about representations since what we say applies to any representation. In general g is subject to some restriction such as unitarity. Thus the set of T s
that one finds is restricted. This defines the Lie algebra Lie(G): its the set of operators
T that are required to generate the group infinitesimally.
There is an analogous notion of a representation of the Lie algebra to that of a
representation of a group. definition a representation of a Lie-algebra is a map :
Lie(G) GL(V ) such that [A, B] = [(A), (B)]
Let us look at an example: U (N ) = {N N complex matrices g|g = g 1 }. This is
a group since 1 U (N ). By construction if g U (N ) then g 1 U (N ) as (g 1 ) = g.
Finally if g1 , g2 U (N ) then (g1 g2 )1 = g21 g11 = g2 g1 = (g1 g2 ) . What is the condition

4.6. LIE ALGEBRAS: INFINITESIMAL GENERATORS

83

1 AN = su(N + 1) {M = (N + 1) (N + 1) matrix|M = M , trM = 0}


2 BN = so(2N + 1) {M = (2N + 1) (2N + 1) matrix|M T = M }
3
4
5
6
7

CN = sp(2N )
DN = so(2N )
E6 , E7 , E8
F4
G2

{J = 2N 2N matrix|J + J = 0} , =
{M = (2N ) (2N ) matrix|M

0
1N N

1N N
0

= M }

Table 4.6.1: The classification of semi-simple Lie-algebras


that g = 1 + iT U (N )? Well first note that the inverse of g is g 1 = 1 iT since
gg 1 = (1 + iT )(1 iT ) = 1 + . . .
g 1 g = (1 iT )(1 + iT ) = 1 + . . . .
Thus for g U (N ) we require that
g = g 1

1 iT = 1 iT

T = T

(4.92)

So the Lie algebra Lie(G) is the space of Hermitian matrices.


As we noted above a group always acts on itself via conjugation. Thus if we have
g G and consider an infinitesimal conjugation by h = 1 + iU . Thus conjugation
amounts to
g hgh1
= (1 + iU )g(1 iU )
= g + i(U g gU ) + . . .
= g + i[U, g] + . . . .

(4.93)

If we further expand g = 1 + iT the group action induces a commutator structure on


the Lie algebra since i[U, T ] Lie(G). Thus if we have a basis Ta of Lie(G) then there
must exist constants, called structure constants, such that
[Ta , Tb ] = ifab c Tc .

(4.94)

Since we are considering matrices the product is automatically associative and a


simple expansions shows that the brackets satisfy the Jacobi identity:
[A, [B, C]] + [B, [C, A]] + [C, [A, B]] = 0

(4.95)

More generally (i.e. more abstractly) one must require this in addition. In other words
a Lie algebra is a vector space with an anti-symmetric product [, ] that satisfies the
Jacobi identity. It turns out that the tangent space to a Lie group at the identity is a
Lie algebra.
There is a classification of semi-simple Lie algebras, that is to say ones that are
not direct sums of smaller Lie algebras. There are four infinite families along with five
exceptional cases. These are listed in table (4.6)

84

CHAPTER 4. GROUP THEORY

You are presumably familiar with su(N + 1), so(2N + 1) and so(2N ) that arise from
the groups SU (N + 1), SO(2N + 1) and SO(2N ). The symplectic algebra sp(2N ) arises,
for example, in Hamiltonian dynamics where the vector space R2N is the phase space
that comes from combining (qi , pi ) into a single 2N vector. The matrix then arises
analogously to an inner product through {qi , pj } = {pj , qi } = ij and is known as a
symplectic product. Unfortunately the exceptional Lie algebras E6 , E7 , E8 , F4 , G2 do
not have a simple definition that we can give here.
What is the number associated to each Lie algebra? That is called the rank and is
defined as the dimension of the Cartan subalgebra. What is the Cartan subalgebra?
It is the maximal subspace of the Lie algebra that is spanned by mutually commuting
generators.
Let us not continue with generalities and simply deal in detail with the simplest
Lie groups: SU (2) and SO(3) and their Lie algebras su(2) and so(3). We will see
that they have the same Lie algebra but they are not equal as groups. Rather there
is a 2-1 homeomorphism from SU (2) SO(3). The reason that two different groups
can have the same Lie-algebra is because the Lie algebra only encodes infinitesimal
transformations and the finite transformations can differ.

4.7

Everything you wanted to know about SU (2) and SO(3)


but were afraid to ask

First we start with SU (2): Definition: SU (2) = {2 2 complex matrices g|g =


g 1 and det g = 1}.
It is natural to think of this as also defining a representation of SU (2) in terms of
its action on vectors in C2 . But that would be getting ahead of ourselves there are in
fact infinitely many representations that we will construct later.
Next we compute the Lie algebras. Clearly SU (2) U (2) and hence it we write
g = i + iT we require T = T . We also have an extra condition:
det g = det(1 + iT ) = 1 + itr(T ) + . . .

(4.96)

Thus we require that tr(T ) = 0 in addition to T = T . The Pauli matrices form a


natural basis for su(2):
!
!
!
0 1
0 i
1 0
1 =
2 =
3 =
(4.97)
1 0
i 0
0 1
Thus any complex, traceless, Hermitian, 2 2 matrix is a real linear combination of the
i :
1
i = i .
(4.98)
T su(2)

T = i i
2
The appearance of 1/2 will be apparent later. A little calculation shows that
h i
k
j
i
,
= iijk
.
(4.99)
2 2
2
To obtain group elements we exponentiate:
g = ei

i /2
i

(4.100)

4.7. EVERYTHING YOU WANTED TO KNOW ABOUT SU (2) AND SO(3) BUT WERE AFRAID TO ASK
This is defined as a infinite sum but it always converges. If we write
|| =

p
(1 )2 + (2 )2 + (3 )2

= /||
n

then an adaptation of the famous ei = cos + i sin formula gives


 
 
||
||
+ i
n sin
.
g = cos
2
2

(4.101)

(4.102)

which still satisfies I 2 = 1.


In particular all we have done is replaced i by I = n
Here we see some global structure: || [0, 4) covers all of SU (2).
Now let us turn to SO(3). Definition: SO(3) = {3 3 real matrices g|g T =
g 1 and det g = 1}.
In our conventions with g = 1 + iT we see that T is pure imaginary and antisymmetric T T = T . A natural basis is

0 0 0
0 0 i
0 i 0

L1 = 0 0 i , L2 = 0 0 0 and L3 = i 0 0 . (4.103)
0 i 0
i 0 0
0 0 0
so that
T =L

(4.104)

To find the group element we exponentiate again:


g = eiL

(4.105)

this does not have a simple expression analogous to the one we found for SU (2). However
we observe that since T T = T and T is pure imaginary we have that T is Hermitian.
The eigenvalues of T come in pairs differing by a sign. To see this we look at the
characteristic polynomial:
0 = det(T 1) = det((T 1)T ) = det(T 1)

0 = det(T + 1)

(4.106)

Thus in odd dimensions there must be a zero eigenvalue. The corresponding eigenvector
is invariant under the rotation. Thus in three-dimensions all rotations are the more
familiar two-dimensional rotations about some fixed axis. Let us fix the rotation to be
about the x3 axis so that

0
3 0
cos 3 sin 3 0
3

g = ei L3 = exp 3 0 0 = sin 3 cos 3 0 .


(4.107)
0
0 0
0
0
1
Thus we see that || [0, 2) covers the group.

4.7.1

SO(3) = SU (2)/Z2

Let us look at the Lie-algebra so(3). By explicit calculation we can see that
[Li , Lj ] = iijk Lk .

(4.108)

86

CHAPTER 4. GROUP THEORY

This is the same as su(2). Thus su(2)


= so(3).
Given the isomorphism between the two Lie algebras we may wonder whether the two
groups SU (2) and SO(3) are isomorphic. To do this we look for a group homomorphism
: SU (2) SO(3) derived from the Lie algebra isomorphism ( 2i ) = Li and given by
i||
n
)) = exp (i||
n L)
(4.109)
2
where L is the vector whose components are the matrices Li which form a basis for the
Lie algebra of SO(3). The matrix exp (i||
n L) is a rotation about the axis parallel
with n
of angle ||. While we know that
(exp (

exp (

i||
||
||
n
) = cos ( )I + i
n sin ( )
2
2
2

(4.110)

which covers the group elements of SU (2) when 0 ||


2 < 2 i.e. when 0 < 4. On
the other hand this range of alpha corresponds to roatations with angle 0 < 4 in
SO(3) under the homomorphism. That is the homomorphism gives a double-covering of
SO(3). The kernel of the homomorphism is non-trivial. Due to the geometrical intuition
we have of the rotations in SO(3) we know that a rotation by 2 is the identity element,
thus we quickly identify the kernel of to be where
|| = 0, 2 .

(4.111)

Although these are trivial rotations in SO(3) from (4.110) we see that
i||
n
) = I, I .
(4.112)
2
This is the centre of SU (2), namely the set of elements in SU (2) that commute with
all other elements. Thus the kernel of is {I, I}
= Z2 . So by the first isomorphism
theorem we have
SU (2)
(4.113)
= SO(3).
Z2
Let us summarise our observations. We commenced with an isomorphism between
representations of two Lie algebras and we wondered whether it extended by the exponential map to an isomomorphism between the representations of the Lie groups.
However the identification of the group representation (which is informed by the global
group structure) with the exponentiation of the Lie algebra representation is only possible for a certain class of groups. Such groups are called simply-connected and in addition
to being connected, every closed loop on them may be continuously shrunk to a point.
In this class of groups one can make deductions about the global group structure from
the local knowledge of the Lie algebra. We will not discuss simple-connectedness in any
detail here, but in the example above both SU (2) and SO(3) are connected but only
SU (2) is simply-connected. Hence for SU (2) we may identify the representations of the
group with those of the algebra but for SO(3) we may not. A Lie algebra homomorphism does not in general give a Lie group homomorphism. However if G is a connected
called the universal
group then there always exists a related simply-connected group G
covering group for which the Lie algebra homomorphism does extend to a Lie group
homomorphism. Above we see that SU (2) is the universal covering group of SO(3).
The double cover of the group SO(p, q) is the universal covering group of SO(p, q) and
is called Spin(p, q), hence here we see that Spin(3)
= SU (2).
exp (

4.7. EVERYTHING YOU WANTED TO KNOW ABOUT SU (2) AND SO(3) BUT WERE AFRAID TO ASK

4.7.2

Representations

Next we wish to construct all finite dimensional unitary representations of su(2). Exponentiation lifts these to representations of SU (2). We can then ask which ones lift to
representations of SO(3). To do this we will proceed as we did above for the harmonic
oscillator.
Let us suppose that we are given matrices Ji that satisfy [Ji , Jj ] = iijk Jk . Since we
want a unitary representation we assume that Ji = J i but we do not know anything
else yet and we certainly dont assume that they are 2 2 or 3 3 matrices as above.
First note that
J 2 = (J1 )2 + (J2 )2 + (J3 )2 ,
(4.114)
is a Casimir. That means it commutes with all the generators
X
[J 2 , Ji ] =
[Jj2 , Ji ]
j

Jj [Jj , Ji ] + [Jj , Ji ]Jj

Jj jik Jk + jik Jk Jj

jk

jik (Jj Jk + Jk Jj )

jk

=0.

(4.115)

From Schurs lemma this means that J 2 = I in any irreducible representation.


Since the Ji are Hermitian we can chose to diagonalise one, but only one since su(2)
has rank 1, say J3 . Thus the representation has a basis of states labelled by eigenvalues
of J3 :
J3 |mi = m|mi .
(4.116)
In analogy to the harmonic oscillator we swap J1 and J2 for operators

J+
= J .

J = J1 iJ2

(4.117)

Notice that
[J3 , J ] = [J3 , J1 J2 ]
= [J3 , J1 ] [J3 , J2 ]
= iJ2 J1
= J .

(4.118)

We can therefore use J to raise and lower the eigenvalue of J3 :


J3 (J |mi) = ([J3 , J ] + J J3 )|mi
= (J + mJ )|mi
= (m 1)(J |mi)

(4.119)

Therefore we have
J+ |mi = cm |m + 1i

J |mi = dm |m 1i ,

(4.120)

88

CHAPTER 4. GROUP THEORY

where the constants cm and dm are chosen to ensure that the states are normalized
(we are assuming for simplicity that the eigenspaces of J3 are one-dimensional - we will
return to this shortly).
To calculate cm we evaluate

|cm |2 hm + 1|m + 1i = hm|J+


J+ |mi

= hm|J J+ |mi
= hm|(J1 iJ2 )(J1 + iJ2 )|mi
= hm|J12 + J22 + i[J1 , J2 ]|mi
= hm|J 2 J32 J3 |mi
= ( m2 m)hm|mi
Thus if hm|mi = hm + 1|m + 1i = 1 we find that
p
cm = m2 m .

(4.121)

(4.122)

Similarly for dm :

|dm |2 hm 1|m 1i = hm|J


J |mi

= hm|J+ J |mi
= hm|(J1 + iJ2 )(J1 iJ2 )|mi
= hm|J12 + J22 i[J1 , J2 ]|mi
= hm|J 2 J32 + J3 |mi
= ( m2 + m)hm|mi

(4.123)

p
m2 + m .

(4.124)

So that
dm =

Thus we see that any irrep of su(2) is labelled by and has states with J3 eigenvalues
m, m1, m2m, . . .. If we look for finite dimensional representations then there must be
a highest value of J3 -eigenvalue mh and lowest value ml . Furthermore the corresponding
states must satisfy
J+ |mh i = 0

J |ml i = 0

(4.125)

This in turn requires that cmh = dml = 0:


mh (mh + 1) = 0

and

ml (ml 1) = 0 .

(4.126)

This implies that


= mh (mh + 1)

(4.127)

mh (mh + 1) = ml (ml 1) .

(4.128)

and also that

This is a quadratic equation for ml as a function of mh and hence has two solutions.
Simple inspection tells us that
ml mh

or

ml = mh + 1 .

(4.129)

4.7. EVERYTHING YOU WANTED TO KNOW ABOUT SU (2) AND SO(3) BUT WERE AFRAID TO ASK
The second solution is impossible since ml mh and hence the spectrum of J3 eigenvalues is:
mh , mh 1, ..., mh + 1, mh ,

(4.130)

with a single state assigned to each eigenvalue. Furthermore there are 2mh + 1 such
eigenvalues and hence the representation has dimension 2mh + 1. This must be an
integer so we learn that
2mh = 0, 1, 2, 3.... .

(4.131)

We return to the issue about whether or not the eigenspaces |, mi can be more
than one-dimensional. If space of eigenvalues with m = mh is N -dimensional then
when we act with J we obtain N -dimensional eigenspaces for each eigenvalue m. This
would lead to a reducible representation where one could simply take one-dimensional
subspaces of each eigenspace. Let us then suppose that there is only a one-dimensional
eigenspace for m = mh , spanned by |, mh i. It is then clear that acting with J produces
all states and each eigenspace of J3 has only a one-dimensional subspace spanned by
|, mi (J )n |, mh i for some n = 0, 1, ..., 2 + 1.
In summary, and changing notation slightly to match the norm, we have obtained a
(2l+1)-dimensional unitary representation determined by any l = 0, 12 , 1, 32 , ... having the
Casimir J 2 = l(l+1). The states can be labelled by |l, mi where m = l, l+1, ..., l1, l.
Let us look at some examples.
l = 0: Here we have just one state |0, 0i and the matrices Ji act trivially. This is the
trivial representation.
l = 1/2: Here we have 2 states:
|1/2, 1/2i =

1
0

!
|1/2, 1/2i =

0
1

!
.

(4.132)

By construction J3 is diagonal:
J3 =

1/2
0
0 1/2

!
.

(4.133)

We can determine J+ through


J+ |1/2, 1/2i = 0

J+ |1/2, 1/2i =

p
3/4 1/4 + 1/2|1/2, 1/2i = |1/2i

(4.134)

so that
J+ =

0 1
0 0

!
.

(4.135)

And can determine J through


J |1/2, 1/2i =

p
3/4 1/4 + 1/2|1/2, 1/2i

J |1/2, 1/2i = 0

(4.136)

so that
J =

0 0
1 0

!
.

(4.137)

90

CHAPTER 4. GROUP THEORY

Or alternatively
0 1
1 0

1
1
J1 = (J+ + J ) =
2
2
1
1
J2 = (J+ J ) =
2i
2

0 i
i 0

!
(4.138)

Thus we have recovered the Pauli matrices.


Problem: Obtain the 3 3 Ji matrices in the j = 1 representation.
To obtain representations of SU (2) we simply exponentiate these matrices as before.
Which of these representations are also representations of SO(3)? Well these will be
the representations for which the centre of SU (2) is mapped to the identity. Since the
non-trivial part of the centre corresponds to || = 2 we require, for example, that
e2iJ3 = I

(4.139)

This will be the case if the J3 eigenvalues are all integers and this in turn means that
l Z.
The l = 1/2, 1 representations are easy to visualize. They are known as the spinor
(or sometimes fundamental) and vector representations respectively. Although one may
ask which representation of SU (2) corresponds to l = 3. The hint is that that l = 3
is also the dimension of su(2). Any Lie algebra always admits the so-called adjoint
representation where the lie algebra acts on itself. Indeed this is the Lie algebra version
of conjugation in the group:
g hgh1

g g + i[T, g]

(4.140)

if h = 1 + T . Thus in a Lie algebra we always have the adjoint representation:


adT (X) = i[T, X] .

(4.141)

The Jacobi identify ensures that this is indeed a representation as


adi[T1 ,T2 ] X = [[T1 , T2 ], X]
= [[T2 , X], T1 ] + [[X, T1 ], T2 ]
= [T1 , [T2 , X]] + [T2 , [T1 , X]]
= adT1 (adT2 (X)) adT2 (adT1 (X))

(4.142)

The dimension of this representation is therefore the dimension of the Lie-algebra and
hence, for su(2) corresponds to l = 3. Here it is also apparent why the centre of SU (2)
acts trivially and hence also leads to a representation of SO(3).
More general representation arise by considering tensors T1 ,..,n over C2 for su(2)
or R3 for SO(3). The group elements act on each of the i indices in the natural
way. In general this does not give an irreducible representation. For larger algebras
such as SU (N ) and SO(N ) taking T1 ,...,n to be totally anti-symmetric does lead to
an irreducible representation. So does totally symmetric and traceless on any pair of
indices.

4.7. EVERYTHING YOU WANTED TO KNOW ABOUT SU (2) AND SO(3) BUT WERE AFRAID TO ASK

4.7.3

Representations Revisited

How does this work for more general Lie algebras. Let us re-do it using a slightly
different notation. su(2) consists of three generators which we now denote by H, E
and E that satisfy
[H, E ] = E ,

[H, E ] = E

(4.143)

Thus we should think of H as J3 and E as J . However it is also common to rescale

the generators so that = 2. In terms of Pauli matrices this means that we choose
1
Ji = i .
2

(4.144)

tr(Ji Jj ) = ij ,

(4.145)

This has the nice normalization that

but at the end of the day it is just another choice of basis and is equivalent to any other
choice. The corresponding J3 eigenvalues are no longer half-integer but rather of the

form n/ 2 with n Z and the representation is labelled by nh 2, where nh / 2 is the


largest J3 eigenvalue that appears. It is called the highest weight and the representation
is known as a highest-weight representation. One can also define a similar notion of
lowest weight and lowest-weight representation.
What happens in a general Lie algebra? These have rank r > 1 and hence one can
find r simultaneously diagonal matrices H1 , ..., Hr that commute with each other. We
assemble these into a vector H. The rest of the generators are split into positive and
negative root generators E and E which satisfy
[H, E ] = E ,

[H, E ] = E .

(4.146)

Here is an r-dimensional vector and is known as a root, each Lie algebra will have a
finite number of such roots. Furthermore it is possible to split the set of roots in a Liealgebra into positive and negative roots such that any root is either positive or negative.
This choice is somewhat arbitrary but different choices do not affect the answers in the
end. So for us is a positive root and is a negative root.
Furthermore the space of positive roots can be spanned by a basis of r so-called
simple roots. This means that all positive roots can be written as
= n1 1 + . . . + nr r ,

(4.147)

with ni non-negative integers.


Let us mention some definitions and a theorem you may have heard of: The Cartan
matrix is
i j
Kij = 2
.
(4.148)
i i
A Lie algebra is called simply laced if all simple roots have the same length and usually
one takes = 2. For the record the A, D, E series of Lie-algebras are simply laced
whereas the B, C, F, G series are not.

92

CHAPTER 4. GROUP THEORY

Theorem (not proven here): The set of all Lie-algebras is completely determined
and classified by the Cartan matrix.
Let us now look at representations. States in a representation are now labelled by a
vector w known as a weight:
(4.149)
H|wi = w|wi .
The positive root generators play the role of raising the weight
E |wi = c |w + i ,

(4.150)

whereas the negative root generators lower the weight


E |wi = c |w i .

(4.151)

You might wonder what is meant by an ordering of weights which are vectors in a higherdimensional space. By defining a notion of positive root one can then say that for two
weights that appear in a representation, w1 > w2 iff w1 w2 is a positive root. And
similarly w1 < w2 if their difference is a negative root. In general the space of possible
weights is infinite and forms a lattice, although of course in any given finite-dimensional
representation only a finite number of weights appear.
One then has two theorems for unitary finite dimensional representations (not proven
here). The first is:
Theorem: The set of possible weights is dual to the set of roots in the sense that
w Z .

(4.152)

This motivates two definitions: The fundamental weights w1 , ..., wr satisfy


i wj = ij .

(4.153)

where i are the simple roots. A weight w is called dominant iff


w = ni w 1 + . . . + nr w r .

(4.154)

with ni non-negative integers.


And we now have the second theorem:
Theorem: The set finite-dimensional irreducible representations is in one-to-one
correspondence with the set of dominant weights. In particular the highest weight
of a given representation is a dominant weight and every dominant weight defines an
irreducible representation with itself as the highest weight.
It follows that the highest weight state is anhilated by the action of all positive root
generators. One then obtains the remaining states by acting with the negative root
generators. This is a well-defined process that by the above theorem always ends after
a finite number of states.

Returning to su(2) the simple and only root is 2 and so the fundamental weight is

1/ 2. The dominant weights are just n/ 2 with n = 1, 2, .... Each of these defines a
irreducible representation with states:

|n/ 2, n/ 2i, |n/ 2, n/ 2 2i, , . . . |n/ 2, n/ 2i


(4.155)

since now the negative root generator E lowers the H eigenvalue by 2.

4.8. THE INVARIANCE OF PHYSICAL LAW

4.8

93

The Invariance of Physical Law

Let us now see how group theory arises in physical laws. At least in two fundamental notions: translational invariance and relativity. There are many other important examples
of groups and symmetries in physics, the Standard Model is built on various symmetry
principles. But let us just focus on these which in effect determine the structure of
spacetime.

4.8.1

Translations

We have seen that there is a natural operator for momentum an energy in quantum
mechanics:

=i
E
(4.156)
pi = i i
x
t
As luck would have it these form a nice relativistic 4-vector:
p = i

(4.157)

where t = x0 and c = 1. As such these operators form a infinite dimensional representation of an abelian algebra:
[
p , p ] = 0 .

(4.158)

As an algebra this is not so interesting but clearly it plays an important role in physics.
We have dropped the ~, or more precisely taken ~ = 1 because these operators also
appears as the generator of translations even in a classical field theory. To see this
consider an infinitesimal shift x x +  . Any function, not just a wavefunction, will
then change according to
(x  ) =  + . . .
= + i p + . . .

(4.159)

The finite group action is then obtained by exponentiation:


e

ia p

X
1
(ia p )n
(x) =
n!
n=0

n

X
1
n
=
a1 a2 ...an 1

n!
x ...xn
n=0

= (x a ) ,

(4.160)

where the last line is simply Taylors theorem.


It follows that any Physical laws that are written down in terms of fields of x will
have translational invariance provided that no specific potentials or other fixed functions
arise.

4.8.2

Special Relativity and the Infinitesimal Generators of SO(1, 3).

In addition to translations in space and time Special relativity demands that the physical
laws are invariant under Lorentz transformations.

94

CHAPTER 4. GROUP THEORY


Recall that the Lorentz group O(1, 3) is defined by
O(1, 3) { GL(4, R)|T = ; diag(1, 1, 1, 1)}

In addition to rotations (in the three-dimensional spatial subspace parameterised by $\{x, y, z\}$, which are generated by $L_1$, $L_2$ and $L_3$ in the notation of the previous section) and reflections ($t \to -t$, $x \to -x$, $y \to -y$, $z \to -z$), the Lorentz group includes three Lorentz boosts. The proper Lorentz group consists of those $\Lambda$ with $\mathrm{Det}(\Lambda) = 1$ and is the group $SO(1,3)$. The orthochronous Lorentz group is the subgroup which preserves the direction of time, having $\Lambda^0{}_0 \geq 1$. The orthochronous proper Lorentz group, sometimes denoted $SO^+(1,3)$, consists of just the rotations and boosts. The Lorentz boosts are the "rotations" which rotate each of $x$, $y$ and $z$ into the time direction and are represented by generalisations of the matrix shown in equation (2.30):

$$\Lambda_1(\phi) = \begin{pmatrix} \cosh\phi & \sinh\phi & 0 & 0 \\ \sinh\phi & \cosh\phi & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}, \quad \Lambda_2(\phi) = \begin{pmatrix} \cosh\phi & 0 & \sinh\phi & 0 \\ 0 & 1 & 0 & 0 \\ \sinh\phi & 0 & \cosh\phi & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$
$$\text{and} \quad \Lambda_3(\phi) = \begin{pmatrix} \cosh\phi & 0 & 0 & \sinh\phi \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ \sinh\phi & 0 & 0 & \cosh\phi \end{pmatrix}. \qquad (4.161)$$
We identify a basis for the Lorentz boosts in the Lie algebra $so(1,3)$:
$$Y_1 = \begin{pmatrix} 0 & -i & 0 & 0 \\ -i & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}, \quad Y_2 = \begin{pmatrix} 0 & 0 & -i & 0 \\ 0 & 0 & 0 & 0 \\ -i & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} \quad \text{and} \quad Y_3 = \begin{pmatrix} 0 & 0 & 0 & -i \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ -i & 0 & 0 & 0 \end{pmatrix}, \qquad (4.162)$$
so that, for example, $\Lambda_1(\phi) = e^{i\phi Y_1}$.
The remainder of the Lie algebra of the proper Lorentz group is made up of the generators of rotations:

$$L_1 = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -i \\ 0 & 0 & i & 0 \end{pmatrix}, \quad L_2 = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & i \\ 0 & 0 & 0 & 0 \\ 0 & -i & 0 & 0 \end{pmatrix} \quad \text{and} \quad L_3 = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & -i & 0 \\ 0 & i & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}. \qquad (4.163)$$
Computation of the commutators gives (after some time...)
$$[L_i, L_j] = i\epsilon_{ijk} L_k\,, \qquad [L_i, Y_j] = i\epsilon_{ijk} Y_k \qquad \text{and} \qquad [Y_i, Y_j] = -i\epsilon_{ijk} L_k\,. \qquad (4.164)$$
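These relations can be verified mechanically. The following sketch (not part of the notes) encodes the generators as given in (4.162)-(4.163) and checks (4.164) with numpy.

```python
# A mechanical check (not from the notes) of the so(1,3) relations (4.164),
# using the explicit boost generators Y_i and rotation generators L_i above.
import numpy as np

def E(a, b):
    """4x4 matrix unit with a single 1 in row a, column b."""
    M = np.zeros((4, 4), dtype=complex)
    M[a, b] = 1
    return M

# Boost generators (symmetric) and rotation generators (skew-symmetric)
Y = [-1j * (E(0, k) + E(k, 0)) for k in (1, 2, 3)]
L = [-1j * (E(2, 3) - E(3, 2)),
     -1j * (E(3, 1) - E(1, 3)),
     -1j * (E(1, 2) - E(2, 1))]

def comm(A, B):
    return A @ B - B @ A

# Levi-Civita symbol epsilon_{ijk}
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1
    eps[k, j, i] = -1

for i in range(3):
    for j in range(3):
        assert np.allclose(comm(L[i], L[j]),
                           sum(1j * eps[i, j, k] * L[k] for k in range(3)))
        assert np.allclose(comm(L[i], Y[j]),
                           sum(1j * eps[i, j, k] * Y[k] for k in range(3)))
        assert np.allclose(comm(Y[i], Y[j]),
                           sum(-1j * eps[i, j, k] * L[k] for k in range(3)))
print("Commutation relations (4.164) verified.")
```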

It is worth observing that the generators of the rotations are skew-symmetric matrices, $L_i^T = -L_i$, while the boost generators are symmetric matrices, $Y_i^T = Y_i$, for $i \in \{1, 2, 3\}$. This is a consequence of the rotations being compact transformations (all the components $(\cos\theta, \sin\theta)$ of the matrix representation of a rotation are bounded) while the Lorentz boosts are non-compact transformations (some of the components $(\cosh\phi, \sinh\phi)$ of the matrix representation of a boost are unbounded - they may go to $\pm\infty$).
Notice that if one uses the combinations
$$W_i^\pm \equiv \frac{1}{2}(L_i \pm iY_i) \qquad (4.165)$$
as a basis of the Lie algebra then the commutator relations simplify:
$$[W_i^+, W_j^+] = i\epsilon_{ijk} W_k^+ \qquad\qquad su(2)$$
$$[W_i^-, W_j^-] = i\epsilon_{ijk} W_k^- \qquad\qquad su(2) \qquad\qquad (4.166)$$
$$[W_i^+, W_j^-] = 0\,.$$
Via this change of basis for the Lie algebra we recognise that it encodes two copies of the algebra $su(2)$:
$$so(1,3) \cong su(2) \oplus su(2)\,. \qquad (4.167)$$
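Continuing the numerical sketch above (same generator matrices, `comm` and `eps`), one can check directly that the $W_i^\pm$ close into two mutually commuting copies of $su(2)$:

```python
# Continuation of the previous sketch: the W^pm of (4.165) decouple as (4.166)
Wp = [(L[i] + 1j * Y[i]) / 2 for i in range(3)]
Wm = [(L[i] - 1j * Y[i]) / 2 for i in range(3)]
for i in range(3):
    for j in range(3):
        assert np.allclose(comm(Wp[i], Wm[j]), np.zeros((4, 4)))
        assert np.allclose(comm(Wp[i], Wp[j]),
                           sum(1j * eps[i, j, k] * Wp[k] for k in range(3)))
        assert np.allclose(comm(Wm[i], Wm[j]),
                           sum(1j * eps[i, j, k] * Wm[k] for k in range(3)))
```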

4.8.3 The Proper Lorentz Group and SL(2,C).

We will now show that $so(1,3) \cong sl(2,\mathbb{C})$ as Lie algebras and that in terms of groups $SO^+(1,3) \cong SL(2,\mathbb{C})/\mathbb{Z}_2$, where $\mathbb{Z}_2$ is the centre of $SL(2,\mathbb{C})$. Furthermore $SL(2,\mathbb{C})$ is the double cover (universal cover) of $SO^+(1,3)$, known as $Spin(1,3)$.
Let us recall the Pauli matrices and introduce the identity matrix as $\sigma_0$:
$$\sigma_0 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad \sigma_1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad \sigma_2 = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad \sigma_3 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}. \qquad (4.168)$$
Consider for each Lorentz vector $x^\mu \in \mathbb{R}^{1,3}$ the two-by-two matrix given by the map
$$X \equiv x^\mu \sigma_\mu = \begin{pmatrix} x^0 + x^3 & x^1 - ix^2 \\ x^1 + ix^2 & x^0 - x^3 \end{pmatrix}\,. \qquad (4.169)$$
One easily sees that $X = X^\dagger$ and that matrices of this form span all $2 \times 2$ Hermitian matrices. One may confirm that matrices $A \in GL(2,\mathbb{C})$ transforming $X \to X'$ by the action
$$X \to X' \equiv AXA^\dagger \qquad (4.170)$$
preserve $X^\dagger = X$. Furthermore one has
$$\mathrm{Det}(X) = (x^0)^2 - (x^3)^2 - (x^1)^2 - (x^2)^2 = x^\mu x_\mu\,. \qquad (4.171)$$

Consequently the transformations on $X$ which leave its determinant unaltered are Lorentz transformations. What are these? Well $\mathrm{Det}(X') = \mathrm{Det}(AXA^\dagger) = \mathrm{Det}(XA^\dagger A) = \mathrm{Det}(X)\mathrm{Det}(A^\dagger A)$, so we require $\mathrm{Det}(A^\dagger A) = |\mathrm{Det}(A)|^2 = 1$. If we write
$$A = e^{i\theta/2} A' \qquad (4.172)$$
with $A' \in SL(2,\mathbb{C})$, i.e. $\mathrm{Det}(A') = 1$, then $\mathrm{Det}(A) = e^{i\theta}$ and $A^\dagger = e^{-i\theta/2} A'^\dagger$. The factors of $e^{\pm i\theta/2}$ cancel in the action $X \to AXA^\dagger$, so that without loss of generality we may simply take $A \in SL(2,\mathbb{C})$.
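Before moving on, a short numerical sketch (not from the notes; the random matrices are arbitrary) confirms the claims of (4.169)-(4.171): $X$ is Hermitian with $\mathrm{Det}(X) = x^\mu x_\mu$, and the $SL(2,\mathbb{C})$ action preserves both properties.

```python
# A numerical sketch (not from the notes): X = x^mu sigma_mu is Hermitian with
# Det(X) equal to the Minkowski norm, and (4.170) with A in SL(2,C) preserves
# both Hermiticity and the determinant, as in (4.171).
import numpy as np

sigma = [np.array([[1, 0], [0, 1]], dtype=complex),     # sigma_0
         np.array([[0, 1], [1, 0]], dtype=complex),     # sigma_1
         np.array([[0, -1j], [1j, 0]], dtype=complex),  # sigma_2
         np.array([[1, 0], [0, -1]], dtype=complex)]    # sigma_3

rng = np.random.default_rng(0)
x = rng.normal(size=4)
X = sum(x[mu] * sigma[mu] for mu in range(4))           # eq. (4.169)

assert np.allclose(X, X.conj().T)                       # X is Hermitian
assert np.isclose(np.linalg.det(X).real,
                  x[0]**2 - x[1]**2 - x[2]**2 - x[3]**2)

# A generic A in SL(2,C): rescale a random matrix so that Det(A) = 1
M = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
A = M / np.sqrt(np.linalg.det(M))

Xp = A @ X @ A.conj().T                                 # eq. (4.170)
assert np.allclose(Xp, Xp.conj().T)                     # still Hermitian
assert np.isclose(np.linalg.det(Xp), np.linalg.det(X))  # determinant preserved
```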


Hence each $A \in SL(2,\mathbb{C})$ encodes a proper Lorentz transformation on $x^\mu$. However it is also clear that if $A \in SL(2,\mathbb{C})$ then $-A \in SL(2,\mathbb{C})$, and both lead to the same action on $X$. So at best we have $SO(1,3) \cong SL(2,\mathbb{C})/\mathbb{Z}_2$, but actually there is more.
Next we note that the sign of $x^0$ is never changed. To see this it is sufficient to consider only $x^0 \neq 0$, so that $X = x^0 I$. Consider the matrix
$$\begin{pmatrix} -1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \in SO(1,3) \qquad (4.173)$$
which changes the sign of $x^0$ (and of $x^1$, but we have set $x^1 = 0$ here). In the $SL(2,\mathbb{C})$ action above one has
$$X' = x^0 AA^\dagger\,. \qquad (4.174)$$
To change the sign of $x^0$ we would require an $A \in SL(2,\mathbb{C})$ with $AA^\dagger = -I$. But this is impossible since $AA^\dagger$ is Hermitian and positive definite whereas $-I$ is Hermitian and negative definite. Thus $SO^+(1,3) \cong SL(2,\mathbb{C})/\mathbb{Z}_2$.
To discover the precise transformation one considers the components of $x^\mu$, which are simply related to $X$. By direct computation we can check that
$$\sigma_0 \sigma_\mu = \sigma_\mu \sigma_0 = \sigma_\mu\,, \qquad \sigma_i \sigma_j = \delta_{ij}\sigma_0 + i\epsilon_{ijk}\sigma_k \qquad (4.175)$$
and hence
$$X\sigma_\nu = (x^0\sigma_0 + x^i\sigma_i)\sigma_\nu = \begin{cases} x^0\sigma_0 + x^i\sigma_i & \nu = 0 \\ x^0\sigma_j + x^i\delta_{ij}\sigma_0 + ix^i\epsilon_{ijk}\sigma_k & \nu = j\,. \end{cases}$$
As $\mathrm{Tr}(\sigma_0) = 2$ while $\mathrm{Tr}(\sigma_i) = 0$ we have
$$\mathrm{Tr}(X\sigma^\nu) = 2x_\nu \qquad \Longrightarrow \qquad x_\nu = \frac{1}{2}\mathrm{Tr}(X\sigma^\nu)\,, \qquad (4.176)$$
where $\sigma^\nu \equiv \eta^{\nu\mu}\sigma_\mu$, i.e. we have used the Minkowski metric to raise and lower indices where necessary. We leave the exercise of finding the proper Lorentz transformation corresponding to each matrix of $SL(2,\mathbb{C})$ to the following problem.
Problem 4.8.1. Let $X = x^\mu \sigma_\mu$ and show that the Lorentz transformation $x'^\mu = \Lambda^\mu{}_\nu x^\nu$ induced by $X' = AXA^\dagger$ has
$$\Lambda^\mu{}_\nu(A) = \frac{1}{2}\mathrm{Tr}(\sigma^\mu A \sigma_\nu A^\dagger)\,,$$
thus defining a map $A \to \Lambda(A)$ from $SL(2,\mathbb{C})$ into $SO(1,3)$, where $\sigma_0$ is the two-by-two identity matrix and $\sigma_i$ are the Pauli matrices as defined in question 4.2. (Method: show first that $\mathrm{Tr}(X\sigma^\nu) = 2x_\nu$, then find the expression for the Lorentz transform of $x^\mu \to x'^\mu$ associated to $X \to X'$. Finally set $x$ to be the 4-vector with all components equal to zero apart from the $x^\nu$ component, which is equal to one.)
By considering a further transformation $X'' = BX'B^\dagger$ show that
$$\Lambda(BA) = \Lambda(B)\Lambda(A)$$


so that the mapping is a group homomorphism. Identify the kernel of the homomorphism as the centre of $SL(2,\mathbb{C})$, i.e. $A = \pm I$, thus showing that the map is two-to-one.

Thus $SL(2,\mathbb{C})$ can be viewed as the double cover of $SO^+(1,3)$ and plays a role analogous to that which $SU(2)$ plays with respect to $SO(3)$. In particular representations of $SL(2,\mathbb{C})$ are labelled by a pair of $su(2)$ representations with highest weights $l_1$ and $l_2$ respectively. Representations with integer values of $l_1 + l_2$ descend to representations of $SO(1,3)$, but the ones where $l_1 + l_2$ is half-integer do not. In particular the spin-statistics theorem states that the former correspond to bosons whereas the latter correspond to fermions.
Although we haven't shown it here, $SU(2)$ and $SL(2,\mathbb{C})$ are simply connected, meaning that any closed loop in them can be continuously contracted to a point. The groups $SO(3)$ and $SO^+(1,3)$ are not simply connected. $SU(2)$ and $SL(2,\mathbb{C})$ are known as universal covering spaces. This is a general pattern: the universal covering spaces of $SO(d)$ and $SO^+(1,d)$ are known as $Spin(d)$ and $Spin(1,d)$ respectively, i.e. $Spin(3) = SU(2)$ and $Spin(1,3) = SL(2,\mathbb{C})$. These groups act on spinors and their tensor products, whereas $SO(d)$ and $SO^+(1,d)$ act on vectors and their tensor products. Note that the tensor product of two spinors gives a vector. Again the spin-statistics theorem states that in quantum field theory spinors must be fermions.
Finally we can marry translations and Lorentz transformations to obtain the Poincaré group, the group of isometries of Minkowski spacetime. It includes the translations in Minkowski space in addition to the Lorentz transformations:
$$\{(\Lambda, a) \,|\, \Lambda \in O(1,3),\ a \in \mathbb{R}^{1,3}\}\,. \qquad (4.177)$$
A general transformation of the Poincaré group takes the form
$$x'^\mu = \Lambda^\mu{}_\nu x^\nu + a^\mu\,. \qquad (4.178)$$
It is known as a semi-direct product of translations and Lorentz transformations; semi-direct product means that the actions of translations and Lorentz transformations do not simply commute with each other, as they would in a direct product. This is illustrated in the sketch below.
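The following sketch (not from the notes; the composition rule $(\Lambda_1, a_1)(\Lambda_2, a_2) = (\Lambda_1\Lambda_2,\, \Lambda_1 a_2 + a_1)$ follows from applying (4.178) twice) shows that a boost and a translation applied in opposite orders give different results.

```python
# A short sketch (not from the notes) of the semi-direct product structure of
# the Poincare group: translations mix with Lorentz parts under composition.
import numpy as np

def act(Lam, a, x):
    """Poincare transformation x' = Lam x + a, eq. (4.178)."""
    return Lam @ x + a

def compose(g1, g2):
    """(Lam1, a1) o (Lam2, a2) = (Lam1 Lam2, Lam1 a2 + a1)."""
    (Lam1, a1), (Lam2, a2) = g1, g2
    return (Lam1 @ Lam2, Lam1 @ a2 + a1)

phi = 0.3                                  # the boost Lambda_1(phi) of (4.161)
boost = np.eye(4)
boost[0, 0] = boost[1, 1] = np.cosh(phi)
boost[0, 1] = boost[1, 0] = np.sinh(phi)
translation = (np.eye(4), np.array([1.0, 0.0, 0.0, 0.0]))   # a shift in time

x = np.array([0.0, 1.0, 2.0, 3.0])
g12 = compose((boost, np.zeros(4)), translation)   # translate, then boost
g21 = compose(translation, (boost, np.zeros(4)))   # boost, then translate

# The translation part of g12 is Lam a rather than a: the two orders differ
assert not np.allclose(act(*g12, x), act(*g21, x))
print(act(*g12, x), act(*g21, x))
```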

4.8.4 Representations of the Lorentz Group and Lorentz Tensors.

The simplest representations of the Lorentz group are scalars: being devoid of free Lorentz indices, scalar objects form the trivial representation of the Lorentz group (they are invariant under Lorentz transformations). The standard vector representation of the Lorentz group on $\mathbb{R}^{1,3}$ acts as
$$x^\mu \to x'^\mu = \Lambda^\mu{}_\nu x^\nu\,. \qquad (4.179)$$
This is the familiar vector action of $\Lambda$ on $x^\mu$ and we shall denote it by $\Lambda_{(1,0)}$.


Similarly one may define the contragredient, or co-vector, representation $\Lambda_{(0,1)}$ acting on co-vectors as
$$x_\mu \to x'_\mu = \Lambda_\mu{}^\nu x_\nu\,. \qquad (4.180)$$
Problem 4.8.2. Show that $\Lambda_{(1,0)}$ and $\Lambda_{(0,1)}$ are equivalent representations, with the intertwining map being the Minkowski metric $\eta$.


More general tensor representations are constructed from tensor products of the vector and co-vector representations of the Lorentz group and are called $(r,s)$-tensors:
$$\underbrace{\Lambda_{(1,0)} \otimes \ldots \otimes \Lambda_{(1,0)}}_{r} \otimes \underbrace{\Lambda_{(0,1)} \otimes \ldots \otimes \Lambda_{(0,1)}}_{s} \qquad (4.181)$$
$(r,s)$-tensors have components with $r$ vector indices and $s$ co-vector indices,
$$T^{\mu_1 \mu_2 \ldots \mu_r}{}_{\nu_1 \nu_2 \ldots \nu_s}\,,$$
and under a Lorentz transformation the components transform as
$$T^{\mu_1 \mu_2 \ldots \mu_r}{}_{\nu_1 \nu_2 \ldots \nu_s} \to \Lambda^{\mu_1}{}_{\rho_1} \Lambda^{\mu_2}{}_{\rho_2} \cdots \Lambda^{\mu_r}{}_{\rho_r}\, \Lambda_{\nu_1}{}^{\sigma_1} \Lambda_{\nu_2}{}^{\sigma_2} \cdots \Lambda_{\nu_s}{}^{\sigma_s}\, T^{\rho_1 \rho_2 \ldots \rho_r}{}_{\sigma_1 \sigma_2 \ldots \sigma_s}\,. \qquad (4.182)$$
There are two natural operations on tensors that map them to other tensors:
(1.) One may act with the metric to raise and lower indices (raising an index maps an $(r,s)$ tensor to an $(r+1,s-1)$ tensor while lowering an index maps an $(r,s)$ tensor to an $(r-1,s+1)$ tensor):
$$\eta_{\rho\mu_k}\, T^{\mu_1 \mu_2 \ldots \mu_r}{}_{\nu_1 \nu_2 \ldots \nu_s} = T^{\mu_1 \ldots \mu_{k-1} \mu_{k+1} \ldots \mu_r}{}_{\rho\, \nu_1 \nu_2 \ldots \nu_s}\,, \qquad \eta^{\rho\nu_k}\, T^{\mu_1 \mu_2 \ldots \mu_r}{}_{\nu_1 \nu_2 \ldots \nu_s} = T^{\rho\, \mu_1 \mu_2 \ldots \mu_r}{}_{\nu_1 \ldots \nu_{k-1} \nu_{k+1} \ldots \nu_s}\,. \qquad (4.183)$$
(2.) One can contract a pair of indices on an $(r,s)$ tensor to obtain an $(r-1,s-1)$ tensor:
$$T^{\mu_1 \mu_2 \ldots \mu_{r-1}}{}_{\nu_1 \nu_2 \ldots \nu_{s-1}} = T^{\mu_1 \mu_2 \ldots \mu_{r-1} \lambda}{}_{\nu_1 \nu_2 \ldots \nu_{s-1} \lambda}\,. \qquad (4.184)$$
A short numerical illustration of both operations is given below.
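As promised, a brief sketch (not from the notes) of both operations on a $(2,0)$ tensor with numpy, treating the array axes as the indices $\mu, \nu$:

```python
# The operations (4.183)-(4.184) on a (2,0) tensor (a sketch, not from the notes)
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])             # eta_{mu nu}

T = np.random.default_rng(3).normal(size=(4, 4))   # components T^{mu nu}

# Lowering the second index, T^mu_nu = eta_{nu rho} T^{mu rho}: a (1,1) tensor
T_mixed = np.einsum('nr,mr->mn', eta, T)

# Contracting the remaining upper and lower index gives a scalar, cf. (4.184)
scalar = np.einsum('mm->', T_mixed)

# Equivalently, contract T^{mu nu} with eta_{mu nu} directly
assert np.isclose(scalar, np.einsum('mn,mn->', eta, T))
```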
One may be interested in special subsets of tensors whose indices (or even a subset of indices) are symmetrised or antisymmetrised. Given a tensor one can always symmetrise or antisymmetrise a set of its indices:
A symmetric set of indices is denoted explicitly by a set of ordinary brackets ( ) surrounding the symmetrised indices, e.g. a symmetric $(r,0)$ tensor is denoted $T^{(\mu_1 \mu_2 \ldots \mu_r)}$ and is constructed from the tensor $T^{\mu_1 \mu_2 \ldots \mu_r}$ using elements $P$ of the permutation group $S_r$:
$$T^{(\mu_1 \mu_2 \ldots \mu_r)} \equiv \frac{1}{r!} \sum_{P \in S_r} T^{\mu_{P(1)} \mu_{P(2)} \ldots \mu_{P(r)}} \qquad (4.185)$$
so that under an interchange of neighbouring indices the tensor is unaltered, e.g.
$$T^{(\mu_1 \mu_2 \ldots \mu_r)} = T^{(\mu_2 \mu_1 \ldots \mu_r)}\,. \qquad (4.186)$$

One may wish to symmetrise only a subset of indices. For example, symmetrising only the first and last indices on the $(r,0)$ tensor is denoted by $T^{(\mu_1|\mu_2 \ldots \mu_{r-1}|\mu_r)}$ and defined by
$$T^{(\mu_1|\mu_2 \ldots \mu_{r-1}|\mu_r)} \equiv \frac{1}{2!} \sum_{P \in S_2} T^{\mu_{P(1)} \mu_2 \ldots \mu_{r-1} \mu_{P(r)}}\,; \qquad (4.187)$$
the pair of vertical lines indicates the set of indices omitted from the symmetrisation.


An antisymmetric set of indices is denoted explicitly by a set of square brackets [ ] surrounding the antisymmetrised indices, e.g. an antisymmetric $(r,0)$ tensor is denoted $T^{[\mu_1 \mu_2 \ldots \mu_r]}$ and is constructed from the tensor $T^{\mu_1 \mu_2 \ldots \mu_r}$ using elements $P$ of the permutation group $S_r$:
$$T^{[\mu_1 \mu_2 \ldots \mu_r]} \equiv \frac{1}{r!} \sum_{P \in S_r} \mathrm{Sign}(P)\, T^{\mu_{P(1)} \mu_{P(2)} \ldots \mu_{P(r)}} \qquad (4.188)$$
so that under an interchange of neighbouring indices the tensor picks up a minus sign, e.g.
$$T^{[\mu_1 \mu_2 \ldots \mu_r]} = -T^{[\mu_2 \mu_1 \ldots \mu_r]}\,. \qquad (4.189)$$
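The definitions (4.185) and (4.188) translate directly into code. The sketch below (not from the notes) implements both maps for an $(r,0)$ tensor stored as a numpy array and checks (4.186) and (4.189) on a random example.

```python
# A sketch (not from the notes) of the (anti)symmetrisation maps (4.185)/(4.188)
import itertools
import math
import numpy as np

def symmetrise(T):
    """T^(mu1...mur): average of T over all permutations of its r indices."""
    perms = list(itertools.permutations(range(T.ndim)))
    return sum(np.transpose(T, p) for p in perms) / len(perms)

def antisymmetrise(T):
    """T^[mu1...mur]: signed average over all index permutations, (4.188)."""
    r = T.ndim
    result = np.zeros_like(T, dtype=float)
    for p in itertools.permutations(range(r)):
        # Sign(P) from the parity of the permutation (count of inversions)
        sign = (-1) ** sum(1 for i in range(r) for j in range(i + 1, r)
                           if p[i] > p[j])
        result = result + sign * np.transpose(T, p)
    return result / math.factorial(r)

T = np.random.default_rng(1).normal(size=(4, 4, 4))   # a generic (3,0) tensor
S, A = symmetrise(T), antisymmetrise(T)
assert np.allclose(S, np.swapaxes(S, 0, 1))           # (4.186): symmetric
assert np.allclose(A, -np.swapaxes(A, 0, 1))          # (4.189): antisymmetric
```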
Frequently in theoretical physics the symmetry or antisymmetry of the indices on a tensor will be assumed and not written explicitly (which can cause confusion). For example we might define $g_{\mu\nu}$ to be a symmetric tensor, which means that $g_{[\mu\nu]} = 0$ while $g_{(\mu\nu)} = g_{\mu\nu}$. Similarly for the Maxwell field strength $F_{\mu\nu}$, which was defined to be antisymmetric, hence $F_{[\mu\nu]} = F_{\mu\nu}$ while $F_{(\mu\nu)} = 0$.
We stated earlier that the tensor product of two irreducible representations is typically not irreducible. We can see that explicitly here for the case of a generic tensor $T^{\mu\nu}$, which transforms in the tensor product of two vector representations. Let us write
$$T^{\mu\nu} = T^{(\mu\nu)} + T^{[\mu\nu]} \qquad (4.190)$$
where
$$T^{(\mu\nu)} \equiv \frac{1}{2}(T^{\mu\nu} + T^{\nu\mu}) = T^{(\nu\mu)}\,, \qquad T^{[\mu\nu]} \equiv \frac{1}{2}(T^{\mu\nu} - T^{\nu\mu}) = -T^{[\nu\mu]}\,. \qquad (4.191)$$

Let us first show that $T^{(\mu\nu)}$ and $T^{[\mu\nu]}$ form separate representations, meaning that under a Lorentz transformation $T^{(\mu\nu)}$ remains symmetric while $T^{[\mu\nu]}$ remains anti-symmetric. Consider the Lorentz transformation of $T^{(\mu\nu)}$:
$$T'^{(\mu\nu)} = \frac{1}{2}\Lambda^\mu{}_\rho \Lambda^\nu{}_\sigma T^{\rho\sigma} + \frac{1}{2}\Lambda^\nu{}_\rho \Lambda^\mu{}_\sigma T^{\rho\sigma} = \Lambda^\mu{}_\rho \Lambda^\nu{}_\sigma \cdot \frac{1}{2}(T^{\rho\sigma} + T^{\sigma\rho}) = \Lambda^\mu{}_\rho \Lambda^\nu{}_\sigma T^{(\rho\sigma)}\,. \qquad (4.192)$$
Thus after a Lorentz transformation the symmetric part remains symmetric. A similar argument shows that the anti-symmetric part remains anti-symmetric after a Lorentz transformation (one just replaces the $+$ by a $-$). Thus the representation is reducible: the subspaces of symmetric and anti-symmetric tensors are invariant subspaces.
But there is a further reduction. The symmetric part can be written as
$$T^{(\mu\nu)} = \eta^{\mu\nu} T + \tilde T^{(\mu\nu)}\,, \qquad (4.193)$$
where $\tilde T^{(\mu\nu)}$ is traceless:
$$\eta_{\mu\nu} \tilde T^{(\mu\nu)} = 0\,. \qquad (4.194)$$


Thus
$$\eta_{\mu\nu} T^{\mu\nu} = \eta_{\mu\nu}\big(\eta^{\mu\nu} T + \tilde T^{(\mu\nu)} + T^{[\mu\nu]}\big) = (1+d)\,T \qquad (4.195)$$
and
$$\tilde T^{(\mu\nu)} = T^{(\mu\nu)} - \eta^{\mu\nu} T = T^{(\mu\nu)} - \frac{1}{1+d}\,\eta^{\mu\nu}\,\eta_{\rho\sigma} T^{(\rho\sigma)}\,, \qquad (4.196)$$

where we have assumed that spacetime has dimension $1+d$. By construction $T$ is Lorentz invariant and therefore gives a separate, albeit trivial, Lorentz representation. Thus even a symmetric tensor gives a reducible representation, with the pure-trace tensors, i.e. those of the form $T^{\mu\nu} = \eta^{\mu\nu} T$, forming an invariant subspace. Finally we see that a traceless symmetric tensor remains so after a Lorentz transformation:
$$\begin{aligned} \eta_{\mu\nu} \tilde T'^{(\mu\nu)} &= \eta_{\mu\nu} T'^{(\mu\nu)} - (d+1)\,T' \\ &= \eta_{\mu\nu} \Lambda^\mu{}_\rho \Lambda^\nu{}_\sigma T^{(\rho\sigma)} - (d+1)\,T \\ &= \eta_{\rho\sigma} T^{(\rho\sigma)} - (d+1)\,T \\ &= (d+1)\,T - (d+1)\,T \\ &= 0\,. \end{aligned} \qquad (4.197)$$

Therefore we see that a tensor $T^{\mu\nu}$ splits into anti-symmetric, traceless symmetric and pure-trace pieces, each of which forms a representation of the Lorentz group. This decomposition is illustrated numerically below.
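The following sketch (not from the notes; the boost and the random tensor are arbitrary choices) decomposes a generic $T^{\mu\nu}$ as in (4.190)-(4.196) and verifies that each piece stays in its subspace under a boost, in line with (4.192) and (4.197).

```python
# A numerical sketch (not from the notes): split T^{mu nu} into antisymmetric,
# traceless-symmetric and pure-trace pieces and boost each piece.
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])     # eta_{mu nu}; here eta^{-1} = eta
d = 3                                      # spacetime dimension is 1 + d

def decompose(T):
    sym, antisym = (T + T.T) / 2, (T - T.T) / 2
    trace = np.einsum('mn,mn->', eta, sym) / (1 + d)   # the scalar T of (4.195)
    pure = trace * np.linalg.inv(eta)                  # eta^{mu nu} T
    return antisym, sym - pure, pure

phi = 0.7                                  # the boost Lambda_1(phi) of (4.161)
Lam = np.eye(4)
Lam[0, 0] = Lam[1, 1] = np.cosh(phi)
Lam[0, 1] = Lam[1, 0] = np.sinh(phi)

def transform(T):
    # T'^{mu nu} = Lam^mu_rho Lam^nu_sigma T^{rho sigma}, cf. (4.182)
    return np.einsum('mr,ns,rs->mn', Lam, Lam, T)

T = np.random.default_rng(2).normal(size=(4, 4))
A, S, P = decompose(T)
assert np.allclose(transform(A), -transform(A).T)              # stays antisymmetric
assert np.isclose(np.einsum('mn,mn->', eta, transform(S)), 0)  # stays traceless
assert np.allclose(transform(P), P)                            # pure trace invariant
```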
Problem 4.8.3. Consider the space of rank $(3,0)$-tensors $T^{\mu_1\mu_2\mu_3}$ forming a tensor representation of the Lorentz group $SO(1,3)$, which transforms under the Lorentz transformation as
$$T'^{\mu_1\mu_2\mu_3} = \Lambda^{\mu_1}{}_{\nu_1} \Lambda^{\mu_2}{}_{\nu_2} \Lambda^{\mu_3}{}_{\nu_3}\, T^{\nu_1\nu_2\nu_3}\,.$$
(a.) Prove that
$$T^2 \equiv T_{\mu_1\mu_2\mu_3} T^{\mu_1\mu_2\mu_3}$$
is a Lorentz invariant. The Einstein summation convention for repeated indices is assumed in the expression for $T^2$.
(b.) Give the definitions of the symmetric $(3,0)$-tensors and of the antisymmetric $(3,0)$-tensors and show that they form two invariant subspaces under the Lorentz transformations.
(c.) Prove that the symmetric $(3,0)$-tensors form a reducible representation of the Lorentz group.
