
Chapter 1

Mathematical Language of
Quantum Mechanics

All the mathematical sciences are founded on relations between physical laws and laws of numbers, so the aim of exact science is to reduce the problems of nature to the determination of quantities by operations with numbers.

James Clerk Maxwell

1.1 Introduction
A major new development in physics usually necessitates a corresponding de-
velopment in mathematics. For example, differential and integral calculus were
developed for classical mechanics to provide precise definitions for notions such
as velocity and acceleration. Similarly, quantum mechanics has its own mathe-
matical language, which was created for the specific requirements of the theory,
and its development went hand-in-hand with the development of the physical
theory. The mathematics of quantum mechanics involves vectors, linear scalar
product spaces, linear operators and associative algebras.
The mathematics that will be discussed in this chapter will probably be con-
sidered abstract as compared with differential operators or matrices. However,
this is actually not the case. The one is as real as the other or, more precisely, as
abstract as the other. A mathematical structure is created by humans and only
exists in our minds. It is obtained by taking a set of objects and equipping this
set with a structure by defining relations among these objects. Only familiarity
makes some aspects of mathematics seem more real than others.
The mathematical language of quantum mechanics was created so that quan-
tum mechanics could be expressed in its general form. In 1926 P. Jordan and
F. London started from the classical canonical transformations and recognized


that these were coordinate transformations of a linear space. Physical quan-


tities, such as the intensity of radiation as an electron in an atom drops to a
lower state, were found to be represented by matrices. These matrices turned
out to be matrices of operators in this linear space. First, Jordan and London
considered only matrices and basis systems with discrete indices. The extension
of the transformation theory to objects with continuous indices was done by Jor-
dan and in particular by P.A.M. Dirac (1926-27). Dirac's formalism was simple
and beautiful but did not satisfy the requirement of mathematical rigor. The
first rigorous mathematical formulation was given by D. Hilbert, L. Nordheim,
and in particular by John von Neumann (1927) who associated the notions of
quantum mechanical states and observables with vectors and operators, respec-
tively, in the Hilbert space. Von Neumann's Hilbert space formulation could
not accommodate objects with continuous indices and continuous eigenvalues.
The mathematically rigorous formulation of quantum mechanics that includes
the Dirac formalism, upon which our presentation here is based, was only pos-
sible after L. Schwartz (1950) had developed his distribution theory and I. M.
Gelfand and collaborators (1960) had introduced the rigged Hilbert space.
In this chapter the mathematical tools required for quantum mechanics are
presented without giving mathematical proofs or insisting on mathematical
rigor. The purpose of the chapter is to provide physicists with the rules for
manipulating the mathematical quantities that represent the physical structure
of quantum mechanics.

1.2 Linear, Scalar-Product Spaces

Linear spaces and linear operators are a generalization of certain aspects of


three-dimensional space. The usual three-dimensional space consists of vectors
that can be multiplied by real numbers and acted on by transformations or
tensors. Mathematical objects such as vectors in three-dimensional space obey
certain rules. To formulate the rules for a general, linear space, the rules from
three-dimensional space are taken as the defining relations for a set of mathe-
matical objects.

The linear spaces that are needed for quantum theory are, in general, not
three-dimensional. They can have any dimension N , often infinite; the numbers
are not real, but usually complex; the transformations are not orthogonal, but
unitary; and the second rank tensors are not finite, but operators that can be
represented by infinite matrices. In what follows, the rules for linear spaces are
formulated in analogy with the usual rules for three-dimensional space.

PROPERTIES OF THE THREE-DIMENSIONAL SPACE ℝ³ AND THE CORRESPONDING DEFINING RELATIONS FOR THE GENERAL LINEAR SPACE Φ:

In ℝ³: Under addition, two vectors a, b ∈ ℝ³ (i.e. a, b in the space ℝ³) satisfy
    a + b = b + a.                                            (2.1a)
In Φ: The addition of two elements φ, ψ ∈ Φ is defined to satisfy
    φ + ψ = ψ + φ.                                            (2.1b)

In ℝ³: Addition is associative,
    (a + b) + c = a + (b + c).                                (2.2a)
In Φ: Addition is defined to be associative,
    (φ + ψ) + χ = φ + (ψ + χ).                                (2.2b)

In ℝ³: There exists a zero vector 0 with the property
    0 + a = a.                                                (2.3a)
In Φ: There exists an element 0 ∈ Φ with the property
    0 + φ = φ.                                                (2.3b)

In ℝ³: A vector can be multiplied with a real number b,
    b(a) = ba ∈ ℝ³.                                           (2.4a)
In Φ: If φ ∈ Φ and b ∈ ℂ (b is a complex number), then
    b(φ) = bφ ∈ Φ.                                            (2.4b)

In ℝ³: Multiplication of a vector by real numbers a and b has the following properties:
    a(ba) = (ab)a                                             (2.5a)
    1a = a                                                    (2.6a)
    0a = 0                                                    (2.7a)
In Φ: Multiplication of a vector by complex numbers a and b has by definition the following properties:
    a(bφ) = (ab)φ                                             (2.5b)
    1φ = φ                                                    (2.6b)
    0φ = 0                                                    (2.7b)
On the left of (2.7) the 0 is the number zero and on the right the 0 is the element 0 of (2.3b).

In ℝ³: Multiplication by real numbers satisfies
    b(a + b) = ba + bb                                        (2.8a)
    (a + b)a = aa + ba.                                       (2.9a)
In Φ: Multiplication by complex numbers satisfies
    b(φ + ψ) = bφ + bψ                                        (2.8b)
    (a + b)φ = aφ + bφ.                                       (2.9b)

In ℝ³: The negative of a vector is defined by
    (−1)a = −a.                                               (2.10a)
In Φ: The negative of a vector is defined by
    (−1)φ = −φ.                                               (2.10b)

Since a, b and c are called vectors, the elements φ, ψ ∈ Φ are also called vectors. The set of mathematical objects φ, ψ, χ, etc. that obey the rules or axioms (2.1b) - (2.10b) is called a linear space; therefore, a linear space is defined by these rules alone. There are, of course, linear spaces whose objects have more properties than those stated above, but those additional properties are not necessary for them to be elements of a linear space.
One realization of an N -dimensional linear space would be by N -dimensional
column matrices whose entries are complex. Another realization of a linear space
is provided by complex, continuous, rapidly decreasing functions for which the
functions themselves as well as all derivatives are square integrable. Because
some people have spent more time studying functions as opposed to, say, ma-
trices, one person may be more comfortable with one realization than another.
Ultimately it is important to free oneself from all realizations and consider the
linear space simply as a set of thought objects defined by (2.1b) - (2.10b). In
physics the vectors of the linear space are realized by pure physical states. That
is, these thought objects are used as mathematical images of physical states.
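
As a small illustration of the first realization mentioned above (an added sketch, not part of the original text), complex column vectors in numpy can stand in for the elements of a finite-dimensional linear space, and a few of the defining relations (2.1b)-(2.10b) can be spot-checked numerically:

```python
import numpy as np

# Elements of a 3-dimensional complex linear space, realized as column matrices.
phi = np.array([1 + 2j, 0.5, -1j])
psi = np.array([2, -1 + 1j, 3j])
a, b = 2 - 1j, 0.5j

# Spot-check some of the defining relations (2.1b)-(2.10b).
print(np.allclose(phi + psi, psi + phi))                # (2.1b) commutativity
print(np.allclose(a * (b * phi), (a * b) * phi))        # (2.5b)
print(np.allclose(b * (phi + psi), b * phi + b * psi))  # (2.8b) distributivity
print(np.allclose((-1) * phi, -phi))                    # (2.10b)
```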
A linear space does not have enough structure to be of much use. To equip it with more structure, a scalar product is defined. Linear spaces with scalar products are called linear, scalar-product spaces, Euclidean spaces, or pre-Hilbert spaces. In the usual three-dimensional space ℝ³, the scalar product of the vectors a and b is denoted by a · b and can be calculated using the formula

    a · b = aₓbₓ + a_y b_y + a_z b_z.                         (2.11)

In a general linear space, the scalar product of two vectors φ and ψ will be denoted by either (φ, ψ) or (φ|ψ). Since there are some features of a scalar-product space that are not present in ℝ³, in addition to ℝ³, a realization of a scalar-product space by a space of well-behaved functions will be used. Here, well-behaved means that all operations that are performed with the functions are well-defined. For such functions f(x) and g(x), their scalar product is defined in Chapter II [equation (II.4.27) and problem 18] by

    (f, g) ≡ ∫ f*(x)g(x) dx.                                  (2.12)

Note that in contrast to (2.11), the scalar product (2.12) is, in general, complex.
In formulating the rules for scalar-product spaces, each rule will first be examined in ℝ³, and then the corresponding rule will be considered for the complex, continuous functions just mentioned with the scalar product defined in (2.12). Finally, a general rule will be formulated. The scalar product is required to have the following properties:

THE THREE-DIMENSIONAL SPACE ℝ³, THE SPACE OF COMPLEX, CONTINUOUS FUNCTIONS THAT, ALONG WITH ALL DERIVATIVES, ARE SQUARE INTEGRABLE, AND THE GENERAL SCALAR-PRODUCT SPACE:

In ℝ³: The scalar product of a vector with itself is positive definite,
    a · a ≥ 0,                                                (2.13a)
and a · a = 0 iff (if and only if) a = 0.
For functions: The scalar product of a function with itself is positive definite,
    ∫ f*(x)f(x) dx ≥ 0,                                       (2.13b)
and ∫ f*(x)f(x) dx = 0 iff f(x) = 0.
In Φ: In a linear space Φ, the function (φ, ψ) ∈ ℂ is defined to have the following properties: for any φ ∈ Φ,
    (φ, φ) ≥ 0,                                               (2.13c)
and (φ, φ) = 0 iff φ = 0.

In ℝ³: Since the scalar product is real, it trivially satisfies
    a · b = (b · a)*.                                         (2.14a)
For functions: The scalar product satisfies
    (f, g) = ∫ f*(x)g(x) dx = [∫ g*(x)f(x) dx]* = (g, f)*.    (2.14b)
In Φ: Any two vectors φ, ψ ∈ Φ must satisfy
    (φ, ψ) = (ψ, φ)*.                                         (2.14c)

In ℝ³: Multiplication by a real scalar a satisfies
    a(a · b) = a · (ab) = (aa) · b.                           (2.15a)
For functions: Multiplication by a complex scalar a satisfies
    a(f, g) = ∫ f*(x)[ag(x)] dx = (f, ag) = ∫ [a*f(x)]* g(x) dx = (a*f, g).    (2.15b)
In Φ: For any φ, ψ ∈ Φ and any a ∈ ℂ,
    a(φ, ψ) = (φ, aψ) = (a*φ, ψ).                             (2.15c)

Note that while the convention (2.15) for scalar products is standard in physics, it is not standard in the mathematical literature, where one often finds a(φ, ψ) = (φ, a*ψ) = (aφ, ψ). That is, the scalar product in the mathematical literature is often defined as the complex conjugate of the definition that is standard in the physics literature.

In ℝ³: The scalar product of a sum of vectors is the sum of the scalar products,
    (a + b) · c = a · c + b · c.                              (2.16a)
For functions: The scalar product of a sum satisfies
    (f + g, h) = ∫ [f(x) + g(x)]* h(x) dx = ∫ f*(x)h(x) dx + ∫ g*(x)h(x) dx.    (2.16b)
In Φ: For any φ, ψ, χ ∈ Φ,
    (φ + ψ, χ) = (φ, χ) + (ψ, χ).                             (2.16c)

In ℝ³: The length or norm of a vector is
    ||a|| ≡ (a · a)^{1/2}.                                    (2.17a)
For functions: The norm is defined by
    ||f|| ≡ [∫ f*(x)f(x) dx]^{1/2}.                           (2.17b)
In Φ: (φ, ψ) is called the scalar product of the vectors φ and ψ in the linear space Φ, and the space is called a linear, scalar-product space or Euclidean space. In a scalar-product space the norm is defined by the scalar product,
    ||φ|| ≡ (φ, φ)^{1/2}.                                     (2.17c)
A vector φ is said to be normalized if ||φ|| = 1.

In ℝ³: Two vectors a and b are orthogonal if
    a · b = 0.                                                (2.18a)
For functions: Two functions f(x) and g(x) are said to be orthogonal if
    ∫ f*(x)g(x) dx = 0.                                       (2.18b)
In Φ: Two vectors φ and ψ are defined to be orthogonal if
    (φ, ψ) = 0.                                               (2.18c)

Example 2.1

Consider the following two functions on the interval −∞ < x < ∞:

    f₁(x) = A₁ e^{−x²/2},   f₂(x) = A₂ x e^{−x²/2}.

Determine the constants |A₁| and |A₂| such that f₁(x) and f₂(x) are normalized. Are f₁(x) and f₂(x) orthogonal?
Solution: To normalize fᵢ(x), we require

    1 = (fᵢ, fᵢ) = ∫ fᵢ*(x)fᵢ(x) dx.

For f₁(x), the above integral becomes

    1 = |A₁|² ∫ e^{−x²} dx = |A₁|² √π,

where the integral was evaluated with the aid of a table of integrals. Thus,

    |A₁| = π^{−1/4}.

Similarly,

    1 = |A₂|² ∫ x² e^{−x²} dx = |A₂|² √π / 2,

or

    |A₂| = √2 π^{−1/4}.

To determine if f₁(x) and f₂(x) are orthogonal, their scalar product is calculated:

    (f₁, f₂) = ∫ f₁*(x)f₂(x) dx = A₁*A₂ ∫ x e^{−x²} dx = 0.

The above integral is zero from symmetry: by changing the integration variable to y = −x, the integral is found to equal the negative of itself and is, therefore, zero. Since (f₁, f₂) = 0, the functions are orthogonal.
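
The normalization constants and the orthogonality just found can be confirmed numerically; the short sketch below (added here for illustration, not part of the original text) evaluates the integrals with scipy quadrature:

```python
import numpy as np
from scipy.integrate import quad

A1 = np.pi**-0.25               # |A1| = pi^(-1/4)
A2 = np.sqrt(2) * np.pi**-0.25  # |A2| = sqrt(2) pi^(-1/4)

f1 = lambda x: A1 * np.exp(-x**2 / 2)
f2 = lambda x: A2 * x * np.exp(-x**2 / 2)

norm1, _ = quad(lambda x: f1(x)**2, -np.inf, np.inf)          # (f1, f1)
norm2, _ = quad(lambda x: f2(x)**2, -np.inf, np.inf)          # (f2, f2)
overlap, _ = quad(lambda x: f1(x) * f2(x), -np.inf, np.inf)   # (f1, f2)

print(round(norm1, 8), round(norm2, 8), round(overlap, 8))    # 1.0 1.0 0.0
```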

Example 2.2

Using (2.14c), show that the relation a(φ, ψ) = (φ, aψ) implies the relation a(φ, ψ) = (a*φ, ψ) for a ∈ ℂ.
Solution: Taking the complex number to be a* instead of a, the first equality in (2.15c) immediately yields

    a*(φ, ψ) = (φ, a*ψ).

Taking the complex conjugate of both sides of the above equation,

    a(φ, ψ)* = (φ, a*ψ)*.

Using (2.14c) immediately yields the desired equality.



1.3 Linear Operators

The rules or axioms for linear operators in linear spaces are formulated here in
analogy with operators in three-dimensional space.

THE THREE-DIMENSIONAL SPACE ℝ³ AND THE GENERAL SCALAR-PRODUCT SPACE:

In ℝ³: Vectors in ℝ³ can be transformed into other vectors. One example is the rotation R, which rotates a vector a into a new vector b = Ra. There are also other transformations such as the moment of inertia tensor I that transforms one vector into another according to J = Iω. These transformations have the following properties:
    R(a + b) = Ra + Rb,                                       (3.1a)
    R(ab) = a(Rb),                                            (3.2a)
where a is a real number.
In Φ: In a linear, scalar-product space Φ transformations or linear operators are defined as follows: A function or operator Â that maps each vector φ ∈ Φ into a vector ψ ∈ Φ,
    ψ = Â(φ) ≡ Âφ,
is called a linear operator if it obeys the rules (3.1b) and (3.2b) listed below. (In this text linear operators will always be denoted by a circumflex accent.) Thus, by definition, linear operators have the following properties:
    Â(φ + ψ) = Âφ + Âψ,                                       (3.1b)
    Â(aφ) = a(Âφ),                                            (3.2b)
where a ∈ ℂ.

In ℝ³: Transformations (tensors) in ℝ³ can be added, multiplied by a real number, and multiplied by each other,
    (R₁ + R₂)a = R₁a + R₂a,                                   (3.3a)
    (aR₁)a = a(R₁a),                                          (3.4a)
    R₁R₂a = R₁(R₂a).                                          (3.5a)
In Φ: The following corresponding relations are valid:
    (Â + B̂)φ = Âφ + B̂φ,                                       (3.3b)
    (aÂ)φ = a(Âφ),                                            (3.4b)
    (ÂB̂)φ = Â(B̂φ).                                            (3.5b)
It can be shown that if Â and B̂ are linear operators, then Â + B̂, aÂ, and ÂB̂ are also linear operators.

In ℝ³: Tensors in three dimensions can be diagonalized by transforming to their main axis,
    I a = I^{(a)} a,
where I^{(a)} is the eigenvalue and a is the eigenvector.
In Φ: Operators of special interest are the zero operator 0̂ and the unit or identity operator 1̂ defined, respectively, by
    0̂φ = 0,  0 ∈ Φ;      1̂φ = φ;
for every φ ∈ Φ. If there exists a non-zero vector φ such that
    Âφ = λφ,   λ ∈ ℂ,
then φ is called an eigenvector of Â and λ is called an eigenvalue of Â.

In ℝ³: For every operator R defined for all vectors, the transpose operator Rᵀ has the following property: Writing the scalar product in components,
    a · (Rb) = Σᵢ Σⱼ aᵢ Rᵢⱼ bⱼ = Σᵢ Σⱼ aᵢ (Rᵀ)ⱼᵢ bⱼ = Σⱼ [Σᵢ (Rᵀ)ⱼᵢ aᵢ] bⱼ = (Rᵀa) · b,    (3.6a)
where the sums run over i, j = 1, 2, 3.
In Φ: For every linear operator Â defined for all φ ∈ Φ, the operator Â† is defined by
    (φ, Âψ) = (Â†φ, ψ).                                       (3.6b)
The operator Â† is called the adjoint operator of Â. An operator for which Â† = Â is called self-adjoint or Hermitian.
Example 3.1

Find the adjoint of the operator R̂ = a d/dx, a ∈ ℂ, for the space of complex, continuous functions which, along with all derivatives, are square integrable.
Solution: Integrating by parts,

    (f, R̂g) = ∫ f*(x) a (dg(x)/dx) dx
            = a f*(x)g(x) |_{−∞}^{+∞} − ∫ a (df*(x)/dx) g(x) dx.

Since f(±∞) = g(±∞) = 0, the surface term vanishes. Therefore,

    (f, R̂g) = ∫ [−a* (df(x)/dx)]* g(x) dx.

From the definition of the adjoint operator, R̂† = −a* d/dx.
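
The result R̂† = −a* d/dx can be checked numerically. The sketch below (an added illustration, not part of the original text) represents d/dx on a grid by a central-difference matrix D, which is antisymmetric when the functions vanish at the interval ends, so the adjoint of aD is −a*D:

```python
import numpy as np

# Grid and central-difference representation of d/dx (functions assumed to
# vanish at the interval ends, as for the square-integrable functions above).
N, h = 400, 0.05
D = (np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)) / (2 * h)

a = 2.0 + 3.0j            # an arbitrary complex number a
R = a * D                 # discrete version of R = a d/dx

# Adjoint with respect to the discrete scalar product (f, g) = sum conj(f) g h:
R_dagger = R.conj().T

# Compare with -conj(a) d/dx, the result derived by integration by parts.
print(np.allclose(R_dagger, -np.conj(a) * D))   # True
```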

Example 3.2

Show that the scalar product (Âφ, φ) is real if Â† = Â.
Solution: Using (2.14c),

    (Âφ, φ)* = (φ, Âφ).

But from the definition (3.6b) for the adjoint of an operator,

    (φ, Âφ) = (Â†φ, φ).

Combining the above two equations,

    (Âφ, φ)* = (Â†φ, φ).

Thus if Â† = Â, the complex conjugate of the scalar product (Âφ, φ) equals itself and is therefore real.

If φ is an eigenvector of the operator Â, an immediate consequence of Example 3.2 is that all eigenvalues of a Hermitian operator are real. For this reason, Hermitian operators are often called real operators.

Example 3.3

Show that any two eigenvectors φ, ψ of a Hermitian operator Â satisfying

    Âφ = a₁φ,   Âψ = a₂ψ;   a₁ ≠ a₂,

have the property

    (φ, ψ) = 0.

Solution: Since ψ is an eigenstate of Â,

    (φ, Âψ) = (φ, a₂ψ) = a₂(φ, ψ).

Using the Hermiticity of the operator Â (as given in (3.6b)),

    (φ, Âψ) = (Âφ, ψ) = (a₁φ, ψ) = a₁(φ, ψ).

Remembering that the eigenvalues of a Hermitian operator are real, subtracting the above two expressions for (φ, Âψ) yields

    0 = (a₂ − a₁)(φ, ψ).

Thus if a₁ ≠ a₂, (φ, ψ) = 0.
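
The two results of Examples 3.2 and 3.3 can be illustrated in a finite-dimensional space (an added numerical sketch, not part of the original text): a random Hermitian matrix stands in for Â, its eigenvalues come out real, and eigenvectors belonging to different eigenvalues are orthogonal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random Hermitian matrix A = A^dagger in a 4-dimensional complex space.
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = M + M.conj().T

eigenvalues, eigenvectors = np.linalg.eigh(A)    # eigh assumes A is Hermitian

print(np.allclose(eigenvalues.imag, 0))          # eigenvalues are real: True
# Columns of `eigenvectors` are normalized eigenvectors; for a1 != a2 they are
# orthogonal, so the matrix of their scalar products is the identity.
print(np.allclose(eigenvectors.conj().T @ eigenvectors, np.eye(4)))   # True
```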

For any two vectors φ, ψ ∈ Φ and an operator Â, the scalar product of Âψ with φ, namely (φ|Âψ), plays an important role in physics and is called the matrix element of the operator Â between the vectors φ and ψ. As will be shown in the following chapter, the apparently different Schrödinger picture of Chapter II and atomic mechanics of Chapter III are identical in content. All observables such as position, momentum, and energy are represented by linear operators in a linear, scalar-product space, and states are represented by vectors in this same space. The pictures appear to be different only because different matrix elements of the operators have been taken.

Example 3.4

Calculate the four matrix elements (fᵢ(x), x fⱼ(x)), where f₁(x) and f₂(x), respectively, are the normalized functions

    f₁(x) = π^{−1/4} e^{−x²/2}   and   f₂(x) = √2 π^{−1/4} x e^{−x²/2}

first discussed in Example 2.1.
Solution: From the definition of a matrix element,

    (fᵢ(x), x fⱼ(x)) = ∫ fᵢ*(x) x fⱼ(x) dx.

Since f₁(x) and f₂(x) are real,

    (f₁(x), x f₂(x)) = (f₂(x), x f₁(x)) = √(2/π) ∫ e^{−x²} x² dx = 1/√2.

Using the procedure mentioned in Example 2.1, the diagonal matrix elements are found to equal zero,

    (f₁(x), x f₁(x)) = (f₂(x), x f₂(x)) = 0.

The results can be summarized by the single matrix

    (fᵢ(x), x fⱼ(x)) = (  0     1/√2 )
                       ( 1/√2    0   ),

where the first index labels the row and the second labels the column of each matrix element.
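
A quick numerical cross-check of these matrix elements (an added sketch, not part of the original text) evaluates the integrals (fᵢ, x fⱼ) with scipy quadrature:

```python
import numpy as np
from scipy.integrate import quad

# The normalized functions from Examples 2.1 and 3.4.
f1 = lambda x: np.pi**-0.25 * np.exp(-x**2 / 2)
f2 = lambda x: np.sqrt(2) * np.pi**-0.25 * x * np.exp(-x**2 / 2)

def matrix_element(fi, fj):
    # (f_i, x f_j) = integral of f_i(x) * x * f_j(x) dx (the functions are real)
    value, _ = quad(lambda x: fi(x) * x * fj(x), -np.inf, np.inf)
    return value

X = np.array([[matrix_element(fi, fj) for fj in (f1, f2)] for fi in (f1, f2)])
print(np.round(X, 6))   # [[0, 0.707107], [0.707107, 0]], i.e. 1/sqrt(2) off-diagonal
```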

1.4 Basis Systems and Eigenvector Decompositions

1.4.1 Discrete Basis Vectors in Real, Three-Dimensional Space
Three basis vectors are introduced in the three-dimensional space ℝ³,

    eᵢ,   i = 1, 2, 3,                                        (4.1)

that span the space and are usually normalized to unity,

    eᵢ · eᵢ = 1.                                              (4.2)

These basis vectors are also usually chosen so that they are orthogonal to one another,

    eᵢ · eⱼ = 0   if i ≠ j.                                   (4.3)

Instead of labeling the vectors by i = 1, 2, 3, they could also be denoted by e₁ = eₓ, e₂ = e_y, e₃ = e_z. Relations (4.2) and (4.3) can be written as the single equation

    eᵢ · eⱼ = δᵢⱼ = { 0 for i ≠ j,  1 for i = j },   i, j = 1, 2, 3,     (4.4)

where δᵢⱼ is the Kronecker δ. The set of orthogonal, normalized vectors is called an orthonormal set.
A basis system may be chosen arbitrarily provided it spans the space although, for a specific physical problem, one basis system may be much easier to work with than others. For example, for a rigid body with an inertia tensor I, it is helpful to choose the basis system such that the inertia tensor is diagonal. Therefore, the eᵢ are chosen such that

    eᵢ · I eⱼ = I^{(j)} δᵢⱼ   or   I eⱼ = I^{(j)} eⱼ.          (4.5)

The eⱼ are the eigenvectors of the tensor I and the I^{(j)} are the eigenvalues.
In ℝ³ every vector x can be expanded in terms of the basis system,

    x = Σᵢ₌₁³ xᵢeᵢ = x₁e₁ + x₂e₂ + x₃e₃.                       (4.6)

The numbers xi are the coordinates or components of x with respect to the


ei . As shown in Fig. 4.1, the x1 , x2 , and x3 are, respectively, the x, y, and
z-components of the vector x.
Taking the scalar product of both sides of (4.6) with eⱼ,

    eⱼ · x = Σᵢ₌₁³ eⱼ · eᵢ xᵢ = Σᵢ₌₁³ δⱼᵢ xᵢ = xⱼ,              (4.7)

Figure 4.1: The three-dimensional vector x expressed in terms of the normalized (unit) vectors e₁, e₂ and e₃.

where the orthogonality relation (4.4) has been used. Clearly the xᵢ determine the vector x uniquely. Using the expression (4.7) for xⱼ, (4.6) can be written in the form

    x = Σᵢ₌₁³ eᵢ(eᵢ · x).                                      (4.8)
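
A short numerical sketch of the expansion (4.8) (added for illustration, not part of the original text): with an orthonormal basis of ℝ³ obtained from a rotation matrix, the sum of the projections eᵢ(eᵢ · x) reproduces x.

```python
import numpy as np

theta = 0.7
# An orthonormal basis e1, e2, e3 of R^3 (the columns of a rotation matrix).
E = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0,              0,             1]])

x = np.array([1.0, -2.0, 0.5])

# Expansion (4.8): x = sum_i e_i (e_i . x)
x_reconstructed = sum(E[:, i] * (E[:, i] @ x) for i in range(3))
print(np.allclose(x_reconstructed, x))   # True
```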

Example 4.1

Consider the two-dimensional vector V shown in Fig. 4.2, which has a magnitude |V|, makes an angle θ with the x-axis, and makes an angle θ′ with the e₁-axis. From geometrical considerations, express V in terms of the unit vectors x̂ and ŷ and then in terms of e₁ and e₂. Show that the second formula agrees with (4.8).
Solution: The x and y-components of V are clearly |V| cos θ and |V| sin θ, respectively. Therefore,

    V = |V| cos θ x̂ + |V| sin θ ŷ.

Using similar logic, in terms of the orthonormal basis vectors e₁ and e₂,

    V = |V| cos θ′ e₁ + |V| sin θ′ e₂.

Now from the definition of the dot product,

    e₁ · V = |e₁||V| cos θ′ = |V| cos θ′,

and

    e₂ · V = |e₂||V| cos(90° − θ′) = |V| sin θ′.

Figure 4.2: The two-dimensional vector V shown with respect to the x̂, ŷ basis and the e₁, e₂ basis.

Combining the above three equations,

    V = e₁(e₁ · V) + e₂(e₂ · V),

which, for two-dimensional vectors V, agrees with (4.8).

In ℝ³, the scalar product of the vectors

    x = Σᵢ₌₁³ eᵢxᵢ   and   y = Σⱼ₌₁³ eⱼyⱼ

is calculated as follows:

    x · y = Σᵢ₌₁³ Σⱼ₌₁³ xᵢ eᵢ · eⱼ yⱼ.

Using (4.4),

    x · y = Σᵢ₌₁³ Σⱼ₌₁³ xᵢ δᵢⱼ yⱼ,
    x · y = Σᵢ₌₁³ xᵢyᵢ.                                        (4.9)

The square of the norm (square of the length) of the vector x is given by

    ||x||² = x · x = Σᵢ₌₁³ xᵢxᵢ.                               (4.10)

1.4.2 Discrete Basis Vectors in Infinite-Dimensional, Complex Space

In a linear, scalar-product space over the complex numbers, it is possible to introduce, in analogy to Fig. 4.1, a basis system in a three-dimensional, complex linear space for which the coordinates xᵢ of a vector are in general complex numbers. Without any difficulties this three-dimensional space can be generalized to an N-dimensional space. To go from N dimensions to infinite dimensions is more difficult: the meaning of convergence of infinite sequences must be defined, which means that the topology of the linear space must be defined. This can be done in many different ways, two of which are the Hilbert space and the Schwartz space.
In an N-dimensional (or infinite-dimensional) space the basis vectors are denoted

    eₙ = |n)   or   eₙ = |n⟩,   n = 1, 2, 3, . . . , N (or ∞).    (4.11)

These vectors are again chosen to be orthonormal,

    (eᵢ, eⱼ) = (i|j) = δᵢⱼ.                                    (4.12)

In an N-dimensional (or infinite-dimensional), complex, linear, scalar-product space Φ, there exists an orthonormal basis system. That is, every vector φ ∈ Φ can be expressed as

    φ = Σₙ₌₁^{N or ∞} |eₙ)cₙ = Σₙ₌₁^{N or ∞} |eₙ)(eₙ|φ),        (4.13)

where the coordinates or components cₙ = (eₙ|φ) are complex numbers. To make the expansions (4.13) mathematically rigorous, theorems are required that are not proved here. Instead the expansions are simply constructed in analogy to (4.8).
In analogy with the vectors eᵢ in the three-dimensional space ℝ³ that satisfy (4.5), it is often convenient to choose the basis vectors |eₙ) to be eigenvectors of a self-adjoint operator Â = Â† that is of particular physical significance. The eigenvectors in the N-dimensional (or infinite-dimensional), linear, scalar-product space are solutions of the eigenvalue equation

    Â|eₙ) = aₙ|eₙ),   |eₙ) ∈ Φ,   n = 1, 2, 3, . . . .          (4.14)

In Example 3.3 it was established that the eigenvalues aᵢ of a self-adjoint operator Â are real and that two eigenvectors |eᵢ) and |eⱼ) with different eigenvalues aᵢ ≠ aⱼ are orthogonal. Thus it is possible to normalize the eigenvectors in (4.14) such that (4.12) is fulfilled. (If |eᵢ) is not normalized and || |eᵢ)|| ≠ 0, the new vector |e′ᵢ) = |eᵢ)/|| |eᵢ)|| is a normalized eigenvector with the same eigenvalue.)
If |eₙ) is a normalized eigenvector with eigenvalue aₙ, then

    |e′ₙ) = e^{iα}|eₙ),   α ∈ ℝ,                               (4.15)

is also a normalized eigenvector with the same eigenvalue aₙ (see Problem ?), so the solutions of (4.14) are only determined up to a phase factor e^{iα}.
Since only the combination |eₙ)(eₙ| appears in (4.13) and

    |e′ₙ)(e′ₙ| = e^{iα}|eₙ)(eₙ|e^{−iα} = |eₙ)(eₙ|,              (4.16)

the choice of phase in (4.15) is of no consequence. The important mathematical quantities are the projection operators Λₙ = |eₙ)(eₙ| that project onto orthogonal subspaces,

    ΛₙΛₘ = δₙₘΛₙ,                                              (4.17)

and are independent of the phase. The projection operators have the property that ΛₙV = |eₙ)(eₙ|V) = vₙ|eₙ), implying that the projection operator Λₙ projects out the component vₙ of the vector V along the eₙ axis, as shown in Fig. 4.3.

Figure 4.3: The two-dimensional vector V with components v₁ and v₂ along the respective basis vectors e₁ and e₂.
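
The projection property ΛₙV = vₙ|eₙ) and the relation (4.17) can be illustrated with column vectors; the short sketch below is an added illustration, not part of the original text:

```python
import numpy as np

# Orthonormal basis |e_1), |e_2) of a 2-dimensional complex space.
e1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])

# Projection operators Lambda_n = |e_n)(e_n| built as outer products.
L1 = np.outer(e1, e1.conj())
L2 = np.outer(e2, e2.conj())

print(np.allclose(L1 @ L1, L1), np.allclose(L1 @ L2, 0))   # (4.17): True True

V = np.array([3.0 + 1j, -2.0])
print(L1 @ V)   # [3.+1.j 0.+0.j] -- the component of V along e_1, i.e. v_1 |e_1)
```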
Two cases are now distinguished:

1. There is only one normalized eigenvector (up to a phase) for each eigenvalue of the chosen operator Â. The eigenvalues are said to be nondegenerate, and the projection operator Λₙ projects onto a one-dimensional subspace.

2. There is more than one normalized eigenvector (up to a phase) for at least one eigenvalue of the operator Â. The eigenvalues are said to be degenerate.

A discussion of Case 2 will be postponed until it is needed to describe physical problems discussed in later sections.

Example 4.2

Show that the functions

    f₁(x, y) = x e^{−x²/2} e^{−y²/2}   and   f₂(x, y) = y e^{−x²/2} e^{−y²/2}

both satisfy

    Âfᵢ(x, y) = a fᵢ(x, y),   i = 1, 2,

where the constant a is the eigenvalue of the operator

    Â = d²/dx² − x² + d²/dy² − y².

The coordinates x and y range from −∞ to ∞.
Solution: Allowing the differential operator Â to act on f₁(x, y) and f₂(x, y),

    Âf₁(x, y) = −4f₁(x, y),   Âf₂(x, y) = −4f₂(x, y).

If the functions f₁(x, y) and f₂(x, y) are labeled only by their eigenvalue a,

    f_{a=−4}(x, y) = f₁(x, y)   and   f_{a=−4}(x, y) = f₂(x, y).

Since there are two different eigenfunctions f_{a=−4}(x, y), the eigenfunctions of Â are not uniquely specified by the eigenvalue a.
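
The two eigenvalue equations can be checked symbolically; the sketch below (added here for illustration, not part of the original text) applies the differential operator of this example with sympy:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

def A(f):
    # A = d^2/dx^2 - x^2 + d^2/dy^2 - y^2 acting on f(x, y)
    return sp.diff(f, x, 2) - x**2 * f + sp.diff(f, y, 2) - y**2 * f

f1 = x * sp.exp(-x**2 / 2) * sp.exp(-y**2 / 2)
f2 = y * sp.exp(-x**2 / 2) * sp.exp(-y**2 / 2)

for f in (f1, f2):
    print(sp.simplify(A(f) / f))   # both print -4, the common (degenerate) eigenvalue
```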

For Case 1 the eigenvector |eₙ) in (4.11) is uniquely determined (up to a phase) by its eigenvalue aₙ. Therefore, the vector |eₙ) can be labeled by the value aₙ,

    |eₙ) ≡ |aₙ),   n = 1, 2, 3, . . . , N.                      (4.18)

The eigenvalue equation (4.14) then becomes

    Â|aₙ) = aₙ|aₙ),   |aₙ) ∈ Φ,   n = 1, 2, 3, . . . ,           (4.19a)
    (aₙ|aₘ) = δₙₘ,                                              (4.19b)

and the basis vector expansion (4.13) becomes the eigenvector expansion

    φ = Σₙ₌₁^{N or ∞} |aₙ)cₙ = Σₙ₌₁^{N or ∞} |aₙ)(aₙ|φ).         (4.20)

A complete system of eigenvectors possesses the property that every vector φ ∈ Φ can be expanded in terms of the |aₙ) according to the eigenvector expansion (4.20). Is it possible to find a complete system of eigenvectors |aₙ) for every self-adjoint operator Â in a linear, scalar-product space Φ? When Φ is finite-dimensional, the answer is yes, and when Φ is infinite-dimensional, as is the case for the Hilbert space, the answer is no.

1.4.3 Continuous Basis Systems of a Linear Space

Some properties of physical systems can be described by operators with a complete set of discrete eigenvectors |aₙ), n = 1, 2, 3, . . . . The eigenvalues aₙ represent the values observed in experiments on quantum physical systems. For example, if the operator Â is the energy operator of the hydrogen atom, the

1.4.3 Continuous Basis Systems of a Linear Space


Some properties of physical systems can be described by operators with a com-
plete set of discrete eigenvectors |an ), n = 1, 2, 3, . . . . The eigenvalues an rep-
resent the values observed in experiments on quantum physical systems. For
example, if the operator A is the energy operator of the hydrogen atom, the

eigenvalues aₙ are the discrete energy values Eₙ = −2πRℏc/n², n = 1, 2, 3, . . . , where R is the Rydberg constant.
On the other hand it is clearly impossible to describe all physical systems
with operators that have a discrete spectrum. In addition to the discrete eigen-
values En corresponding to the electron-proton bound states of the hydrogen
atom, an electron interacting with a proton has continuous values of energy cor-
responding to the case where there is no binding and the electron is scattered
by the proton. Also, an operator with a continuous eigenvalue spectrum is re-
quired to describe the momentum p of electrons in the cathode rays that can
have any of a continuous set of values depending on the accelerating potential.
The position x must similarly be described by an operator with a continuous
eigenvalue spectrum.
Thus in addition to the set of discrete eigenvectors Ĥ|Eₙ) = Eₙ|Eₙ), eigenvectors of the energy Ĥ, momentum P̂ and position Q̂ are required that have continuous eigenvalues:

    Ĥ|E⟩ = E|E⟩,   0 ≤ E < ∞,                                  (4.21a)
    P̂|p⟩ = p|p⟩,   −∞ < p < +∞,                                (4.21b)
    Q̂|x⟩ = x|x⟩,   −∞ < x < +∞   or   M ≤ x ≤ N.               (4.21c)

These are the eigenkets first introduced by Dirac; they are called generalized eigenvectors and are denoted by the symbol | ⟩ to indicate that the spectrum of the eigenvalue is continuous.
Since the eigenvalue x is continuous, it is not possible to choose generalized eigenvectors such that their (generalized) scalar product is a Kronecker δ, because a Kronecker δ only involves discrete indices. In spite of this mathematical complication, it is possible to discuss generalized eigenvectors of operators with a continuous spectrum in analogy to the discrete case. The starting point is the eigenvector expansion, which Dirac postulated as the continuous analogue of (4.20):
Any vector φ ∈ Φ can be expanded in terms of the generalized eigenvectors |x⟩ of Q̂ according to

    φ = ∫ dx |x⟩⟨x|φ) ≡ ∫ dx |x⟩ φ(x),   φ(x) ≡ ⟨x|φ),           (4.22)

where the integral extends over the continuous set of eigenvalues M ≤ x ≤ N and M, N can be ∓∞. To make the transition from (4.20) to (4.22), the sum over the discrete variable n is replaced by a continuous sum (integral) over the continuous variable x, and the eigenvectors |n) are replaced by the generalized eigenvectors |x⟩. Equation (4.22) is called the generalized basis system expansion and is justified mathematically by the Nuclear Spectral Theorem. (After Dirac introduced (4.22), approximately 30 years elapsed until Distribution Theory (L. Schwartz (1950-1951)) and the Rigged Hilbert Space (Gelfand et al., K. Maurin (1955-59)) provided the mathematics for its justification.)
The coordinates or components φ(x) ≡ ⟨x|φ) = (|x⟩, φ) of the vector φ with respect to the basis system |x⟩ are the (generalized) scalar product

between the vector and the generalized eigenvector |x > and are complex
numbers just as the coordinates (an |) cn are complex numbers in the dis-
crete case (4.20). Here, however, the coordinates are functions of the continuous
variable x whereas in (4.20) they are continuous, well-behaved (infinitely dif-
ferentiable, rapidly decreasing) functions of of the discrete variable n or, more
precisely, of an .
The progression from (4.8) to (4.22) is a more convincing justification than
the proof of the Nuclear Spectral Theorem, the creation of the mathematics un-
derlying Diracs eigenvector expansion (4.22) had to be possible because Diracs
formalism is so beautifully simple.
For the discrete case, taking the scalar product of φ as given in (4.20) with the vector |aₘ) gives

    (|aₘ), φ) ≡ (aₘ|φ) = Σₙ (aₘ|aₙ)(aₙ|φ),                       (4.23)

which implies (aₘ|aₙ) = δₘₙ ≡ δ_{aₘaₙ}.
In analogy to (4.23), the (generalized) scalar product of φ as given in (4.22) is taken with the generalized eigenvector |x′⟩ to yield

    (|x′⟩, φ) ≡ ⟨x′|φ) = ∫ dx ⟨x′|x⟩⟨x|φ⟩.                        (4.24)

The quantity ⟨x′|x⟩ has been written as a (generalized) scalar product of the generalized basis vectors |x⟩ and |x′⟩. In analogy to (aₘ|aₙ) being equal to the Kronecker delta, (aₘ|aₙ) = δ_{aₘaₙ}, the generalized scalar product ⟨x′|x⟩ is written

    ⟨x′|x⟩ = δ(x′ − x),                                          (4.25)

where δ(x′ − x) is called the Dirac-δ functional.
The Dirac-δ, δ(x′ − x), is not a (locally integrable) function. Instead it is a new mathematical quantity called a functional that is defined by (4.24). Defining the function φ(x) ≡ ⟨x|φ⟩, (4.24) becomes

    φ(x′) = ∫ dx δ(x′ − x) φ(x).                                 (4.26)

When the product of a Dirac- and a well-behaved function are integrated


over, the Dirac- is the the mathematical object that maps (x), < x <
+, by integration into (x0 ), the value of the function at the position x0 .
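
Equation (4.26) can be illustrated numerically by replacing the Dirac-δ with a narrow normalized Gaussian δ_ε(x′ − x); as ε → 0 the integral approaches φ(x′). The sketch below is an added illustration, not part of the original text:

```python
import numpy as np

phi = lambda x: np.exp(-x**2) * np.cos(x)            # a well-behaved test function
x_prime = 0.7
x, dx = np.linspace(-10, 10, 200001, retstep=True)   # fine integration grid

for eps in (0.5, 0.1, 0.01):
    # Normalized Gaussian of width eps, approximating delta(x' - x) as eps -> 0.
    delta_eps = np.exp(-(x_prime - x)**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))
    print(eps, np.sum(delta_eps * phi(x)) * dx)

print("phi(x') =", phi(x_prime))                      # the integrals approach this value
```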
The generalized eigenvectors |x⟩ are not vectors in Φ because the (generalized) scalar product of two generalized eigenvectors is a functional, not a function. The generalization ⟨x′|φ) is not an ordinary scalar product of a vector φ with a vector |x′⟩. It can, however, be made mathematically precise as an antilinear functional: Fₓ(φ) = ⟨φ|x⟩ is an antilinear functional of the vectors φ ∈ Φ, and ⟨x|φ⟩ = ⟨φ|x⟩*. But for the calculations here, generalized eigenvectors can be treated as if they were proper eigenvectors and ⟨φ|x⟩ can be treated as a scalar product, provided discrete sums over eigenvalues are replaced by integrals. The distinction between eigenvectors |aₙ) and generalized eigenvectors |x⟩ has been made here in order to emphasize the precise mathematical nature of these eigenkets.
The generalized basis vector expansion (4.22) does not hold for all vectors for which the discrete basis vector expansion is correct but only for a subset of them. This has its counterpart in the fact that (4.26) does not hold for all (Lebesgue square integrable) functions but only for well-behaved φ(x). (An example of such functions are those that are continuous, infinitely differentiable and have derivatives of any order that decrease faster than any inverse power of x.) Although the above discussion has focused on the eigenket |x⟩, where x is usually used to represent position, the statements apply equally well to the eigenkets |E⟩ and |p⟩ in (4.21a) and (4.21b), respectively.
The components φ(x) = ⟨x|φ⟩, φ(p) = ⟨p|φ⟩, and φ(E) = ⟨E|φ⟩, which are also called wave functions, must satisfy certain conditions in quantum mechanics that will be discussed later.

1.4.4 Working with Eigenvectors and Basis Vector Expansions

The eigenvector expansion

    φ = Σₙ₌₁^∞ |aₙ)(aₙ|φ)                                       (4.27)

associates with every vector φ ∈ Φ an infinite sequence of numbers {(aₙ|φ), n = 1, 2, 3, . . . }. The coordinates cₙ = (aₙ|φ) are, in general, complex numbers and are a function of the discrete variable n. The vector φ = 0 iff all of its coordinates are zero. Equivalently, the vectors φ and ψ are equal if all their coordinates are equal. The set of eigenvectors |aₙ) in (4.27) is thus a complete system of eigenvectors. The set of eigenvalues {aₙ, n = 1, 2, ...} is called the spectrum of the operator Â.
In analogy with (4.10), the square of the norm of a vector φ is given by

    ||φ||² ≡ (φ, φ) = Σₙ₌₁^∞ |cₙ|² = Σₙ₌₁^∞ |(aₙ|φ)|² = Σₙ₌₁^∞ (aₙ|φ)*(aₙ|φ).    (4.28)

If the vector φ has a finite norm (finite length), then

    Σₙ₌₁^∞ |cₙ|² = Σₙ₌₁^∞ |(aₙ|φ)|² < ∞.                         (4.29)

The space of square-summable sequences, which is the space of all vectors φ with components that fulfill (4.29), is the Hilbert space H.
Using the fact that the identity operator 1̂ satisfies φ = 1̂φ and |aₙ) = 1̂|aₙ), it is possible to omit the vector φ from both sides of (4.27), because the equation is true for any φ, and write it as an equation for operators,

    1̂ = Σₙ₌₁^∞ |aₙ)(aₙ|.                                         (4.30)

Equation (4.30) is called the completeness relation for the basis system {|aₙ)} or the spectral resolution of the identity operator. Using (4.30), the scalar product of two vectors ψ, φ ∈ Φ can be expressed as an infinite sum,

    (ψ, φ) = (ψ, 1̂φ) = Σₙ₌₁^∞ (ψ, |aₙ)(aₙ|φ)) = Σₙ₌₁^∞ (ψ|aₙ)(aₙ|φ) = Σₙ₌₁^∞ (aₙ|ψ)*(aₙ|φ).    (4.31)

Eq. (4.31) is the analogue of (4.9) in ℝ³. Here, since the space is complex, the scalar product is the sum of the products of the components of one vector with the complex conjugate of the components of the other.
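
For a finite-dimensional illustration of (4.31) (an added sketch, not part of the original text), the scalar product computed directly agrees with the sum over components taken in any orthonormal basis:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two vectors in a 5-dimensional complex space.
psi = rng.normal(size=5) + 1j * rng.normal(size=5)
phi = rng.normal(size=5) + 1j * rng.normal(size=5)

# An orthonormal basis {|a_n)}: the columns of the unitary factor of a QR decomposition.
Q, _ = np.linalg.qr(rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5)))

direct = np.vdot(psi, phi)                            # (psi, phi)
components = sum(np.vdot(Q[:, n], psi).conjugate() * np.vdot(Q[:, n], phi)
                 for n in range(5))                   # sum_n (a_n|psi)* (a_n|phi)
print(np.allclose(direct, components))                # True
```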
Eq. (4.31) makes sense only if the sum converges. It is possible to show that if (4.29) is fulfilled for all vectors, in particular for the two vectors ψ and φ in (4.31), then

    |(ψ, φ)| = |Σₙ₌₁^∞ (aₙ|ψ)*(aₙ|φ)| < ∞.                        (4.32)

All vectors ψ, φ for which (4.32) is fulfilled have finite scalar products with each other and form the space H.
The linear operators Â, B̂, Ĉ, . . . of (3.1b) and (3.2b) act in the space H and can be added and multiplied according to (3.3b)-(3.5b), forming an associative algebra. As will be justified in later chapters, the vectors φ, ψ, . . . ∈ H and the operators Â, B̂, . . . , |ψ)(ψ|, |φ)(φ|, . . . represent quantum physical states and observables, respectively. Quantities such as |(φ, ψ)|², |(φ, Âφ)|, |(ψ, Âφ)|, and |(ψ, ÂB̂φ)| are Born probabilities that describe the quantum physical quantities that are extracted from experiments. Since such quantities must be finite, it is necessary to require not only that (4.29) be finite, but also that (φ, Âʳφ), (φ, B̂ˢφ), (φ, ÂB̂ʳφ), . . . , r, s = 1, 2, 3, . . . , have finite absolute values.
A space Φ that is "better" than a Hilbert space H is required, because all operators Â, B̂, . . . that represent observables for the quantum physical system under consideration, and any arbitrary power r of the operators Â, B̂, . . . , must be well-defined in the space Φ. Thus the vector Âʳφ must also have a finite norm. To determine the restriction this imposes, the square of the norm of Âʳφ, B̂ˢφ is calculated.
Expressing the vector φ in terms of the eigenvectors |aₙ) of the operator Â as given in (4.27) and then applying the operator Â,

    Âφ = Â Σₙ₌₁^∞ |aₙ)(aₙ|φ) = Σₙ₌₁^∞ aₙ|aₙ)(aₙ|φ).               (4.33)

Since (4.33) is true for any φ ∈ Φ, φ can be omitted in (4.33) just as was done in arriving at (4.30). Then (4.33) becomes the operator relation

    Â = Σₙ₌₁^∞ aₙ|aₙ)(aₙ| = Σₙ₌₁^∞ aₙΛₙ.                          (4.34)

Thus a linear operator is the sum of projection operators Λₙ = |aₙ)(aₙ| multiplied by the respective eigenvalues aₙ, which are real if Â = Â†. Eq. (4.34) is called the spectral resolution of the operator Â.
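
The spectral resolution (4.34) is easy to verify in a finite-dimensional space; the sketch below (added, not part of the original text) rebuilds a Hermitian matrix from its eigenvalues and projection operators:

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = M + M.conj().T                        # a Hermitian operator A = A^dagger

a, U = np.linalg.eigh(A)                  # eigenvalues a_n and eigenvectors |a_n)

# Spectral resolution (4.34): A = sum_n a_n |a_n)(a_n|
A_rebuilt = sum(a[n] * np.outer(U[:, n], U[:, n].conj()) for n in range(4))
print(np.allclose(A_rebuilt, A))          # True

# Completeness relation (4.30): 1 = sum_n |a_n)(a_n|
identity = sum(np.outer(U[:, n], U[:, n].conj()) for n in range(4))
print(np.allclose(identity, np.eye(4)))   # True
```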
An operator B̂ that does not commute with Â (i.e. for which B̂Â − ÂB̂ ≠ 0) cannot be written in terms of the eigenvectors |aₙ) of the operator Â in the form (4.34), because B̂|aₙ) ≠ bₙ|aₙ). However, if B̂ is self-adjoint and has a discrete spectrum, then it can be expressed in terms of its eigenvectors |bₙ) that satisfy B̂|bₙ) = bₙ|bₙ). Results analogous to (4.30) and (4.34) are then immediately obtained:

    1̂ = Σₙ₌₁^∞ |bₙ)(bₙ|,   B̂ = Σₙ₌₁^∞ bₙ|bₙ)(bₙ|.                 (4.35)

Using the eigenvectors of B̂ as a basis system, every φ ∈ Φ can as well be written as

    φ = Σₙ₌₁^∞ |bₙ)(bₙ|φ),                                        (4.36)

which, of course, is just (4.27) in a different basis system {|bₙ), n = 1, 2, ...}. In general the |bᵢ) and |aᵢ) are completely different vectors. Replacing φ by |aᵢ) in (4.36), each basis vector |aᵢ) can be expressed as an infinite sum of basis vectors |bₙ),

    |aᵢ) = Σₙ₌₁^∞ |bₙ)(bₙ|aᵢ).                                    (4.37)
Using (4.33) it is possible to calculate Âʳφ and B̂ˢφ for any φ ∈ Φ and for r, s = 1, 2, 3, . . . . Using the fact that Â and, as a consequence, Âʳ is self-adjoint,

    ||Âʳφ||² = (Âʳφ, Âʳφ) = (φ, Â^{2r}φ).                         (4.38)

Expressing φ as an infinite sum of eigenstates of the operator Â as given in (4.27),

    ||Âʳφ||² = Σₙ₌₁^∞ (φ, Â^{2r}|aₙ))(aₙ|φ) = Σₙ₌₁^∞ aₙ^{2r} |(aₙ|φ)|².    (4.39)

Thus the vector Âʳφ is defined if the sum converges,

    Σₙ₌₁^∞ aₙ^{2r} |(aₙ|φ)|² < ∞   for every r = 1, 2, . . . .     (4.40)

The norm of the vector B̂ˢφ is calculated similarly.
From the preceding discussion it follows that the space of vectors on which all powers of operators Âʳ, B̂ˢ, . . . are defined is the space of vectors with components |(aₙ|φ)|, |(bₙ|φ)| that are rapidly decreasing. These are the vectors φ of the space H for which not only (4.29) holds, but also for which the much stronger condition (4.40) is fulfilled for all operators Â, B̂, . . . , thereby ensuring that matrix elements of the form (ψ, ÂʳB̂ˢφ) are also defined.
This smaller space

    Φ ⊂ H                                                        (4.41)

is by hypothesis the space of states and observables of quantum physical systems. For these spaces, when mathematically well defined, it is possible to prove Dirac's continuous basis vector expansion (4.22) as the Nuclear Spectral Theorem. The basis kets |a⟩, |x⟩, . . . are not vectors in Φ, but are instead continuous, antilinear functionals on the space Φ; the space of these functionals is usually denoted Φˣ. Since the space of continuous, antilinear functionals Hˣ on the Hilbert space is again a Hilbert space, Hˣ = H, from (4.41) the triplet (Gelfand triplet or Rigged Hilbert Space) is obtained,

    Φ ⊂ H ⊂ Φˣ,                                                  (4.42)

with the well-behaved vectors φ, ψ ∈ Φ and the Dirac kets |x⟩, |p⟩, |E⟩, . . . ∈ Φˣ.
For practical calculations in physics, the underlying mathematics is not so important. But it is important to know that these mathematical objects are rigorously defined and to know their limitations and properties, the most important of which is the Nuclear Spectral Theorem or Dirac's basis vector expansion (4.22).
The continuous analogue of (4.34), the spectral resolution of the operator Q̂, can be obtained by first operating on both sides of (4.22) with Q̂,

    Q̂φ = ∫ dx Q̂|x⟩⟨x|φ⟩ ≡ ∫ dx x|x⟩⟨x|φ⟩,

and then omitting the arbitrary vector φ ∈ Φ:

    Q̂ = ∫ dx x|x⟩⟨x|.                                            (4.43)

Because (4.22) is true for any φ ∈ Φ, it is possible to omit φ from the equation,

    1̂ = ∫ dx |x⟩⟨x|.                                             (4.44)

Eq. (4.44) is the continuous analogue of the discrete relation (4.30) and is called the completeness relation of the generalized basis system {|x⟩}.
The scalar product of two elements ψ, φ ∈ Φ is then obtained from (4.44),

    (ψ, φ) ≡ (ψ|φ) = (ψ|1̂φ) = ∫ dx (ψ|x⟩⟨x|φ).                    (4.45)

In (4.45),

    (ψ|x⟩ ≡ (ψ, |x⟩) = (|x⟩, ψ)* ≡ ⟨x|ψ)*                         (4.46)

is the (generalized) scalar product of ψ ∈ Φ with the generalized basis vector |x⟩. Using the standard notation ψ(x) = ⟨x|ψ) and ψ*(x) = (ψ|x⟩, (4.45) can be rewritten in the form

    (ψ, φ) = ∫ dx ψ*(x)φ(x),                                      (4.47)

the familiar form of the scalar product in function spaces (2.12).


Just as there are conditions on the components cₙ = (aₙ|φ) for the discrete case, there are also corresponding conditions on the components φ(x) = ⟨x|φ) for the continuous case. From (4.45), it immediately follows that the square of the norm of φ is given by

    ||φ||² = (φ, φ) = ∫ dx (φ|x⟩⟨x|φ) = ∫ dx |φ(x)|².              (4.48)

The components φ(x) must therefore be square-integrable functions for the norm to be finite. Furthermore, if the operator Q̂ and an arbitrary power r of the operator Q̂ are to be well-defined in the space Φ, the vector Q̂ʳφ must also have a finite norm. Performing a calculation analogous to (4.39),

    ||Q̂ʳφ||² = (Q̂ʳφ, Q̂ʳφ) = ∫ dx x^{2r} |φ(x)|².                   (4.49)

If the norm ||Q̂ʳφ|| is to be finite, from (4.49) it follows that |φ(x)|² must decrease faster than any power of x.
Using (4.21c), the matrix elements of the self-adjoint operator Q̂ with eigenkets |x⟩ are

    ⟨x|Q̂|ψ⟩ = (Q̂|x⟩, |ψ⟩) = (x|x⟩, |ψ⟩) = x⟨x|ψ⟩   for all ψ ∈ Φ.    (4.50)

An operator P̂ is now sought with matrix elements ⟨x|P̂|ψ⟩ between the continuous basis vector |x⟩ and any ψ ∈ Φ that are given by

    ⟨x|P̂|ψ⟩ = (1/i)(d/dx)⟨x|ψ⟩ = (1/i) dψ(x)/dx.                   (4.51)

Since an arbitrary power Q̂ʳP̂ˢ (or P̂ˢQ̂ʳ) is to be a well-defined operator, their matrix elements must be finite. Thus the components ⟨x|ψ⟩ and ⟨x|φ⟩ must fulfill the condition that

    (φ|Q̂ʳP̂ˢ|ψ) = ∫ dx (φ|Q̂ʳ|x⟩⟨x|P̂ˢ|ψ⟩                             (4.52)

exists. Using (4.50) and (4.51),

    |(φ|Q̂ʳP̂ˢ|ψ)| = |∫ dx φ*(x) xʳ (1/iˢ) dˢψ(x)/dxˢ| < ∞           (4.53)

for all r = 1, 2, ..., s = 1, 2, ..., and all φ, ψ ∈ Φ.
Eq. (4.53) reveals that the components in the |x⟩-basis, called the position wave functions φ(x) and ψ(x), must fulfill the following condition: the products of the position wave functions and all of their derivatives must decrease faster than any power of x. The infinitely differentiable, rapidly decreasing, smooth functions that fulfill these conditions are called Schwartz-space functions, and the space of these functions is called the Schwartz space.
It is possible to calculate the commutator of the operators Q̂ and P̂ defined by (4.50) and (4.51) in the Schwartz space:

    ⟨x|[P̂, Q̂]|ψ⟩ = ⟨x|P̂Q̂ − Q̂P̂|ψ⟩
                 = (1/i)(d/dx)⟨x|Q̂|ψ⟩ − x(1/i)(d/dx)⟨x|ψ⟩
                 = (1/i)⟨x|ψ⟩ = (1/i)⟨x|1̂|ψ⟩.                      (4.54)

The above formula is valid for every function ⟨x|ψ⟩ in the Schwartz space, which means it is valid for every |x⟩ and for every vector ψ ∈ Φ. Thus it is true as an operator equation in Φ,

    [P̂, Q̂] = (1/i) 1̂.                                             (4.55)
The correspondence in the following table,

    Space Φ        Schwartz function space of ⟨x|ψ⟩
    ψ ∈ Φ          corresponds to ψ(x) = ⟨x|ψ⟩
    Q̂              corresponds to the operator that multiplies by x
    P̂              corresponds to the differentiation operator (1/i) d/dx

is called a realization of the space Φ.
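
In this realization the commutator (4.55) can be checked symbolically; the sketch below (added here, not part of the original text) applies P̂Q̂ − Q̂P̂ to an arbitrary Schwartz-type function with sympy:

```python
import sympy as sp

x = sp.symbols('x', real=True)
psi = sp.Function('psi')(x)            # an arbitrary (Schwartz-space) wave function

P = lambda f: sp.diff(f, x) / sp.I     # P corresponds to (1/i) d/dx
Q = lambda f: x * f                    # Q corresponds to multiplication by x

commutator = P(Q(psi)) - Q(P(psi))     # [P, Q] acting on psi(x)
print(sp.simplify(commutator))         # -I*psi(x), i.e. (1/i) psi(x), as in (4.55)
```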


It is important to emphasize that no proofs have been given in the above discussion. Formalism has been presented based on (4.27) and (4.43), which have been written in analogy to the basis vector expansion in ℝ³. But the
statements (4.27) and (4.43), which are special cases of the Nuclear Spectral
Theorem, are far from trivial and require proofs. In fact the Nuclear Spectral
Theorem is one of the more important mathematical theorems, with much of
this section being a consequence of it. But long before (4.43) was proved or
even precisely formulated in terms of well-defined mathematical quantities, it
was used successfully by Dirac in his formulation of quantum mechanics.

1.5 Realizations by Matrices and Functions


Realizations of linear scalar-product spaces Φ and linear operators are now briefly discussed. To illustrate what is meant by a realization it is convenient to return to the three-dimensional space ℝ³. There a vector x can be described by giving its magnitude and direction. Alternatively, it can be specified by its components xᵢ = eᵢ · x. Thus there are two alternate but equivalent descriptions, which are written symbolically as

    x  ↔  xᵢ = ( x₁ )
               ( x₂ )
               ( x₃ )                                             (5.1)

The coordinates or components, of course, depend on the chosen basis system.
In the same way, instead of using the vector φ, it is possible to use the components (n|φ) with respect to a discrete basis, which can be written as a column matrix,

    φ  ↔  (n|φ) = ( (1|φ) )
                  ( (2|φ) )
                  ( (3|φ) )
                  (  ...  )                                       (5.2)

Just as a vector in ℝ³ can be expressed in terms of coordinates or components with respect to different basis systems, it is possible to express φ in terms of components with respect to a different basis. Instead of the basis |n), which are, say, eigenstates of the energy operator Ĥ, it is possible to use as a basis the eigenvectors |aₙ) of the operator Â satisfying Â|aₙ) = aₙ|aₙ). In terms of the |aₙ), the same vector φ is given by an entirely different column matrix,

    φ  ↔  (aₙ|φ) = ( (a₁|φ) )
                   ( (a₂|φ) )
                   ( (a₃|φ) )
                   (  ...   )                                     (5.3)

The column matrices (n|φ) and (aₙ|φ) are related by an infinite-dimensional transformation matrix, which is usually so complicated that it is of no practical value. (See problem 13.) If the infinite column matrix appears more real to someone than the abstract vector φ, then that person will speak of a realization of φ by a column matrix and a realization of the space Φ by the space of column matrices.

When vectors are realized by column matrices, operators are realized by quadratic matrices. To illustrate this concept, the action of an arbitrary operator B̂ on the vector φ is calculated using the expansion (4.13),

    B̂φ = Σₙ₌₁^∞ B̂|n)(n|φ).                                        (5.4)

Taking the scalar product of the above equation with the basis vector |m) yields

    (m|B̂φ) = Σₙ₌₁^∞ (m|B̂|n)(n|φ).                                 (5.5)

The above relation can be written in matrix notation as follows:

    ( (1|B̂φ) )   ( (1|B̂|1)  (1|B̂|2)  (1|B̂|3)  ... ) ( (1|φ) )
    ( (2|B̂φ) )   ( (2|B̂|1)  (2|B̂|2)  (2|B̂|3)  ... ) ( (2|φ) )
    ( (3|B̂φ) ) = ( (3|B̂|1)  (3|B̂|2)  (3|B̂|3)  ... ) ( (3|φ) )     (5.6)
    (   ...  )   (   ...      ...      ...       ) (  ...  )

The numbers (m|B̂|n) form an infinite-dimensional quadratic matrix, which is called the matrix of the operator B̂ with respect to the basis |n). An orthonormal basis is almost always used in calculating the matrix of an operator.
For any two vectors φ, ψ ∈ Φ and an operator B̂, the scalar product of B̂ψ with φ, namely (φ|B̂ψ), plays an important role in physics and is called the matrix element of the operator B̂ between the vectors φ and ψ.
The vector φ ∈ Φ can be expanded in terms of a continuous basis instead of a discrete basis. Then instead of the correspondence (5.2),

    φ  ↔  ⟨x|φ).                                                  (5.7)

The column matrix ⟨x|φ) has continuously infinite rows, with one row for each value of x. With the association (5.7), the space Φ is realized by the space of well-behaved functions.
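
The matrix realization (5.5)-(5.6) can be made concrete in a finite-dimensional space; the following sketch (an added illustration, not part of the original text) computes the matrix (m|B̂|n) of an operator in an orthonormal basis and checks that matrix multiplication reproduces the components of B̂φ:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 4

B = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))   # an arbitrary operator B
phi = rng.normal(size=N) + 1j * rng.normal(size=N)           # an arbitrary vector phi

# An orthonormal basis |n): the columns of a unitary matrix U.
U, _ = np.linalg.qr(rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))

B_matrix = U.conj().T @ B @ U          # matrix elements (m|B|n)
phi_components = U.conj().T @ phi      # column matrix of components (n|phi)

# Eq. (5.5): (m|B phi) = sum_n (m|B|n)(n|phi)
lhs = U.conj().T @ (B @ phi)
rhs = B_matrix @ phi_components
print(np.allclose(lhs, rhs))           # True
```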

1.6 Summary
Quantum mechanics can be expressed in terms of differential operators using the Schrödinger equation. Equivalently, it can be expressed in terms of matrices using matrix mechanics. These two formulations of quantum mechanics are not distinct theories but are merely two different representations of quantum mechanics obtained from the general formulation by taking matrix elements with respect to different basis systems. In its most general form, quantum mechanics is formulated in terms of linear operators on a linear, scalar-product space.
A linear space Φ possesses the following ten properties, where φ, ψ, and χ are elements or vectors in Φ; 0 is the null element; and a, b are complex numbers:

1. φ + ψ = ψ + φ
2. Addition is associative: (φ + ψ) + χ = φ + (ψ + χ)
3. The null element satisfies 0 + φ = φ
4. b(φ) = bφ ∈ Φ
5. a(bφ) = (ab)φ
6. 1φ = φ
7. 0φ = 0
8. b(φ + ψ) = bφ + bψ
9. (a + b)φ = aφ + bφ
10. (−1)φ = −φ

A linear, scalar-product space possesses the ten properties that characterize a linear space plus the following four properties that define a scalar product (φ, ψ) on a linear space:

1. (φ, φ) ≥ 0, and (φ, φ) = 0 iff φ = 0
2. (φ, ψ) = (ψ, φ)*
3. a(φ, ψ) = (φ, aψ)
4. (φ + ψ, χ) = (φ, χ) + (ψ, χ)

Linear operators Â and B̂ on a linear, scalar-product space possess the following nine properties:

1. Âφ ≡ Â(φ)
2. Â(φ + ψ) = Âφ + Âψ
3. Â(aφ) = a(Âφ)
4. (Â + B̂)φ = Âφ + B̂φ
5. (aÂ)φ = a(Âφ)
6. (ÂB̂)φ = Â(B̂φ)
7. The zero operator 0̂ satisfies 0̂φ = 0.
8. The identity operator 1̂ satisfies 1̂φ = φ.
9. The adjoint of the operator Â, denoted Â†, is defined by (φ, Âψ) = (Â†φ, ψ).

The basis vector expansion for quantum theory has been motivated by starting with the simplest case and then generalizing. First a vector in three-dimensional space is expanded in terms of its components,

    x = Σᵢ₌₁³ eᵢxᵢ = Σᵢ₌₁³ eᵢ(eᵢ · x),

where xᵢ is real. Just as vectors in three-dimensional space can be expressed as components along various sets of three linearly independent axes, vectors in a linear, scalar-product space can also be expressed as components along various sets of linearly independent axes or vectors. In a complex, N-dimensional, linear scalar-product space the above equation is generalised to

    φ = Σᵢ₌₁ᴺ |eᵢ)cᵢ = Σᵢ₌₁ᴺ |eᵢ)(eᵢ|φ),

where the cᵢ are complex. Finally, the above equation is generalised to infinite dimensions, N → ∞,

    φ = Σₙ₌₁^∞ |eₙ)cₙ = Σₙ₌₁^∞ |eₙ)(eₙ|φ),

where the cₙ are complex. The set of all vectors with components cₙ that are square summable,

    Σₙ₌₁^∞ |cₙ|² < ∞,

is called the Hilbert space H.
For finite-dimensional spaces, a set of basis vectors {|eᵢ)} can be formed from the eigenvectors of any self-adjoint operator Â:

    |eᵢ) = |aᵢ)   where   Â|aᵢ) = aᵢ|aᵢ).

For infinite-dimensional spaces there are some operators that do not have a discrete set of eigenvectors (a discrete spectrum). Then there exists a continuous set of eigenvectors satisfying

    Â|a⟩ = a|a⟩,   M ≤ a ≤ N.

The vector φ can be expanded in terms of the |a⟩,

    φ = ∫ da |a⟩⟨a|φ),

which is justified mathematically by the Nuclear Spectral Theorem and is the continuous generalization of the discrete eigenvector expansion above.

1.7 Problems
Section 2
1. Consider the two functions g₁(x) and g₂(x) on the interval −∞ < x < ∞,

       g₁(x) = A₁e^{−|x|},   g₂(x) = A₂(a + x²)e^{−|x|},

   where the constants A₁ and A₂ are real.

   (a) Determine the constant a such that g₁(x) and g₂(x) are orthogonal.
   (b) Determine the constants A₁ and A₂ such that g₁(x) and g₂(x) are normalized.

2. Show that if the vectors φ, ψ ∈ Φ are represented by the respective column matrices

       (φ₁, φ₂, φ₃, . . . , φ_N)ᵀ   and   (ψ₁, ψ₂, ψ₃, . . . , ψ_N)ᵀ,

   where the φᵢ and ψᵢ are complex numbers, then the scalar product defined by

       (φ, ψ) = Σᵢ₌₁ᴺ φᵢ*ψᵢ

   satisfies the rules (2.13c) - (2.16c).

Section 3
3. Show that rules (3.1b) - (3.5b) are satisfied if the vectors φ, ψ ∈ Φ are represented by column matrices as given in problem 2, and that an arbitrary linear operator Â is represented by an N × N matrix Aᵢⱼ such that the action of Â on φ is represented by

       (Âφ)ᵢ = Σⱼ₌₁ᴺ Aᵢⱼφⱼ.

4. If the vectors in a scalar-product space are represented by column matrices, and the operator Â is represented by the matrix Aᵢⱼ as given in problems 2 and 3, what is the operator Â† represented by? (Hint: Use a procedure similar to that employed in obtaining (3.6a).)

5. Using (3.5b) and the definition (3.6b) of the adjoint of an operator, show that (ÂB̂)† = B̂†Â†.
6. Let Â and B̂ be Hermitian operators and c be an arbitrary, complex number. Under what conditions is each of the following operators Hermitian?

   (a) cÂ
   (b) cÂ + cB̂
   (c) cÂB̂
   (d) c{Â, B̂} ≡ c(ÂB̂ + B̂Â)
   (e) c[Â, B̂] ≡ c(ÂB̂ − B̂Â)


7. Calculate the four possible matrix elements of the operator (1/i) d/dx between the functions

       f₁(x) = π^{−1/4} e^{−x²/2}   and   f₂(x) = √2 π^{−1/4} x e^{−x²/2},

   where −∞ < x < ∞.
8. On the interval −∞ < x < ∞, consider the operator Â, where

       Â = d²/dx² − x²,

   and the two functions

       ψ₁(x) = e^{−x²/2},   ψ₂(x) = x e^{−x²/2}.

   (a) Show that Â = Â†.
   (b) Show that

           Âψᵢ(x) = aᵢψᵢ(x),   i = 1, 2.

       The constant aᵢ is called the eigenvalue of the operator Â when it acts on ψᵢ(x), and ψᵢ(x) is called an eigenfunction of the operator Â. What are the values a₁ and a₂?
   (c) Using only the facts that Â = Â† and a₁ ≠ a₂, explain why the scalar product of ψ₁(x) and ψ₂(x) must be zero. Explicitly calculate the scalar product and verify that it is indeed zero.

Section 4
9. Let φ ∈ Φ be normalized to unity. If |n) is a basis system of eigenvectors of an observable with a discrete spectrum, show that the components of φ with respect to this basis fulfill the condition

       Σₙ |(n|φ)|² = 1.

10. Let Â be a Hermitian operator and |n) be a discrete basis system. Show that the matrix of Â with respect to this basis system,

        ( (1|Â|1)  (1|Â|2)  ... )
        ( (2|Â|1)  (2|Â|2)  ... )
        (   ...      ...        ),

    is a Hermitian matrix. That is, show that (m|Â|n) = (n|Â|m)*.
11. A matrix T is said to be orthogonal if Tᵗ = T⁻¹. Here Tᵗ is the transposed matrix, (Tᵗ)ₘₙ = Tₙₘ, and T⁻¹ is the inverse matrix defined by

        Σᵣ Tₙᵣ(T⁻¹)ᵣₘ = δₙₘ.

    (a) Show that the matrix

            T = ( cos θ   −sin θ   0 )
                ( sin θ    cos θ   0 )
                (   0        0     1 )

        is orthogonal.
    (b) Calculate the determinant of T.
    (c) Apply the matrix T to the vector

            r = ( x )
                ( y )
                ( z )

        using the rules of matrix multiplication. Using a sketch, show that T r is the vector obtained by rotating the vector r around the z axis by an angle θ.
12. A matrix P is called a projection matrix if

        P² = P.

    (a) Show that the matrix

            P′ = (   sin²θ       sin θ cos θ   0 )
                 ( sin θ cos θ      cos²θ      0 )
                 (     0              0        1 )

        is a projection matrix.
    (b) Apply the projection matrix P′ to the vector r′ = T r obtained in problem 11 and discuss the result. (See problem 11c.)
    (c) If the rotation T is applied to the vector r′ (see problem 11), a simple result is found that suggests defining a new matrix

            P = T⁻¹P′T.

        Calculate P and verify that it is a projection matrix.
    (d) Describe the geometrical meaning of the matrix P. In particular, determine the subspace of the three-dimensional space on which P projects.
    (e) Describe the geometrical meaning of the matrix P′ and determine the subspace upon which it projects.
13. Let two basis systems of the linear, scalar-product space Φ be denoted by |aₙ) and by |b_ν). Show that the components of a vector φ ∈ Φ with respect to the basis system |b_ν), the (b_ν|φ), can be obtained from the components (aₙ|φ) with respect to the other basis system |aₙ) by the matrix transformation

        (b_ν|φ) = Σₙ (b_ν|aₙ)(aₙ|φ),

    where (b_ν|aₙ) is the scalar product of the basis vector |b_ν) with the basis vector |aₙ). To show this, use the fact that every vector |aₙ) can be expanded with respect to the basis system |b_ν).
