A Mathematical Companion to Quantum Mechanics
About this ebook

This original 2019 work, based on the author's many years of teaching at Harvard University, examines mathematical methods of value and importance to advanced undergraduates and graduate students studying quantum mechanics. Its intended audience is students of mathematics at the senior university level and beginning graduate students in mathematics and physics.
Early chapters address such topics as the Fourier transform, the spectral theorem for bounded self-adjoint operators, and unbounded operators and semigroups. Subsequent topics include a discussion of Weyl's theorem on the essential spectrum and some of its applications, the Rayleigh-Ritz method, one-dimensional quantum mechanics, Ruelle's theorem, scattering theory, Huygens' principle, and many other subjects.
Language: English
Release date: March 20, 2019
ISBN: 9780486839820
    Book preview

    A Mathematical Companion to Quantum Mechanics - Shlomo Sternberg


    Chapter 1

    Introduction

    Over the years, I taught Theory of Functions of a Real Variable at Harvard many times. In addition to standard material such as functional analysis, measure, and integration theory, I included elementary mathematics for quantum mechanics. I thought it would be useful to extract this material and gather it together.

    There are many books on this subject. The closest competitor is the excellent book Mathematical Methods in Quantum Mechanics by Teschl [25], but my approach to many topics is sufficiently different to warrant the effort of organizing this book.

    Of course, the multivolume book by Reed and Simon [17] is the classic text on the subject, which goes far beyond anything I cover, and is absolutely necessary for any serious student.

    I don’t deal with the philosophical and mathematical foundations of quantum mechanics for which I refer to Mackey’s classic [16]. This book presents mathematical methods.

    As this book is assembled from many lectures, it is not in linear order. It should be regarded as a potpourri of various topics. Nevertheless, here is a rough outline:

A key ingredient is the spectral theorem for self-adjoint operators. For example, the first proof for possibly unbounded self-adjoint operators occurs in Wintner’s book of 1929 [27], and the subtitle of the book is Introduction to the Analytical Apparatus of Quantum Theory. I reproduce the frontispiece to his book at the end of Chapter 12. I know at least eight different proofs of this theorem. I derive the spectral theorem from the Fourier inversion formula, which says that (for nice functions) f,

$$f(x) = \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} e^{ix\xi}\, \hat{f}(\xi)\, d\xi,$$

where

$$\hat{f}(\xi) := \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} e^{-ix\xi}\, f(x)\, dx$$

denotes the Fourier transform of f. If we replace x by A and write U(t) instead of e^{itA}, this suggests that we define

$$f(A) := \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} \hat{f}(t)\, U(t)\, dt,$$

    which has nice properties, as we check in Chapter 3. The problem with this approach is how to make sense of eitA. For bounded operators A, we can use the power series for the exponential. But quantum mechanics requires unbounded operators for which the power series makes no sense. In Chapter 4, I explain unbounded operators, and define their resolvents and spectra. In Chapter 6, I give the rather subtle definition of a (possibly unbounded) self-adjoint operator. Chapters 5, 7, and 8 are devoted to the study of semi-groups and their generators, culminating with the celebrated Hille-Yosida theorem. We derive Stone’s theorem on the nature of unitary one parameter groups on a Hilbert space as a special case of the Hille-Yosida theorem. From Stone’s theorem we get the spectral theorem in functional calculus form, at least for continuous functions f vanishing at infinity. In this chapter I also present the important Dynkin-Helffer-Sjöstrand formula for the functional calculus. The main points of these first eight chapters are summarized at the end of Chapter 8.
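A quick formal indication of why the definition suggested above behaves like a functional calculus (a sketch only, assuming enough decay to interchange integrals; this is the kind of property verified in Chapter 3) uses the group law U(s)U(t) = U(s + t) together with the companion identity $\widehat{fg} = \hat{f} \star \hat{g}$, the multiplicative counterpart of the convolution formula of Chapter 2:

$$f(A)\,g(A) = \frac{1}{2\pi}\iint \hat{f}(s)\,\hat{g}(t)\,U(s+t)\,ds\,dt = \frac{1}{\sqrt{2\pi}}\int (\hat{f}\star\hat{g})(r)\,U(r)\,dr = \frac{1}{\sqrt{2\pi}}\int \widehat{fg}(r)\,U(r)\,dr = (fg)(A).$$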

The next big ticket item is Weyl’s 1909 theorem [26] on the stability of the essential spectrum. This general theorem is proved in Chapter 9 and worked out in detail for the square well Hamiltonian of elementary quantum mechanics. Chapter 10 involves more discussion of the details of Weyl’s theorem. One important application is to a Schrödinger operator of the form H = H₀ + V, where H₀ is the free Hamiltonian and V is a potential. Weyl’s theorem implies that if V(x) → ∞ as x → ∞, then there is no essential spectrum, so the spectrum of H consists entirely of eigenvalues of finite multiplicity, while if V(x) → 0 as x → ∞, then the essential spectrum of H is the same as that of H₀.

The functional calculus obtained this way is limited to continuous functions vanishing at infinity. We need to extend it to bounded Borel functions. (For those who don’t know what these are, see Section 26.1 below.) In Chapter 11, I use the Riesz representation theorem [19] to get the needed extension.

In Chapter 12, I present Wintner’s proof, which gives the functional calculus directly for bounded Borel functions. I did not do the Wintner approach from the start because it depends on the Borel transform and the Stieltjes inversion formula, which are not as well known (or as standard) as the Fourier transform and its inversion formula.

    Chapter 13 is devoted to the L2 version of the spectral theorem, which says that any self-adjoint operator is unitarily equivalent to multiplication by a real function on a measure space. In contrast to the functional calculus versions, this isomorphism is highly non-canonical, but it allows us to discuss fractional powers of a non-negative self-adjoint operator.

    Chapter 14 is devoted to the celebrated Rayleigh-Ritz approximation method for eigenvalues, with applications to chemical theories.

    Chapters 15 and 16 are devoted to quantum mechanics in one dimension, with some results going back to the work of Sturm circa 1850.

    Chapter 17 does some three-dimensional computations.

Chapter 18 is devoted to Ruelle’s theorem: It is a truism in atomic physics or quantum chemistry courses that the eigenstates of the Schrödinger operator for atomic electrons are the bound states—the ones that remain bound to the nucleus—and that the scattering states, which fly off at large positive or negative times, correspond to the continuous spectrum. Ruelle’s theorem gives a mathematical justification for this truism.

    At the other extreme, Agmon’s theorem says that under appropriate conditions the eigenvectors (which correspond to bound states) die off exponentially at infinity. Chapter 19 gives a watered down version of Agmon’s theorem.

In Chapter 20 I return once more to the spectral theorem, this time giving Lorch’s proof, which makes use of a beautiful complex-variable-style calculus due to Riesz and Dunford.

Chapters 21 to 23 give a smattering of quantum mechanical scattering theory, including a chapter on Huygens’ principle of wave mechanics.

    Chapter 24 gives the Groenewold-van Hove Theorem, which shows that Dirac’s proposal that quantization gives a correspondence between the Poisson bracket of classical mechanics and the commutator bracket of quantum mechanics does not work beyond quadratic polynomials.

    Chapter 25 returns to the subject of semigroups and presents an important theorem of Chernoff and some of its consequences.

    Chapter 26 gives some background material.

    The first two chapters of [21] provide enough background in real variable theory for this book. In fact, I assume much less and provide a lot of the background, including a primer on Hilbert spaces in Chapter 26. I suggest [25] as a good source for background material. Both [21] and [25] are available gratis on the web. I use the uniform boundedness principle (also known as the Banach-Steinhaus Theorem) several times. A good treatment can be found on page 39 of [25] and on page 98 of [21]. I use the Stone-Weierstrass theorem (see [25] pp. 59–61) several times.

    I use the Cauchy integral formula and Cauchy’s theorem (which can be found near the opening of any text on complex variable theory) throughout the book.

    I thank Chen He and Rob Lowry for proofreading the text.

    Chapter 2

    The Fourier Transform

    I will start with standard facts about the Fourier transform from which I will derive the spectral theorem for bounded self-adjoint operators in Chapter 3.

    2.0.1 Conventions

The space S consists of all complex-valued functions on ℝ that are infinitely differentiable and vanish at infinity rapidly with all their derivatives, in the sense that

$$\|f\|_{m,n} := \sup_{x \in \mathbb{R}} \bigl| x^{m} f^{(n)}(x) \bigr| < \infty \qquad \text{for all } m, n \geq 0.$$

The ∥·∥_{m,n} give a family of semi-norms on S making S into a Fréchet space—that is, a vector space whose topology is determined by a countable family of semi-norms.

We use the measure

$$\frac{dx}{\sqrt{2\pi}}$$

on ℝ, and so define the Fourier transform of an element of S by

$$\hat{f}(\xi) := \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} e^{-ix\xi}\, f(x)\, dx,$$

and the convolution of two elements of S by

$$(f \star g)(x) := \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} f(x - t)\, g(t)\, dt.$$

    2.1 Basic Facts about the Fourier Transform Acting on S

We are allowed to differentiate

$$\hat{f}(\xi) = \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} e^{-ix\xi}\, f(x)\, dx$$

with respect to ξ under the integral sign since f(x) vanishes so rapidly at ∞. We get

$$\frac{d}{d\xi}\hat{f}(\xi) = \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} (-ix)\, e^{-ix\xi}\, f(x)\, dx.$$

So the Fourier transform of

$$-ix f(x) \quad\text{is}\quad \frac{d}{d\xi}\hat{f}(\xi).$$

Integration by parts (with vanishing values at the end points) gives

$$\widehat{f'}(\xi) = \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} e^{-ix\xi}\, f'(x)\, dx = i\xi\, \hat{f}(\xi).$$

Putting these two facts together gives

Theorem 2.1.1. The Fourier transform is well defined on S and

$$\widehat{\left(\frac{d}{dx}\right)^{\!n}\!\bigl[(-ix)^{m} f\bigr]}\,(\xi) \;=\; (i\xi)^{n}\left(\frac{d}{d\xi}\right)^{\!m}\hat{f}(\xi)$$

for all non-negative integers m and n.

    This follows by differentiation under the integral sign and by integration by parts, showing that the Fourier transform maps S to S.
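One way to spell out the last assertion, using only the two facts just established (a sketch): for f ∈ S and non-negative integers m, n,

$$\bigl|\xi^{m}\, \hat{f}^{(n)}(\xi)\bigr| = \left| \widehat{\left(\tfrac{d}{dx}\right)^{\!m}\!\bigl[(-ix)^{n} f\bigr]}\,(\xi) \right| \leq \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} \left| \left(\tfrac{d}{dx}\right)^{\!m}\!\bigl[x^{n} f(x)\bigr] \right| dx < \infty,$$

since the function being transformed again lies in S and is therefore integrable; so every semi-norm ∥f̂∥_{m,n} is finite and f̂ ∈ S.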

Convolution goes to multiplication. Indeed,

$$\widehat{(f \star g)}(\xi) = \frac{1}{2\pi} \int_{\mathbb{R}}\!\int_{\mathbb{R}} e^{-ix\xi}\, f(x - t)\, g(t)\, dt\, dx = \frac{1}{2\pi} \int_{\mathbb{R}}\!\int_{\mathbb{R}} e^{-i(u + t)\xi}\, f(u)\, g(t)\, du\, dt,$$

so

$$\widehat{f \star g} = \hat{f}\,\hat{g}.$$

    Scaling.

For any f ∈ S and a > 0 define S_a f by (S_a f)(x) := f(ax). Then setting u = ax, so dx = (1/a) du, we have

$$\widehat{S_a f}(\xi) = \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} e^{-ix\xi}\, f(ax)\, dx = \frac{1}{a}\cdot\frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} e^{-iu\xi/a}\, f(u)\, du,$$

so

$$\widehat{S_a f} = \frac{1}{a}\, S_{1/a}\hat{f}.$$

The Fourier transform of a Gaussian is a Gaussian. The polar coordinate trick evaluates

$$\frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} e^{-x^2/2}\, dx = 1.$$

The integral

$$\frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} e^{-x^2/2}\, e^{-\eta x}\, dx$$

converges for all complex values of η, uniformly in any compact region. Hence it defines an analytic function of η that can be evaluated by taking η to be real and then using analytic continuation.

For real η we complete the square and make a change of variables:

$$\frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} e^{-x^2/2 - \eta x}\, dx = e^{\eta^2/2}\,\frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} e^{-(x+\eta)^2/2}\, dx = e^{\eta^2/2}.$$

Setting η = iξ gives

$$\hat{n}(\xi) = e^{-\xi^2/2}, \qquad \text{where } n(x) := e^{-x^2/2}.$$

    We will make much use of this equation in what follows.
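For completeness, here is the computation behind the polar coordinate trick (a standard calculation, spelled out):

$$\left(\int_{\mathbb{R}} e^{-x^2/2}\,dx\right)^{\!2} = \int_{\mathbb{R}^2} e^{-(x^2+y^2)/2}\,dx\,dy = \int_0^{2\pi}\!\!\int_0^{\infty} e^{-r^2/2}\, r\,dr\,d\theta = 2\pi,$$

so that $\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}} e^{-x^2/2}\,dx = 1$, which is the normalization used above.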

    Scaling the unit Gaussian.

If we set a = ε in our scaling equation and define ρ_ε ≔ S_ε n, so

$$\rho_\varepsilon(x) = e^{-\varepsilon^2 x^2/2},$$

then

$$\hat{\rho}_\varepsilon(\xi) = \frac{1}{\varepsilon}\, e^{-\xi^2/(2\varepsilon^2)}.$$

Notice that for any g ∈ S we have (by a change of variables)

$$\int_{\mathbb{R}} \frac{1}{a}\, g\!\left(\frac{x}{a}\right) dx = \int_{\mathbb{R}} g(u)\, du,$$

so setting a = ε we conclude that

$$\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}} \hat{\rho}_\varepsilon(\xi)\, d\xi = \frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}} \hat{n}(\xi)\, d\xi = 1$$

for all ε.

    Let

    and

    Then

    so

An approximation. Since g ∈ S, it is uniformly continuous on ℝ, so for any δ > 0 we can find ε₀ > 0 such that the above integral is less than δ in absolute value for all 0 < ε < ε₀. In short,

$$\lim_{\varepsilon \to 0}\; \frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}} \hat{\rho}_\varepsilon(\xi)\, g(\xi)\, d\xi = g(0).$$
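In more detail, one standard way to carry out this estimate (a sketch, with the conventions above) is to subtract the normalization and substitute ξ = εu:

$$\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}} \hat{\rho}_\varepsilon(\xi)\,g(\xi)\,d\xi - g(0) = \frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}} \hat{\rho}_\varepsilon(\xi)\bigl(g(\xi) - g(0)\bigr)\,d\xi = \frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}} e^{-u^2/2}\bigl(g(\varepsilon u) - g(0)\bigr)\,du;$$

uniform continuity of g makes the last integrand uniformly small for small ε, and it is dominated by a constant multiple of e^{-u²/2}.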

The multiplication formula. This says that

$$\int_{\mathbb{R}} \hat{f}(\xi)\, g(\xi)\, d\xi = \int_{\mathbb{R}} f(x)\, \hat{g}(x)\, dx$$

for any f, g ∈ S. Indeed, the left-hand side equals

$$\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}\left(\int_{\mathbb{R}} e^{-ix\xi}\, f(x)\, dx\right) g(\xi)\, d\xi.$$

    We can write this integral as a double integral and then interchange the order of integration, which gives the right-hand side.

The inversion formula. This says that for any f ∈ S

$$f(t) = \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} e^{it\xi}\, \hat{f}(\xi)\, d\xi.$$

Proof. First observe that for any h ∈ S the Fourier transform of x ↦ e^{ixt} h(x) is ξ ↦ ĥ(ξ − t), as follows directly from the definition. Taking g(x) = e^{ixt} ρ_ε(x) in the multiplication formula gives

$$\int_{\mathbb{R}} \hat{f}(\xi)\, e^{i\xi t}\, \rho_\varepsilon(\xi)\, d\xi = \int_{\mathbb{R}} f(x)\, \hat{\rho}_\varepsilon(x - t)\, dx.$$

We know that the right-hand side approaches

$$\sqrt{2\pi}\, f(t)$$

for each fixed t, and in fact uniformly in t on any bounded interval. So choosing the interval of integration large enough, we can take the left-hand side as close as we like to

$$\int_{\mathbb{R}} e^{it\xi}\, \hat{f}(\xi)\, d\xi$$

by then choosing ε sufficiently small.
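As a quick sanity check of the inversion formula (an illustration, not part of the argument): for the unit Gaussian n(x) = e^{-x²/2} we found n̂ = n, and indeed

$$\frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} e^{it\xi}\, e^{-\xi^2/2}\, d\xi = e^{-t^2/2} = n(t),$$

which is the evaluation obtained above by analytic continuation (set η = −it).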

Plancherel’s theorem. This says that

$$\int_{\mathbb{R}} |f(x)|^2\, dx = \int_{\mathbb{R}} |\hat{f}(\xi)|^2\, d\xi.$$

Proof. Let

$$\tilde{f}(x) := \overline{f(-x)}.$$

Then the Fourier transform of f̃ is given by

$$\hat{\tilde{f}}(\xi) = \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} e^{-ix\xi}\, \overline{f(-x)}\, dx = \overline{\hat{f}(\xi)},$$

so

$$\widehat{f \star \tilde{f}} = |\hat{f}|^2.$$

Thus, by the inversion formula,

$$(f \star \tilde{f})(x) = \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} e^{ix\xi}\, |\hat{f}(\xi)|^2\, d\xi,$$

and evaluated at 0 gives

$$(f \star \tilde{f})(0) = \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} |\hat{f}(\xi)|^2\, d\xi.$$

The left-hand side of this equation is

$$(f \star \tilde{f})(0) = \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} f(-t)\, \overline{f(-t)}\, dt = \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} |f(x)|^2\, dx.$$

Thus we have proved Plancherel’s formula

$$\int_{\mathbb{R}} |f(x)|^2\, dx = \int_{\mathbb{R}} |\hat{f}(\xi)|^2\, d\xi.$$

The Poisson summation formula. This says that for any g ∈ S we have

$$\sum_{n \in \mathbb{Z}} g(2\pi n) = \frac{1}{\sqrt{2\pi}} \sum_{m \in \mathbb{Z}} \hat{g}(m).$$

Proof. Let

$$h(x) := \sum_{n \in \mathbb{Z}} g(x + 2\pi n),$$

so h is a smooth function, periodic of period 2π and

$$h(0) = \sum_{n \in \mathbb{Z}} g(2\pi n).$$

Expand h into a Fourier series

$$h(x) = \sum_{m \in \mathbb{Z}} a_m\, e^{imx},$$

where

$$a_m = \frac{1}{2\pi} \int_0^{2\pi} h(x)\, e^{-imx}\, dx = \frac{1}{2\pi} \int_{\mathbb{R}} g(x)\, e^{-imx}\, dx = \frac{1}{\sqrt{2\pi}}\, \hat{g}(m).$$

Setting x = 0 in the Fourier expansion

$$h(x) = \frac{1}{\sqrt{2\pi}} \sum_{m \in \mathbb{Z}} \hat{g}(m)\, e^{imx}$$

gives

$$\sum_{n \in \mathbb{Z}} g(2\pi n) = h(0) = \frac{1}{\sqrt{2\pi}} \sum_{m \in \mathbb{Z}} \hat{g}(m).$$
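As an illustration (using the normalization displayed above), take g = S_a n, i.e. g(x) = e^{-a²x²/2}, so that ĝ(ξ) = (1/a) e^{-ξ²/(2a²)}. The Poisson summation formula then gives a theta-function identity:

$$\sum_{n \in \mathbb{Z}} e^{-2\pi^2 a^2 n^2} = \frac{1}{a\sqrt{2\pi}} \sum_{m \in \mathbb{Z}} e^{-m^2/(2a^2)}.$$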

    The Shannon sampling theorem.

Let f ∈ S be such that its Fourier transform is supported in the interval [−π, π]. Then a knowledge of f(n) for all n ∈ ℤ determines f. This theorem is the basis for all digital sampling used in information technology. More explicitly,

$$f(t) = \sum_{n \in \mathbb{Z}} f(n)\, \frac{\sin \pi(t - n)}{\pi(t - n)}. \tag{2.1}$$

Proof. Let g be the periodic function (of period 2π) which extends f̂, the Fourier transform of f. So

$$g = \hat{f} \ \text{ on } [-\pi, \pi]$$

and is periodic.

Expand g into a Fourier series:

$$g(\xi) = \sum_{n \in \mathbb{Z}} c_n\, e^{in\xi},$$

where

$$c_n = \frac{1}{2\pi} \int_{-\pi}^{\pi} g(\xi)\, e^{-in\xi}\, d\xi = \frac{1}{2\pi} \int_{-\pi}^{\pi} \hat{f}(\xi)\, e^{-in\xi}\, d\xi,$$

or, by the Fourier inversion formula,

$$c_n = \frac{1}{\sqrt{2\pi}}\, f(-n).$$

But

$$f(t) = \frac{1}{\sqrt{2\pi}} \int_{-\pi}^{\pi} e^{it\xi}\, \hat{f}(\xi)\, d\xi = \frac{1}{\sqrt{2\pi}} \int_{-\pi}^{\pi} e^{it\xi} \sum_{n} c_n\, e^{in\xi}\, d\xi.$$

Replacing n by −n in the sum, and interchanging summation and integration, which is legitimate since the f(n) decrease very fast, this becomes

$$f(t) = \frac{1}{2\pi} \sum_{n \in \mathbb{Z}} f(n) \int_{-\pi}^{\pi} e^{i(t - n)\xi}\, d\xi.$$

But

$$\int_{-\pi}^{\pi} e^{i(t - n)\xi}\, d\xi = \frac{2 \sin \pi(t - n)}{t - n},$$

which proves (2.1).

Rescaling the Shannon sampling theorem. It is useful to reformulate this via rescaling so that the interval [−π, π] is replaced by an arbitrary interval symmetric about the origin: In the engineering literature, the frequency λ is defined by

$$\xi = 2\pi\lambda.$$

Suppose we want to apply (2.1) to g = S_a f. We know that the Fourier transform of g is (1/a)S_{1/a}\hat{f} and

$$\operatorname{supp}\bigl(S_{1/a}\hat{f}\bigr) = a \cdot \operatorname{supp}\hat{f}.$$

So if

$$\operatorname{supp}\hat{f} \subset [-2\pi\lambda_c,\, 2\pi\lambda_c],$$

we want to choose a so that 2πλ_c · a ≤ π, or

$$a \leq \frac{1}{2\lambda_c}.$$

For a in this range, (2.1) says that

$$f(ax) = g(x) = \sum_{n \in \mathbb{Z}} g(n)\, \frac{\sin \pi(x - n)}{\pi(x - n)} = \sum_{n \in \mathbb{Z}} f(na)\, \frac{\sin \pi(x - n)}{\pi(x - n)},$$

or, setting t = ax,

$$f(t) = \sum_{n \in \mathbb{Z}} f(na)\, \frac{\sin \pi\!\left(\tfrac{t}{a} - n\right)}{\pi\!\left(\tfrac{t}{a} - n\right)}.$$

The Nyquist rate. This holds in L² under the assumption that supp f̂ ⊂ [−2πλ_c, 2πλ_c]. We say that f has finite bandwidth or is band-limited with band limit λ_c. The critical value a_c = 1/(2λ_c) is known as the Nyquist sampling interval and 1/a_c = 2λ_c is known as the Nyquist sampling rate. Thus the Shannon sampling theorem says that a band-limited signal can be recovered completely from a set of samples taken at a rate ≥ the Nyquist sampling rate.
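For concreteness (illustrative numbers, not taken from the text): a signal band-limited to λ_c = 4000 cycles per second has Nyquist sampling interval

$$a_c = \frac{1}{2\lambda_c} = \frac{1}{8000}\ \text{s} = 125\ \mu\text{s},$$

so it is completely determined by samples taken at least 8000 times per second.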

2.1.1 The Fourier Transform on L²

Recall that Plancherel’s formula says that

$$\int_{\mathbb{R}} |f(x)|^2\, dx = \int_{\mathbb{R}} |\hat{f}(\xi)|^2\, d\xi.$$

Define L²(ℝ) to be the completion of S with respect to the L² norm given by the left-hand side of the above equation. Since S is dense in L²(ℝ), we conclude that the Fourier transform extends to a unitary isomorphism of L²(ℝ) onto itself.
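Spelling out the standard density argument (a sketch): given f ∈ L²(ℝ), choose f_k ∈ S with ∥f_k − f∥_{L²} → 0. By Plancherel,

$$\|\hat{f}_k - \hat{f}_j\|_{L^2} = \|f_k - f_j\|_{L^2} \to 0,$$

so {f̂_k} is Cauchy in L²(ℝ); its limit defines the Fourier transform of f, and (again by Plancherel) this limit does not depend on the choice of approximating sequence. Running the same argument for the inverse transform shows that the extended map is an isometry with an isometric inverse, hence a unitary isomorphism.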

The Heisenberg Uncertainty Principle. Let f ∈ S(ℝ) with $\int_{\mathbb{R}} |f(x)|^2\, dx = 1$. We can then think of |f(x)|² dx as a probability density on the line. The mean of this probability density is

$$\bar{x} := \int_{\mathbb{R}} x\, |f(x)|^2\, dx.$$

If we take the Fourier transform, then Plancherel says that

$$\int_{\mathbb{R}} |\hat{f}(\xi)|^2\, d\xi = 1$$

as well, so it defines a probability density with mean

$$\bar{\xi} := \int_{\mathbb{R}} \xi\, |\hat{f}(\xi)|^2\, d\xi.$$

Suppose for the moment that these means both vanish. The Heisenberg Uncertainty Principle says that

$$\left(\int_{\mathbb{R}} x^2\, |f(x)|^2\, dx\right)\left(\int_{\mathbb{R}} \xi^2\, |\hat{f}(\xi)|^2\, d\xi\right) \;\geq\; \frac{1}{4}.$$

In other words, if Var(f) denotes the variance of the probability density |f|², then

$$\operatorname{Var}(f)\cdot \operatorname{Var}(\hat{f}) \;\geq\; \frac{1}{4}.$$

Proof. Write iξ f̂(ξ) as the Fourier transform of f′ and use Plancherel to write the second integral as $\int_{\mathbb{R}} |f'(x)|^2\, dx$. Then the Cauchy-Schwarz inequality says that

$$\left(\int_{\mathbb{R}} x^2\, |f(x)|^2\, dx\right)\left(\int_{\mathbb{R}} |f'(x)|^2\, dx\right) \;\geq\; \left|\int_{\mathbb{R}} x\, \overline{f(x)}\, f'(x)\, dx\right|^2;$$

the left-hand side is ≥ the square of

$$\left|\int_{\mathbb{R}} x\, \operatorname{Re}\bigl(\overline{f(x)}\, f'(x)\bigr)\, dx\right| = \left|\int_{\mathbb{R}} \frac{x}{2}\, \frac{d}{dx}|f(x)|^2\, dx\right| = \frac{1}{2} \int_{\mathbb{R}} |f(x)|^2\, dx = \frac{1}{2},$$

where the middle equality integrates by parts.

The general case. If f has norm one but the mean of the probability density |f|² is not necessarily zero (and similarly for its Fourier transform), the Heisenberg uncertainty principle says that

$$\left(\int_{\mathbb{R}} (x - \bar{x})^2\, |f(x)|^2\, dx\right)\left(\int_{\mathbb{R}} (\xi - \bar{\xi})^2\, |\hat{f}(\xi)|^2\, d\xi\right) \;\geq\; \frac{1}{4}.$$

The general case is reduced to the special case by replacing f(x) by

$$e^{-i\bar{\xi} x}\, f(x + \bar{x}).$$
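For example (an illustration showing that the bound is sharp): take f(x) = π^{-1/4} e^{-x²/2}, so that ∫|f|²dx = 1, f̂ = f, and both means vanish. Then

$$\int_{\mathbb{R}} x^2\, |f(x)|^2\, dx = \int_{\mathbb{R}} \xi^2\, |\hat{f}(\xi)|^2\, d\xi = \frac{1}{2},$$

so the product of the two variances is exactly 1/4.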

    2.2 Tempered Distributions

The topology on S. The space S was defined to be the collection of all smooth functions on ℝ such that

$$\|f\|_{m,n} = \sup_{x \in \mathbb{R}} \bigl| x^{m} f^{(n)}(x) \bigr| < \infty \qquad \text{for all } m, n.$$

The collection of these norms defines a topology on S which is much finer than the L² topology: We declare that a sequence of functions {f_k} approaches g ∈ S if and only if

$$\|f_k - g\|_{m,n} \to 0$$

for every m and n.

    A linear function on S which is continuous with respect to this topology is called a tempered distribution.

The space of tempered distributions is denoted by S′. For example, every element f ∈ S defines a linear function on S by

$$\phi \mapsto \frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}} f(x)\,\phi(x)\, dx.$$

But this last expression makes sense for any element f ∈ L²(ℝ), or for any piecewise continuous function f which grows at infinity no faster than any polynomial. For example, if f ≡ 1, the linear function associated to f assigns to ϕ the value

$$\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}} \phi(x)\, dx.$$

    This is clearly continuous with respect to the topology of S but this function of ϕ does not make sense for a general element ϕ of L2(ℝ).

The Dirac delta function. Another example of an element of S′ is the Dirac δ-function, which assigns to ϕ ∈ S its value at 0. This is an element of S′ but makes no sense when evaluated on a general element of L²(ℝ).

Defining the Fourier transform of a tempered distribution. If f ∈ S, then by the multiplication formula the linear function on S defined by f̂ satisfies

$$\int_{\mathbb{R}} \hat{f}(x)\,\psi(x)\, dx = \int_{\mathbb{R}} f(x)\,\hat{\psi}(x)\, dx \qquad \text{for all } \psi \in S.$$

But we can now use this equation to define the Fourier transform of an arbitrary element of S′: If ℓ ∈ S′ we define F(ℓ) to be the linear function

$$F(\ell)(\psi) := \ell(\hat{\psi}).$$

    Examples of Fourier transforms of elements of S'.

The Fourier transform of the constant 1. If ℓ corresponds to the function f ≡ 1, then

$$F(\ell)(\psi) = \ell(\hat{\psi}) = \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} \hat{\psi}(\xi)\, d\xi = \psi(0)$$

by the Fourier inversion formula.

    So the Fourier transform of the function which is identically one is the Dirac δ-function.

The Fourier transform of the δ function. If δ denotes the Dirac δ-function, then

$$F(\delta)(\psi) = \delta(\hat{\psi}) = \hat{\psi}(0) = \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} \psi(x)\, dx.$$

So the Fourier transform of the Dirac δ function is the function which is identically one. In fact, this last example follows from the preceding one: If m = F(ℓ) then

    But

    So if m = F(ℓ) where

The Fourier transform of the function x. This assigns to every ψ ∈ S the value

For an element f of S we have, by integration by parts,

$$\int_{\mathbb{R}} f'(x)\,\phi(x)\, dx = -\int_{\mathbb{R}} f(x)\,\phi'(x)\, dx \qquad \text{for all } \phi \in S.$$

So we define the derivative of an element ℓ ∈ S′ by

$$\ell'(\phi) := -\,\ell(\phi').$$
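For example (an illustration; just for this computation I use the unnormalized pairing): let ℓ be the tempered distribution given by the Heaviside function, ℓ(ϕ) := ∫₀^∞ ϕ(x) dx. Then

$$\ell'(\phi) = -\,\ell(\phi') = -\int_0^{\infty} \phi'(x)\, dx = \phi(0) = \delta(\phi),$$

so the distributional derivative of the Heaviside function is the Dirac δ-function.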

    2.3 The Laplace Transform

Definition of the (one-sided) Laplace transform. The inversion problem. Let f be a (possibly vector valued) bounded piecewise differentiable function on [0, ∞), so that the integral

$$F(z) := \int_0^{\infty} e^{-zt}\, f(t)\, dt$$

converges for Re z > 0. F is called the Laplace transform of f. The inversion problem is to reconstruct f from F.
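For example (a simple illustration): if f ≡ 1 on [0, ∞)—this is the Heaviside function, which reappears below—then

$$F(z) = \int_0^{\infty} e^{-zt}\, dt = \frac{1}{z}, \qquad \operatorname{Re} z > 0.$$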

The Laplace transform as a Fourier transform. Let f be a bounded piecewise differentiable function defined on [0, ∞). Let c > 0, z = c + iξ, and let h be the function on ℝ

given by

$$h(t) := \begin{cases} e^{-ct}\, f(t), & t \geq 0,\\ 0, & t < 0.\end{cases}$$

Then h is integrable and

$$\hat{h}(\xi) = \frac{1}{\sqrt{2\pi}} \int_0^{\infty} e^{-(c + i\xi)t}\, f(t)\, dt = \frac{1}{\sqrt{2\pi}}\, F(c + i\xi) = \frac{1}{\sqrt{2\pi}}\, F(z).$$

If the function F were integrable over the line Γ := {Re z = c}, then the Fourier inversion formula would say that

$$f(t) = e^{ct}\, h(t) = \frac{e^{ct}}{2\pi} \int_{\mathbb{R}} e^{it\xi}\, F(c + i\xi)\, d\xi = \frac{1}{2\pi i} \int_{\Gamma} e^{zt}\, F(z)\, dz \qquad (t > 0).$$

    The condition that the function F be integrable over Г, which is the same as the condition that ĥ be integrable over ℝ, would imply that h is continuous. But h will have a jump at 0 (if f (0) ≠ 0). So we need to be careful about the above formula expressing f in terms of F.

    The philosophy we have been pushing up until now has been to pass to tempered distributions. But for applications that I have in mind (to the theory of semi-groups of operators) later on in this book, I need to go back to 19th century mathematics—more precisely to the analogue for the Fourier transform of Dirichlet’s theorem about Fourier series:

Dirichlet proved the convergence of the symmetric partial sums of the Fourier series of f to

$$\frac{f(x+) + f(x-)}{2}$$

under the assumptions that the periodic function f is piecewise differentiable. The analogue of the limit of the symmetric sum for the case of an integral is the Cauchy principal value:

$$\lim_{R \to \infty} \int_{-R}^{R}.$$

Theorem 2.3.1. Let h ∈ L¹(ℝ) be bounded and such that there is a finite number of real numbers a₁, . . . , a_r such that h is differentiable on (−∞, a₁), (a₁, a₂), . . . , (a_r, ∞) with bounded derivative (and right- and left-handed derivatives at the end points). Then for any x ∈ ℝ we have

$$\lim_{R \to \infty} \frac{1}{\sqrt{2\pi}} \int_{-R}^{R} e^{ix\xi}\, \hat{h}(\xi)\, d\xi = \frac{h(x+) + h(x-)}{2}.$$

    In the proof of this theorem we may take x = 0 by a shift of variables. So we want to evaluate the limit of

    where the interchange of the order of integration in the first equation is justified by the assumption that h is absolutely integrable.

    The function H is absolutely integrable so we can find a large q depending on but independent of R so that

We will use the evaluation of the Dirichlet integral

$$\int_0^{\infty} \frac{\sin x}{x}\, dx = \frac{\pi}{2}.$$

There are many ways of establishing this classical result. For a proof using integration by parts, see Wikipedia under Dirichlet integral. An alternative proof can be given via a contour integral. In fact, this evaluation will also be a consequence of what follows: it is clear that the integral converges, so we could let k denote its value and carry the k through the argument; the value k = π/2 then emerges because the above formula is a special case of our Laplace inversion formula applied to the Heaviside function.
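For completeness, here is one more standard route to this evaluation (differentiation under the integral sign; a sketch): set

$$I(s) := \int_0^{\infty} e^{-sx}\,\frac{\sin x}{x}\, dx, \qquad s > 0.$$

Then I′(s) = −∫₀^∞ e^{-sx} sin x dx = −1/(1 + s²), and I(s) → 0 as s → ∞, so I(s) = π/2 − arctan s; letting s → 0⁺ (a routine uniform-convergence argument justifies the limit) gives ∫₀^∞ (sin x)/x dx = π/2.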

    The proof of the theorem will proceed by integration by parts: Let

    We are interested in studying

as R → ∞. Now H is piecewise differentiable with a finite number of points of non-differentiability, where the right- and left-handed derivatives exist as in the theorem. Break up the integral on the right into the sum of the integrals over intervals of differentiability.

    For example, the last integral will contribute

    Integration by parts gives

We are assuming that H and H′ are bounded. Since s(Ru) → 0 as R → ∞ and s(Rq) → 0 as q → ∞, the contributions of these terms tend to zero, assuming that
