Roman Schmied

Using Mathematica for Quantum Mechanics
A Student’s Manual
Roman Schmied
Department of Physics
University of Basel
Basel, Switzerland

Additional material to this book can be downloaded from http://extras.springer.com.

ISBN 978-981-13-7587-3
ISBN 978-981-13-7588-0 (eBook)
https://doi.org/10.1007/978-981-13-7588-0
© Springer Nature Singapore Pte Ltd. 2020
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part
of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations,
recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission
or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar
methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are exempt from
the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this
book are believed to be true and accurate at the date of publication. Neither the publisher nor the
authors or the editors give a warranty, expressed or implied, with respect to the material contained
herein or for any errors or omissions that may have been made. The publisher remains neutral with regard
to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd.
The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721,
Singapore
Preface

The limits of my language mean the limits of my world.
— Ludwig Wittgenstein

Learning quantum mechanics is difficult and counter-intuitive. The first lectures I
heard were filled with strange concepts that had no relationship with the mechanics
I knew, and it took me years of solving research problems until I acquired even a
semblance of understanding and intuition. This process is much like learning a new
language, in which a solid mastery of the concepts and rules is required before new
ideas and relationships can be expressed fluently.
The major difficulty in bridging the chasm between introductory quantum
lectures, on the one hand, and advanced research topics, on the other hand, was for
me the lack of such a language, or of a technical framework in which quantum
ideas could be expressed and manipulated. On the one hand, I had the hand tools
of algebraic notation, which are understandable but only serve to express very
small systems and ideas; on the other hand I had diagrams, circuits, and
quasi-phenomenological formulae that describe interesting research problems, but
which are difficult to grasp with the mathematical detail I was looking for.
This book is an attempt to help students transform all of the concepts of quantum
mechanics into concrete computer representations, which can be constructed,
evaluated, analyzed, and hopefully understood at a deeper level than what is
possible with more abstract representations. It was written for a Master’s and Ph.D.
lecture given yearly at the University of Basel, Switzerland. The goal is to give a
language to the student in which to speak about quantum physics in more detail,
and to start the student on a path of fluency in this language. We will revisit most
of the problems encountered in introductory quantum mechanics, focusing on
computer implementations for finding analytical as well as numerical solutions and
their visualization. On our journey we approach questions such as
• You already know how to calculate the energy eigenstates of a single particle in
a simple one-dimensional potential. How can such calculations be generalized to
non-trivial potentials, higher dimensions, and interacting particles?


• You have heard that quantum mechanics describes our everyday world just as
well as classical mechanics does, but have you ever seen an example where such
behavior is calculated in detail and where the transition from classical to
quantum physics is evident?
• How can we describe the internal spin structure of particles? How does this
internal structure couple to the particles’ motion?
• What are qubits and quantum circuits, and how can they be assembled to
simulate a future quantum computer?
Most of the calculations necessary to study and visualize such problems are too
complicated to be done by hand. Even relatively simple problems, such as two
interacting particles in a one-dimensional trap, do not have analytic solutions and
require the use of computers for their solution and visualization. More complex
problems scale exponentially with the number of degrees of freedom, and make the
use of large computer simulations unavoidable.
The methods presented in this book do not pretend to solve large-scale
quantum-mechanical problems in an efficient way; the focus here is more on
developing a descriptive language. Once this language is established, it will provide
the reader with the tools for understanding efficient large-scale calculations better.

Why Mathematica?

This book is written in the Wolfram language of Mathematica (version 11);
however, any other language such as MATLAB or Python may be used with
suitable translation, as the core ideas presented here are not specific to the Wolfram
language.
There are several reasons why Mathematica was chosen over other
computer-algebra systems:
• Mathematica is a very high-level programming environment, which allows the
user to focus on what s/he wants to do instead of how it is done. The Wolfram
language is extremely expressive and can perform deep calculations with very
short and unencumbered programs.
• Mathematica supports a wide range of programming paradigms, which means
that you can keep programming in your favorite style. See Sect. 1.9 for a
concrete example.
• The Notebook interface of Mathematica provides an interactive experience that
holds programs, experimental code, results, and graphics in one place.
• Mathematica seamlessly mixes analytic and numerical facilities. For many
calculations it allows you to push analytic evaluations as far as possible, and
then continue with numerical evaluations by making only minimal changes.
• A very large number of algorithms for analytic and numerical calculations is
included in the Mathematica kernel and its libraries.

Mathematica Source Code

Some sections of this book contain Mathematica source code files that can be
downloaded through given hyperlinks. See the section “List of Notebooks in
Supplementary Material” for a list of embedded files.

Outline of Discussed Topics

In five chapters, this book takes the student all the way to relatively complex
numerical simulations of quantum circuits and interacting particles with spin:
Chapter 1 gives an introduction to Mathematica and the Wolfram language, with
a focus on techniques that will be useful for this book. This chapter can be safely
skipped or replaced by an alternative introduction to Mathematica.
Chapter 2 makes the connection between quantum mechanics and vector/matrix
algebra. In this chapter, the abstract concepts of quantum mechanics are converted
into computer representations, which form the basis for the following chapters.
Chapter 3 discusses quantum systems with finite-dimensional Hilbert spaces,
focusing on spin systems and qubits. These are the most basic quantum-mechanical
elements and are ideal for making a first concrete use of the tools of Chap. 2.
Chapter 4 discusses the quantum mechanics of particles moving in one-and
several-dimensional space. We develop a real-space description of these particles’
motion and interaction, and stay as close as possible to the classical understanding
of particle motion in phase space.
Chapter 5 connects the topics of Chaps. 3 and 4, describing particles with spin
that move through space.

Basel, Switzerland
Roman Schmied


Contents

1 Wolfram Language Overview
  1.1 Introduction
    1.1.1 Exercises
  1.2 Variables and Assignments
    1.2.1 Immediate and Delayed Assignments
    1.2.2 Exercises
  1.3 Four Kinds of Bracketing
  1.4 Prefix and Postfix
    1.4.1 Exercises
  1.5 Programming Constructs
    1.5.1 Procedural Programming
    1.5.2 Exercises
    1.5.3 Functional Programming
    1.5.4 Exercises
  1.6 Function Definitions
    1.6.1 Immediate Function Definitions
    1.6.2 Delayed Function Definitions
    1.6.3 Memoization: Functions that Remember Their Results
    1.6.4 Functions with Conditions on Their Arguments
    1.6.5 Functions with Optional Arguments
  1.7 Rules and Replacements
    1.7.1 Immediate and Delayed Rules
    1.7.2 Repeated Rule Replacement
  1.8 Debugging and Finding Out How Mathematica Expressions are Evaluated
    1.8.1 Exercises
  1.9 Many Ways to Define the Factorial Function
    1.9.1 Exercises
  1.10 Vectors, Matrices, Tensors
    1.10.1 Vectors
    1.10.2 Matrices
    1.10.3 Sparse Vectors and Matrices
    1.10.4 Matrix Diagonalization
    1.10.5 Tensor Operations
    1.10.6 Exercises
  1.11 Complex Numbers
  1.12 Units
2 Quantum Mechanics: States and Operators
  2.1 Basis Sets and Representations
    2.1.1 Incomplete Basis Sets
    2.1.2 Exercises
  2.2 Time-Independent Schrödinger Equation
    2.2.1 Diagonalization
    2.2.2 Exercises
  2.3 Time-Dependent Schrödinger Equation
    2.3.1 Time-Independent Basis
    2.3.2 Time-Dependent Basis: Interaction Picture
    2.3.3 Special Case: [Ĥ(t), Ĥ(t′)] = 0 ∀(t, t′)
    2.3.4 Special Case: Time-Independent Hamiltonian
    2.3.5 Exercises
  2.4 Basis Construction
    2.4.1 Description of a Single Degree of Freedom
    2.4.2 Description of Coupled Degrees of Freedom
    2.4.3 Reduced Density Matrices
    2.4.4 Exercises
3 Spin and Angular Momentum
  3.1 Quantum-Mechanical Spin and Angular Momentum Operators
    3.1.1 Exercises
  3.2 Spin-1/2 Electron in a DC Magnetic Field
    3.2.1 Time-Independent Schrödinger Equation
    3.2.2 Exercises
  3.3 Coupled Spin Systems: ⁸⁷Rb Hyperfine Structure
    3.3.1 Eigenstate Analysis
    3.3.2 “Magic” Magnetic Field
    3.3.3 Coupling to an Oscillating Magnetic Field
    3.3.4 Exercises
  3.4 Coupled Spin Systems: Ising Model in a Transverse Field
    3.4.1 Basis Set
    3.4.2 Asymptotic Ground States
    3.4.3 Hamiltonian Diagonalization
    3.4.4 Analysis of the Ground State
    3.4.5 Exercises
  3.5 Coupled Spin Systems: Quantum Circuits
    3.5.1 Quantum Gates
    3.5.2 A Simple Quantum Circuit
    3.5.3 Application: The Quantum Fourier Transform
    3.5.4 Application: Quantum Phase Estimation
    3.5.5 Exercises
4 Quantum Motion in Real Space
  4.1 One Particle in One Dimension
    4.1.1 Units
    4.1.2 Computational Basis Functions
    4.1.3 The Position Operator
    4.1.4 The Potential-Energy Operator
    4.1.5 The Kinetic-Energy Operator
    4.1.6 The Momentum Operator
    4.1.7 Example: Gravity Well
    4.1.8 The Wigner Quasi-probability Distribution
    4.1.9 1D Dynamics in the Square Well
    4.1.10 1D Dynamics in a Time-Dependent Potential
  4.2 Many Particles in One Dimension: Dynamics with the Non-linear Schrödinger Equation
    4.2.1 Imaginary-Time Propagation for Finding the Ground State of the Non-linear Schrödinger Equation
  4.3 Several Particles in One Dimension: Interactions
    4.3.1 Two Identical Particles in One Dimension with Contact Interaction
    4.3.2 Two Particles in One Dimension with Arbitrary Interaction
  4.4 One Particle in Several Dimensions
    4.4.1 Exercises
5 Combining Spatial Motion and Spin
  5.1 One Particle in 1D with Spin
    5.1.1 Separable Hamiltonian
    5.1.2 Non-separable Hamiltonian
    5.1.3 Exercises
  5.2 One Particle in 2D with Spin: Rashba Coupling
    5.2.1 Exercises
  5.3 Phase-Space Dynamics in the Jaynes–Cummings Model
    5.3.1 Exercises

List of Notebooks in Supplementary Material
Solutions to Exercises
Index
Chapter 1
Wolfram Language Overview

The Wolfram language is a beautiful and handy tool for expressing a wide variety
of technical thoughts. Wolfram Mathematica is the software that implements the
Wolfram language. In this chapter, we have a look at the most central parts of this
language, without focusing on quantum mechanics yet. Students who are familiar
with the Wolfram language may skip this chapter; others may prefer alternative
introductions. Wolfram Research, the maker of Mathematica and the Wolfram
language, provides many resources for learning:
• https://www.wolfram.com/mathematica/resources/—an overview of Mathemat-
ica resources to learn at your own pace

Electronic supplementary material The online version of this chapter


(https://doi.org/10.1007/978-981-13-7588-0_1) contains supplementary material, which is
available to authorized users.


• https://reference.wolfram.com/language/guide/LanguageOverview.html—an
overview of the Wolfram language
• https://www.wolfram.com/language/—the central resource for learning the
Wolfram language
• https://reference.wolfram.com/language/—the Mathematica documentation.

1.1 Introduction

Wolfram Mathematica is an interactive system for mathematical calculations. The
Mathematica system is composed of two main components: the front end, where
you write the input in the Wolfram language, give execution commands, and see the
output, and the kernel, which does the actual calculations.
[Diagram: the front end sends the input “2+3” to the kernel as In[1]:= 2+3; the kernel computes the result and returns “5”, displayed as Out[1]= 5.]
This distinction is important to remember because the kernel remembers all the
operations in the order they are sent to it, and this order may have nothing to do with
the order in which these commands are displayed in the front end.
When you start Mathematica you see an empty “notebook” in which you can
write commands. These commands are written in a mixture of text and mathematical
symbols and structures, and it takes a bit of practice to master all the special input
commands. In the beginning you can write all your input in pure text mode, if you
prefer. Let’s try an example: add the numbers 2 + 3 by giving the input
2+3

and, with the cursor anywhere within the “cell” containing this text (look on the right
edge of the notebook to see cell limits and groupings) you press “shift-enter”. This
sends the contents of this cell to the kernel, which executes it and returns a result
that is displayed in the next cell:
5

If there are many input cells in a notebook, they only get executed in order if you
select “Evaluate Notebook” from the “Evaluation” menu; otherwise you can execute
the input cells in any order you wish by simply setting the cursor within one cell and
pressing “shift-enter”.
The definition of any function or symbol can be called up with the ? command:
?Factorial
n! gives the factorial of n. >>

The arrow >> that appears at the end of this informative text is a hyperlink into the
documentation, where (usually) instructive examples are presented.

1.1.1 Exercises

Do the following calculations in Mathematica, and try to understand their structure:


Q1.1 Calculate the numerical value of the Riemann zeta function ζ(3) with
N[Zeta[3]]

Q1.2 Square the previous result (%) with

%^2

Q1.3 Calculate the integral ∫₀^∞ sin(x) e^(-x) dx with

Integrate[Sin[x]*Exp[-x], {x, 0, Infinity}]

Q1.4 Calculate the first 1000 digits of π with


N[Pi, 1000]

or, equivalently, using the Greek symbol \[Pi]=Pi,


N[\[Pi], 1000]

Q1.5 Calculate the analytic and numeric values of the Clebsch–Gordan coefficient
⟨100, 10; 200, −12|110, −2⟩:
ClebschGordan[{100, 10}, {200, -12}, {110, -2}]

Q1.6 Calculate the limit lim_{x→0} sin(x)/x with

Limit[Sin[x]/x, x -> 0]

Q1.7 Make a plot of the above function with


Plot[Sin[x]/x, {x, -20, 20}, PlotRange -> All]

Q1.8 Draw a Mandelbrot set with


F[c_, imax_] := Abs[NestWhile[#^2+c&, 0., Abs[#]<=2&, 1, imax]] <= 2
With[{n = 100, imax = 1000},
 Graphics[Raster[Table[Boole[!F[x+I*y,imax]],{y,-2,2,1/n},{x,-2,2,1/n}]]]]

Q1.9 Do the same with a built-in function call:


MandelbrotSetPlot[]

1.2 Variables and Assignments

https://reference.wolfram.com/language/howto/WorkWithVariablesAndFunctions.
html.
Variables in the Wolfram language can be letters or words with uppercase or lowercase
letters, including Greek symbols. Assigning a value to a variable is done with the =
symbol,
a = 5
5

If you wish to suppress the output, then you must end the command with a semi-colon:
a = 5;

The variable name can then be used anywhere in an expression:


a + 2
7

1.2.1 Immediate and Delayed Assignments

https://reference.wolfram.com/language/tutorial/ImmediateAndDelayedDefinitions.
html.
Consider the two commands
a = RandomReal[]
0.38953
b := RandomReal[]

(your random number will be different).


The first statement a=… is an immediate assignment, which means that its
right-hand side is evaluated when you press shift-enter, produces a specific random
value, and is assigned to the variable a (and printed out). From now on, every time
you use the variable a, the exact same number will be substituted. In this sense, the
variable a contains the number 0.38953 and has no memory of where it got this number from.
You can check the definition of a with ?a:
?a
Global`a
a = 0.38953

The definition b:=… is a delayed assignment, which means that when you press
shift-enter the right-hand side is not evaluated but merely stored as a definition of b.
From now on, every time you use the variable b, its right-hand-side definition will
be substituted and executed, resulting in a new random number each time. You can
check the definition of b with
?b
Global`b
b := RandomReal[]

Let’s compare the repeated performance of a and b:


{a, b}
{0.38953, 0.76226}
{a, b}
{0.38953, 0.982921}
{a, b}
{0.38953, 0.516703}
{a, b}
{0.38953, 0.0865169}

If you are familiar with computer file systems, you can think of an immediate
assignment as a hard link (a direct link to a precomputed inode number) and a delayed
assignment as a soft link (a symbolic link: textual instructions for how to find the
linked target).

1.2.2 Exercises

Q1.10 Explain the difference between


x = u + v

and
y := u + v

In particular, distinguish the cases where u and v are already defined before x
and y are defined, where they are defined only afterwards, and where they are
defined before but change values after the definition of x and y.

1.3 Four Kinds of Bracketing

https://reference.wolfram.com/language/tutorial/TheFourKindsOfBracketingIn
TheWolframLanguage.html.
There are four types of brackets in the Wolfram language:
• parentheses for grouping, for example in mathematical expressions:

2*(3-7)

• square brackets for function calls:


Sin[0.2]

• curly braces for lists:


v = {a, b, c}

• double square brackets for indexing within lists: (see Sect. 1.10)
v[[2]]

1.4 Prefix and Postfix

There are several ways of evaluating a function call in the Wolfram language, and
we will see most of them in this book. As examples of function calls with a single
argument, the main ways in which sin(0.2) and √(2+3) can be calculated are
Standard notation (infinite precedence):
Sin[0.2]
0.198669
Sqrt[2+3]
Sqrt[5]

Prefix notation with @ (quite high precedence, higher than multiplication):


Sin @ 0.2
0.198669
Sqrt @ 2+3
3+Sqrt[2]

Notice how the high precedence of the @ operator effectively evaluates
(Sqrt@2)+3, not Sqrt@(2+3).
Postfix notation with // (quite low precedence, lower than addition):
0.2 //Sin
0.198669
2+3 //Sqrt
Sqrt[5]

Notice how the low precedence of the // operator effectively evaluates
(2+3)//Sqrt, not 2+(3//Sqrt).

Postfix notation is often used to transform the output of a calculation:

• Adding //N to the end of a command will convert the result to decimal
representation, if possible.
• Adding //MatrixForm to the end of a matrix calculation will display the matrix
in a tabular form.
• Adding //Timing to the end of a calculation will display the result together
with the amount of time it took to execute.
If you are not sure which form is appropriate, for example if you don’t know the
precedence of the involved operations, then you should use the standard notation or
place parentheses where needed.
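As a minimal sketch of these postfix transformations (the specific inputs here are illustrative and not from the text):

```mathematica
(* convert an exact result to decimal representation *)
Pi/4 //N

(* display a matrix in a tabular form *)
{{1, 2}, {3, 4}} //MatrixForm

(* show the result together with the computation time *)
Total[Range[10^6]] //Timing
```

The first line returns 0.785398, and the last returns a pair of the elapsed time in seconds and the result 500000500000.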

1.4.1 Exercises

Q1.11 Calculate the decimal value of Euler’s number e (E) using standard, prefix,
and postfix notation.

1.5 Programming Constructs

When you program in the Wolfram language you can choose between a number of
different programming paradigms, and you can mix these as you like. Depending on
the chosen style, your program may run much faster or much slower.

1.5.1 Procedural Programming

https://reference.wolfram.com/language/guide/ProceduralProgramming.html.
A subset of the Wolfram language behaves very similarly to C, Python, Java, or
other procedural programming languages. Be very careful to distinguish semi-colons,
which separate commands within a single block of code, from commas, which
separate different code blocks!
Looping constructs behave like in common programming languages:
For[i = 1, i <= 5, i++,
Print[i]]
1
2
3
4
5

Notice that i is now a globally defined variable, which you can check with
?i
Global`i
i=6

The following, on the other hand, does not define the value of the variable j in
the global context:
Do[Print[j], {j, 1, 5}]
1
2
3
4
5
?j
Global`j

In this sense, j is a local variable in the Do context. The following, again, defines
k as a global variable:
k = 1;
While[k <= 5,
Print[k];
k++]
1
2
3
4
5
?k
Global`k
k=6

Conditional execution: the conditional statement If[condition,
do-when-true, do-when-false] follows the same logic as in every other
programming language,
If[5! > 100,
Print["larger"],
Print["smaller or equal"]]
larger

Notice that the If statement has a return value, similar to the ternary “?:” operator
of C and Java:
a = If[5! > 100, 1, -1]
1

Apart from true and false, Mathematica statements can have a third state:
unknown. For example, the comparison x==0 evaluates to neither true nor false
if x is not defined. The fourth slot in the If statement covers this case:
x == 0
x == 0
If[x == 0, "zero", "nonzero", "unknown"]
"unknown"

Modularity: code can use local variables within a module:


Module[{i},
i = 1;
While[i > 1/192, i = i/2];
i]
1/256

After the execution of this code, the variable i is still undefined in the global
context.

1.5.2 Exercises

Q1.12 Write a program that sums all integers from 123 to 9968. Use only local
variables.
Q1.13 Write a program that sums consecutive integers, starting from 123, until the
sum is larger than 10 000. Return the largest integer in this sum. Use only
local variables.

1.5.3 Functional Programming

https://reference.wolfram.com/language/guide/FunctionalProgramming.html.
Functional programming is a very powerful programming technique that can give
large speedups in computation because it can often be parallelized over many
computers or CPUs. In our context, we often use lists (vectors or matrices, see
Sect. 1.10) and want to apply functions to each one of their elements.
The most common functional programming constructs are
Anonymous functions:¹ you can quickly define a function with parameters #1, #2,
#3, etc., terminated with the & symbol (the symbol # is an abbreviation for #1)
f = #^2 &;
f[7]
49
g = #1-#2 &;
g[88, 9]
79

Functions and anonymous functions, for example #^2&, are first-class objects²
just like numbers, matrices, etc. You can assign them to variables, as in In[48]
and In[50] above; you can also use them directly as arguments to other functions,
as for example in In[55] below; or you can use them as return values of other
functions, as in In[478].

¹ See https://en.wikipedia.org/wiki/Anonymous_functions.
² See https://en.wikipedia.org/wiki/First-class_citizen.

The symbol ## stands for the sequence of all parameters of a function:


f = {1,2,3,##,4,5,6} &;
f[7,a,c]
{1,2,3,7,a,c,4,5,6}

The symbol #0 stands for the function itself. This is useful for defining recursive
anonymous functions (see item 7 of Sect. 1.9).
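As an illustrative sketch (not taken from the text), #0 lets an anonymous function call itself, here to compute a factorial:

```mathematica
(* recursive anonymous factorial: #0 refers to the function itself *)
If[#1 <= 1, 1, #1*#0[#1 - 1]] &[5]
120
```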
Map /@: apply a function to each element of a list.
a = {1, 2, 3, 4, 5, 6, 7, 8};
Map[#^2 &, a]
{1, 4, 9, 16, 25, 36, 49, 64}
#^2 & /@ a
{1, 4, 9, 16, 25, 36, 49, 64}

Notice how we have used the anonymous function #^2& here without ever giving
it a name.
Apply @@: apply a function to an entire list and generate a single result. For example,
applying Plus to a list will calculate the sum of the list elements; applying
Times will calculate their product. This operation is also known as reduce.³
a = {1, 2, 3, 4, 5, 6, 7, 8};
Apply[Plus, a]
36
Plus @@ a
36
Apply[Times, a]
40320
Times @@ a
40320

1.5.4 Exercises

Q1.14 Write an anonymous function with three arguments that returns the product
of these arguments.
Q1.15 Given a list
a = {0.1, 0.9, 2.25, -1.9};

calculate x → sin(x²) for each element of a using the Map operation.


Q1.16 Calculate the sum of all the results of Q1.15.

³ See https://en.wikipedia.org/wiki/MapReduce.

1.6 Function Definitions

https://reference.wolfram.com/language/tutorial/DefiningFunctions.html.
Functions are assignments (see Sect. 1.2) with parameters. As for parameter-free
assignments, we distinguish between immediate and delayed function definitions.

1.6.1 Immediate Function Definitions

We start with immediate definitions: a function f (x) = sin(x)/x is defined with


f[x_] = Sin[x]/x;

Notice the underscore _ symbol after the variable name x: this underscore indicates
a pattern (denoted by _) named x, not the symbol x itself. Whenever this function
f is called with any parameter value, this parameter value is inserted wherever x
appears on the right-hand side, as is expected for a function definition. You can find
out how f is defined with the ? operator:
?f
Global`f
f[x_] = Sin[x]/x

and you can ask for a function evaluation with


f[0.3]
0.985067
f[0]
Power::infy: Infinite expression 1/0 encountered.
Infinity::indet: Indeterminate expression 0 ComplexInfinity encountered.
Indeterminate

Apparently the function cannot be evaluated for x = 0. We can fix this by defining
a special function value:
f[0] = 1;

Notice that there is no underscore on the left-hand side, so there is no pattern definition.
The full definition of f is now
?f
Global`f
f[0] = 1
f[x_] = Sin[x]/x

If the function f is called, then these definitions are checked in order of appearance
in this list. For example, if we ask for f[0], then the first entry matches and the
value 1 is returned. If we ask for f[0.3], then the first entry does not match (since
0 and 0.3 are not strictly equal), but the second entry matches since anything can be
plugged into the pattern named x. The result is sin(0.3)/0.3 = 0.985 067, which is
what we expected.

1.6.2 Delayed Function Definitions

Just like with delayed assignments (Sect. 1.2.1), we can define delayed function calls.
For comparison, we define the two functions
g1[x_] = x + RandomReal[]
0.949868 + x
g2[x_] := x + RandomReal[]

Check their effective definitions with ?g1 and ?g2, and notice that the definition of
g1 was executed immediately when you pressed shift-enter and its result assigned
to the function g1 (with a specific value for the random number, as printed out),
whereas the definition of g2 was left unevaluated and is executed each time anew
when you use the function g2:
{g1[2], g2[2]}
{2.94987, 2.33811}
{g1[2], g2[2]}
{2.94987, 2.96273}
{g1[2], g2[2]}
{2.94987, 2.18215}

1.6.3 Memoization: Functions that Remember Their Results

https://reference.wolfram.com/language/tutorial/FunctionsThatRememberValuesTheyHaveFound.html.
When we define a function that takes a long time to evaluate, we may wish to store its
output values such that if the function is called with identical parameter values again,
then we do not need to re-evaluate the function but can simply remember the already
calculated result.4 We can make use of the interplay between patterns and values,
and between immediate and delayed assignments, to construct such a function that
remembers its values from previous function calls.
See if you can understand the following definition.
F[x_] := F[x] = xˆ7

If you ask for ?F then you will simply see this definition. Now call
F[2]
128

4 This is technically called memoization: https://en.wikipedia.org/wiki/Memoization. A similar functionality can be achieved with Mathematica’s Once operator, which allows fine-grained control over the storage location, conditions, and duration of the persistent result.

and ask for ?F again. You see that the specific immediate definition of F[2] = 128
was added to the list of definitions, with the evaluated result 128 (which may have
taken a long time to calculate in a more complicated function). The next time you
call F[2], the specific definition of F[2] will be found earlier in the definitions
list than the general definition F[x_] and therefore the precomputed value of F[2]
will be returned.
When you re-define the function F after making modifications to it, you must
clear the associated remembered values in order for them to be re-computed at the
next occasion. It is a good practice to prefix every definition of a memoizing function
with a Clear command:
Clear[F];
F[x_] := F[x] = xˆ9

For function evaluations that take even longer, we may wish to save the accumulated
results to a file in order to read them back at a later time. For the above example, we
save all definitions associated with the symbol F to the file Fdef.mx with
SetDirectory[NotebookDirectory[]];
DumpSave["Fdef.mx", F];

The next time we wish to continue the calculation, we define the function F and load
all of its already known values with
Clear[F];
F[x_] := F[x] = xˆ9
SetDirectory[NotebookDirectory[]];
Get["Fdef.mx"];

1.6.4 Functions with Conditions on Their Arguments

https://reference.wolfram.com/language/guide/Patterns.html.
The Wolfram language contains a powerful pattern language that we can use to define
functions that only accept certain arguments. For function definitions we will use
three main types of patterns:
Anything-goes: A function defined as
f[x_] := xˆ2

can be called with any sort of arguments, since the pattern x_ can match anything:

f[4]
16
f[2.3-0.1I]
5.28-0.46I
f[{1,2,3,4}]
{1,4,9,16}
f[yˆ2]
yˆ4

Type-restricted: A pattern like x_Integer will only match arguments of integer type. If the function is called with a non-matching argument, then the function is not executed:
g[x_Integer] := x-3
g[x_Rational] := x
g[x_Real] := x+3
g[x_Complex] := 0
g[7]
4
g[7.1]
10.1
g[2/3]
2/3
g[2+3I]
0
g[x]
g[x]

Conditional: Complicated conditions can be specified with the /; operator:


h[x_/;x<=3] := xˆ2
h[x_/;x>3] := x-11
h[2]
4
h[5]
-6

Conditions involving a single function call returning a Boolean value, for exam-
ple x_/;PrimeQ[x], can be abbreviated with x_?PrimeQ. Other useful
“question” functions are IntegerQ, NumericQ, EvenQ, OddQ, etc. See
https://reference.wolfram.com/language/tutorial/PuttingConstraintsOnPatterns.html for more information.
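As an added illustration (not one of the book’s examples), a function that accepts only prime-number arguments can be written with this shorthand:

```mathematica
p[x_?PrimeQ] := x^2
p[7]
49
p[8]
p[8]
```

Since PrimeQ[8] gives False, the call p[8] does not match the pattern and is returned unevaluated.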

1.6.5 Functions with Optional Arguments

https://reference.wolfram.com/language/tutorial/OptionalAndDefaultArguments.html.
Function arguments can be optional, indicated with the : symbol. For each optional
argument, a default value must be defined that is used whenever the function is called
without the argument specified. The optional arguments must be the last ones in the
arguments list. There can be arbitrarily many optional arguments.
As an example, the function
f[a_, b_:5] = {a,b}

uses the default value b = 5 whenever it is called with only one argument:
f[7]
{7,5}

When called with two arguments, the second argument overrides the default value
for b:
f[7,2]
{7,2}

1.7 Rules and Replacements

https://reference.wolfram.com/language/tutorial/ApplyingTransformationRules.html.
We will often use replacement rules in the calculations of this course. A replacement
rule is an instruction x -> y that replaces any occurrence of the symbol (or pattern)
x with the symbol y. We apply such a rule with the /. or ReplaceAll operator:
a + 2 /. a -> 7
9
ReplaceAll[a + 2, a -> 7]
9
c - d /. {c -> 2, d -> 8}
-6
ReplaceAll[c - d, {c -> 2, d -> 8}]
-6

Rules can contain patterns, in the same way as we use them for defining the parameters
of functions (Sect. 1.6):
a + b /. x_ -> xˆ2
(a + b)ˆ2

Notice that here the pattern x_ matched the entire expression a + b, not the subex-
pressions a and b. To be more specific and do the replacement only at level 1 of this
expression, we can write
Replace[a + b, x_ -> xˆ2, {1}]
aˆ2 + bˆ2

Doing the replacement at level 0 gives again


Replace[a + b, x_ -> xˆ2, {0}]
(a + b)ˆ2

At other instances, restricted patterns can be used to achieve a desired result:


a + 2 /. x_Integer -> xˆ2
4 + a

Many Wolfram language functions return their results as replacement rules. For
example, the result of solving an equation is a list of rules:

s = Solve[xˆ2 - 4 == 0, x]
{{x -> -2}, {x -> 2}}

We can make use of these solutions with the replacement operator /., for example
to check the solutions:
xˆ2 - 4 /. s
{0, 0}

1.7.1 Immediate and Delayed Rules

Just as for assignments (Sect. 1.2.1) and functions (Sect. 1.6), rules can be immediate
or delayed. In an immediate rule of the form x -> y, the value of y is calculated
once upon defining the rule. In a delayed rule of the form x :> y, the value of y
is re-calculated every time the rule is applied. This can be important when the rule
is supposed to perform an action. Here is an example: we replace c by f with
{a, b, c, d, c, a, c, b} /. c -> f
{a, b, f, d, f, a, f, b}

We do the same while counting the number of replacements with


i = 0;
{a, b, c, d, c, a, c, b} /. c :> (i++; Echo[i, "replacement "]; f)
» replacement 1
» replacement 2
» replacement 3
{a, b, f, d, f, a, f, b}
i
3

In this case, the delayed rule c :> (i++; Echo[i, "replacement "]; f) is a list of commands enclosed in parentheses () and separated by semicolons. The first command increments the replacement counter i, the second prints a running commentary (see Sect. 1.8), and the third gives the result of the replacement. The result of such a list of commands is always the last expression, in this case f.

1.7.2 Repeated Rule Replacement

The /. operator uses the given list of replacement rules only once:
a /. {a -> b, b -> c}
b

The //. operator, on the other hand, uses the replacement rules repeatedly until the
result no longer changes (in this case, after two applications):
a //. {a -> b, b -> c}
c

1.8 Debugging and Finding Out How Mathematica Expressions are Evaluated

https://reference.wolfram.com/language/guide/TuningAndDebugging.html.
https://www.wolfram.com/language/elementary-introduction/2nd-ed/47-debugging-your-code.html.
The precise way Mathematica evaluates an expression depends on many details and
can become very complicated.5 For finding out more about particular cases, especially
when they aren’t evaluated in the way that you were expecting, the Trace command
may be useful. This command gives a list of all intermediate results, which helps in
understanding the way that Mathematica arrives at its output:
Trace[x - 3x + 1]
{{-(3x), -3x, -3x}, x-3x+1, 1-3x+x, 1-2x}
x = 5;
Trace[x - 3x + 1]
{{x, 5}, {{{x, 5}, 3\[Times]5, 15}, -15, -15}, 5-15+1, -9}

A more verbose trace is achieved with TracePrint:


TracePrint[y - 3y + 1]
y-3 y+1
Plus
y
-(3 y)
Times
-1
3 y
Times
3
y
-3 y
-3 y
Times
-3
y
1
y-3 y+1
1-3 y+y
1-2 y
Plus
1
-2 y
Times
-2
y
1 - 2 y

5 See https://reference.wolfram.com/language/tutorial/EvaluationOfExpressionsOverview.html.

It is very useful to print out intermediate results in a long calculation via the Echo
command, particularly during code development. Calling Echo[x,label] prints
x with the given label, and returns x; in this way, the Echo command can be simply
added to a calculation without perturbing it:
Table[Echo[i!, "building table: "], {i, 3}]
» building table: 1
» building table: 2
» building table: 6
{1, 2, 6}

In order to run your code “cleanly” after debugging it with Echo, you can either
remove all instances of Echo, or you can re-define Echo to do nothing:
Unprotect[Echo]; Echo = #1 &;

Re-running the code of In[125] now gives just the result:


Table[Echo[i!, "building table: "], {i, 3}]
{1, 2, 6}

Finally, it can be very insightful to study the “full form” of expressions, especially
when it does not match a pattern that you were expecting to match. For example, the
internal full form of ratios depends strongly on the type of numerator or denominator:
FullForm[a/b]
Times[a, Power[b, -1]]
FullForm[1/2]
Rational[1, 2]
FullForm[a/2]
Times[Rational[1, 2], a]
FullForm[1/b]
Power[b, -1]

1.8.1 Exercises

Q1.17 Why do we need the Unprotect command in In[126]?


Q1.18 To replace a ratio a/b by the function ratio[a,b], we could enter
a/b /. {x_/y_ -> ratio[x,y]}
ratio[a,b]

Why does this not work to replace the ratio 2/3 by the function ratio[2,3]?
2/3 /. {x_/y_ -> ratio[x,y]}
2/3

1.9 Many Ways to Define the Factorial Function


[Supplementary Material 1]

The following list of definitions of the factorial function is based on the Wolfram demo
https://www.wolfram.com/training/videos/EDU002/. Try to understand as many of
these definitions as possible; they illustrate the breadth of programming paradigms the Wolfram language supports. In practice this means that for most problems
you can pick the programming paradigm that suits your way of thinking best, instead
of being forced into one way or another. The different paradigms have different
advantages and disadvantages, which may become clearer to you as you become
more familiar with them.
You must call Clear[f] between different definitions!
1. Define the function f to be an alias of the built-in function Factorial: calling
f[5] is now strictly the same thing as calling Factorial[5], which in turn
is the same thing as calling 5!.
f = Factorial;

2. A call to f is forwarded to the function “!”: calling f[5] triggers the evaluation
of 5!.
f[n_] := n!

3. Use the mathematical definition n! = Γ(n + 1):


f[n_] := Gamma[n+1]

4. Use the mathematical definition n! = ∏_{i=1}^{n} i:
f[n_] := Product[i, {i,n}]

5. Rule-based recursion, using the Wolfram language’s built-in pattern-matching capabilities: calling f[5] leads to a call of f[4], which leads to a call of f[3],
and so on until f[1] immediately returns the result 1, after which the program
unrolls the recursion stack and does the necessary multiplications:
f[1] = 1;
f[n_] := n*f[n-1]

6. The same recursion but without rules (no pattern-matching):


f[n_] := If[n == 1, 1, n*f[n-1]]

7. Define the same recursion through functional programming: f is a function whose name is #0 and whose first (and only) argument is #1. The end of the function definition is marked with &.
f = If[#1 == 1, 1, #1*#0[#1-1]]&;

8. Procedural programming with a Do loop:



f[n_] := Module[{t = 1},
  Do[t = t*i, {i, n}];
  t]

9. Procedural programming with a For loop: this is how you would compute factorials in procedural programming languages like C. It is a very precise step-by-step
prescription of how exactly the computer is supposed to do the calculation.
f[n_] := Module[{t = 1, i},
For[i = 1, i <= n, i++,
t *= i];
t]

10. Make a list of the numbers 1 . . . n (with Range[n]) and then multiply them
together at once, by applying the function Times to this list. This is the most ele-
gant way of multiplying all these numbers together, because both the generation
of the list of integers and their multiplication are done with internally optimized
methods. The programmer merely specifies what he would like the computer to
do, and not how it is to be done.
f[n_] := Times @@ Range[n]

11. Make a list of the numbers 1 . . . n and then multiply them together one after the
other.
f[n_] := Fold[Times, 1, Range[n]]

12. Functional programming: make a list of functions {t → t, t → 2t, t → 3t, …, t → nt}, and then, starting with the number 1, apply each of these functions once.
f[n_] := Fold[#2[#1]&, 1, Array[Function[t, #1*t]&, n]]

13. Construct a list whose length we know to be n!:


f[n_] := Length[Permutations[Range[n]]]

14. Use repeated pattern-based replacement (//., see Sect. 1.7.2) to find the facto-
rial: start with the object {1, n} and apply the given rule until the result no longer
changes because the pattern no longer matches.
f[n_] := First[{1,n} //. {a_,b_/;b>0} :> {b*a,b-1}]

15. Build a string whose length is n!:


f[n_] := StringLength[Fold[StringJoin[Table[#1, {#2}]]&, "A", Range[n]]]

16. Starting from the number n, repeatedly replace each number m by a list contain-
ing m times the number m − 1. At the end, we have a list of lists of … of lists

that overall contains n! times the number 1. Flatten it out and count the number
of elements.
f[n_] := Length[Flatten[n //. m_ /; m > 1 :> Table[m - 1, {m}]]]

17. Analytically calculate d^n(x^n)/dx^n, the nth derivative of x^n:
f[n_] := D[xˆn, {x, n}]

1.9.1 Exercises

Q1.19 In which ones of the definitions of Sect. 1.9 can you replace a delayed assign-
ment (:=) with an immediate assignment (=) or vice-versa? What changes
if you do this replacement? (see Sect. 1.2.1).
Q1.20 In which ones of the definitions of Sect. 1.9 can you replace a delayed rule
(:>) with an immediate rule (->) or vice-versa? What changes if you do this
replacement? (see Sect. 1.7.1).
Q1.21 Can you use the trick of Sect. 1.6.3 for any of the definitions of Sect. 1.9?
Q1.22 Write two very different programs that calculate the first hundred Fibonacci
numbers {1, 1, 2, 3, 5, 8, . . . }, where each number is the sum of the two pre-
ceding ones.

1.10 Vectors, Matrices, Tensors

In this book we will use vectors and matrices to represent quantum states and opera-
tors, respectively.

1.10.1 Vectors

https://reference.wolfram.com/language/tutorial/VectorOperations.html.
In the Wolfram language, vectors are represented as lists of objects, for example lists
of real or complex numbers:
v = {1,2,3,2,1,7+I};
Length[v]
6

You can access any element by its index, using double brackets, with the first element
having index 1 (as in Fortran or Matlab), not 0 (as in C, Java, or Python):

v[[4]]
2

Negative indices count from the end of the list:


v[[-1]]
7+I

Lists can contain arbitrary elements (for example strings, graphics, expressions, lists,
functions, etc.).

If two vectors a and b of equal length are defined, then their scalar product a · b
is calculated with
a = {0.1, 0.2, 0.3 + 2I};
b = {-0.27I, 0, 2};
Conjugate[a].b
0.6 - 4.027I

Vectors of equal length can be element-wise added, subtracted, multiplied etc. with
the usual operators:
a + b
{0.1 - 0.27I, 0.2, 2.3 + 2.I}
2 a
{0.2, 0.4, 0.6 + 4.I}

1.10.2 Matrices

https://reference.wolfram.com/language/tutorial/BasicMatrixOperations.html.
Matrices are lists of lists, where each sublist describes a row of the matrix:
M = {{3,2,7},{1,1,2},{0,-1,5},{2,2,1}};
Dimensions[M]
{4, 3}

In this example, M is a 4 × 3 matrix. Pretty-printing a matrix is done with the MatrixForm wrapper,
MatrixForm[M]

Accessing matrix elements is analogous to accessing vector elements:


M[[1,3]]
7
M[[2]]
{1, 1, 2}

Matrices can be transposed with Transpose[M].


Matrix–vector and matrix–matrix multiplications are done with the . operator:
M.a
{2.8 + 14.I, 0.9 + 4.I, 1.3 + 10.I, 0.9 + 2.I}
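The same . operator also performs matrix–matrix multiplication. As a small added check (using the 4 × 3 matrix M defined above), multiplying M with its transpose yields a 4 × 4 matrix:

```mathematica
Dimensions[M.Transpose[M]]
{4, 4}
```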

1.10.3 Sparse Vectors and Matrices

https://reference.wolfram.com/language/guide/SparseArrays.html.
Large matrices can take up enormous amounts of computer memory. In practical
situations we are often dealing with matrices that are “sparse”, meaning that most
of their entries are zero. A much more efficient way of storing them is therefore as
a list of only their nonzero elements, using the SparseArray function.
A given vector or matrix is converted to sparse representation with
M = {{0,3,0,0,0,0,0,0,0,0},
{0,0,0,-1,0,0,0,0,0,0},
{0,0,0,0,0,0,0,0,0,0}};
Ms = SparseArray[M]
SparseArray[<2>, {3, 10}]

where the output shows that Ms is a 3 × 10 sparse matrix with 2 non-zero entries.
We could have entered this matrix more easily by giving the list of non-zero entries,
Ms = SparseArray[{{1, 2} -> 3, {2, 4} -> -1}, {3, 10}];

which we can find out from


ArrayRules[Ms]
{{1, 2} -> 3, {2, 4} -> -1, {_, _} -> 0}

which includes a specification of the default pattern {_,_}. This sparse array is
converted back into a normal array with
Normal[Ms]
{{0,3,0,0,0,0,0,0,0,0},
{0,0,0,-1,0,0,0,0,0,0},
{0,0,0,0,0,0,0,0,0,0}}

Sparse arrays and vectors can be used just like full arrays and vectors (they are
internally converted automatically whenever necessary). But for some linear algebra
operations they can be much more efficient. A matrix multiplication of two sparse
matrices, for example, scales only with the number of non-zero elements of the
matrices, not with their size.
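To see this efficiency in practice, here is a small added experiment (the timings depend on your machine) comparing a sparse diagonal matrix with its dense equivalent:

```mathematica
n = 2000;
As = SparseArray[{i_, i_} -> 2., {n, n}];  (* sparse n×n diagonal matrix *)
Ad = Normal[As];                           (* the same matrix, stored densely *)
AbsoluteTiming[As.As;]  (* fast: only the n non-zero entries are touched *)
AbsoluteTiming[Ad.Ad;]  (* much slower: every entry enters the computation *)
```

The sparse product completes in a small fraction of the time needed for the dense product.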

1.10.4 Matrix Diagonalization

“Solving” the time-independent Schrödinger equation, as we will be doing in Sect. 2.2, involves calculating the eigenvalues and eigenvectors of Hermitian6 matrices.
In what follows it is assumed that we have defined H as a Hermitian matrix. As
an example we will use
H = {{0, 0.3, I, 0},
{0.3, 1, 0, 0},
{-I, 0, 1, -0.2},
{0, 0, -0.2, 3}};

Eigenvalues
The eigenvalues of a matrix H are computed with
Eigenvalues[H]
{3.0237, 1.63842, 0.998322, -0.660442}

Notice that these eigenvalues (energy values) are not necessarily sorted, even though
in this example they appear in descending order. For a sorted list we use
Sort[Eigenvalues[H]]
{-0.660442, 0.998322, 1.63842, 3.0237}

For very large matrices H, and in particular for sparse matrices (see Sect. 1.10.3), it
is computationally inefficient to calculate all eigenvalues. Further, we are often only
interested in the lowest-energy eigenvalues and eigenvectors. There are very efficient
algorithms for calculating extremal eigenvalues,7 which can be used by specifying
options to the Eigenvalues function: if we only need the largest two eigenvalues, for example, we call
Eigenvalues[H, 2, Method -> {"Arnoldi",
"Criteria" -> "RealPart",
MaxIterations -> 10ˆ6}]
{3.0237, 1.63842}

There is no direct way to calculate the smallest eigenvalues; but since the smallest
eigenvalues of H are the largest eigenvalues of -H we can use
-Eigenvalues[-H, 2, Method -> {"Arnoldi",
"Criteria" -> "RealPart",
MaxIterations -> 10ˆ6}]
{0.998322, -0.660442}

Eigenvectors
The eigenvectors of a matrix H are computed with

6 A complex matrix H is Hermitian if H = H†. See https://en.wikipedia.org/wiki/Hermitian_matrix.
7 Arnoldi–Lanczos algorithm: https://en.wikipedia.org/wiki/Lanczos_algorithm.

Eigenvectors[H]
{{0.-0.0394613I, 0.-0.00584989I, -0.117564, 0.992264},
{0.+0.533642I, 0.+0.250762I, 0.799103, 0.117379},
{0.-0.0053472I, 0.+0.955923I, -0.292115, -0.029187},
{0.-0.844772I, 0.+0.152629I, 0.512134, 0.0279821}}

In this case of a 4 × 4 matrix, this generates a list of four ortho-normal 4-vectors.


Usually we are interested in calculating the eigenvalues and eigenvectors at the
same time:
Eigensystem[H]
{{3.0237, 1.63842, 0.998322, -0.660442},
{{0.-0.0394613I, 0.-0.00584989I, -0.117564, 0.992264},
{0.+0.533642I, 0.+0.250762I, 0.799103, 0.117379},
{0.-0.0053472I, 0.+0.955923I, -0.292115, -0.029187},
{0.-0.844772I, 0.+0.152629I, 0.512134, 0.0279821}}}

which generates a list containing the eigenvalues and the eigenvectors. The ordering
of the elements in the eigenvalues list corresponds to the ordering in the eigenvectors
list; but the sorting order is generally undefined. To generate a list of (eigenvalue,
eigenvector) pairs in ascending order of eigenvalues, we calculate
Sort[Transpose[Eigensystem[H]]]
{{-0.660442, {0.-0.844772I, 0.+0.152629I, 0.512134, 0.0279821}},
{0.998322, {0.-0.0053472I, 0.+0.955923I, -0.292115, -0.029187}},
{1.63842, {0.+0.533642I, 0.+0.250762I, 0.799103, 0.117379}},
{3.0237, {0.-0.0394613I, 0.-0.00584989I, -0.117564, 0.992264}}}

To generate a sorted list of eigenvalues eval and a corresponding list of eigenvectors evec we calculate
{eval,evec} = Transpose[Sort[Transpose[Eigensystem[H]]]];
eval
{-0.660442, 0.998322, 1.63842, 3.0237}
evec
{{0.-0.844772I, 0.+0.152629I, 0.512134, 0.0279821},
{0.-0.0053472I, 0.+0.955923I, -0.292115, -0.029187},
{0.+0.533642I, 0.+0.250762I, 0.799103, 0.117379},
{0.-0.0394613I, 0.-0.00584989I, -0.117564, 0.992264}}

The trick with calculating only the lowest-energy eigenvalues can be applied to eigenvector calculations as well, since the eigenvectors of -H and H are the same:
{eval,evec} = Transpose[Sort[Transpose[-Eigensystem[-H, 2,
Method -> {"Arnoldi", "Criteria" -> "RealPart", MaxIterations -> 10ˆ6}]]]];
eval
{-0.660442, 0.998322}
evec
{{-0.733656+0.418794I, 0.132553-0.0756656I,
-0.253889-0.444771I, -0.0138721-0.0243015 I},
{-0.000575666-0.00531612I, 0.102912+0.950367I,
-0.290417+0.0314484I, -0.0290174+0.0031422I}}

Notice that these eigenvectors are not the same as those calculated further above!
This difference is due to arbitrary multiplications of the eigenvectors with phase
factors e^{iφ}.
To check that the vectors in evec are ortho-normalized, we calculate the matrix
product

Conjugate[evec].Transpose[evec] //Chop //MatrixForm

and verify that the matrix of scalar products is indeed equal to the unit matrix.
To check that the vectors in evec are indeed eigenvectors of H, we calculate all
matrix elements of H in this basis of eigenvectors:
Conjugate[evec].H.Transpose[evec] //Chop //MatrixForm

and verify that the result is a diagonal matrix whose diagonal elements are exactly
the eigenvalues eval.

1.10.5 Tensor Operations

https://reference.wolfram.com/language/guide/RearrangingAndRestructuringLists.html.
We have seen above that in the Wolfram language, a vector is a list of numbers
(Sect. 1.10.1) and a matrix is a list of lists of numbers (Sect. 1.10.2). Higher-rank
tensors are correspondingly represented as lists of lists of …of lists of numbers. In
this section we describe general tools for working with tensors, which extend the
methods used for vectors and matrices. See Sect. 2.4.3 for a concrete application
of higher-rank tensors. We note that the sparse techniques of Sect. 1.10.3 naturally
extend to higher-rank tensors.
As an example, we start by defining a list (i.e., a vector) containing 24 elements:
v = Range[24]
{1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24}
Dimensions[v]
{24}

We have chosen the elements in this vector to indicate their position in order to make
the following transformations easier to understand.
Reshaping
We reshape the list v into a 2 × 3 × 4 tensor with
t = ArrayReshape[v, {2,3,4}]
{{{1,2,3,4},{5,6,7,8},{9,10,11,12}},
{{13,14,15,16},{17,18,19,20},{21,22,23,24}}}
Dimensions[t]
{2, 3, 4}

Notice that the order of the elements has not changed; but they are now arranged as a
list of lists of lists of numbers. Alternatively, we could reshape v into a 2 × 2 × 3 × 2
tensor with

u = ArrayReshape[v, {2,2,3,2}]
{{{{1,2},{3,4},{5,6}},{{7,8},{9,10},{11,12}}},
{{{13,14},{15,16},{17,18}},{{19,20},{21,22},{23,24}}}}
Dimensions[u]
{2, 2, 3, 2}

Flattening
The reverse operation is called flattening:
Flatten[t] == Flatten[u] == v
True

Tensor flattening can be applied more specifically, without flattening the entire struc-
ture into a single list. As an example, in u we flatten indices 1&2 together and
indices 3&4 together, to find a 4 × 6 matrix that we could have calculated directly
with ArrayReshape[v, {4,6}]:
Flatten[u, {{1,2}, {3,4}}]
{{1,2,3,4,5,6},{7,8,9,10,11,12},{13,14,15,16,17,18},{19,20,21,22,23,24}}
% == ArrayReshape[v, {4,6}]
True

We sometimes use the ArrayFlatten command, which is just a special case of Flatten with fixed arguments, flattening indices 1&3 together and indices 2&4 together:
ArrayFlatten[u] == Flatten[u, {{1,3}, {2,4}}]
True

Transposing
A tensor transposition is a re-ordering of a tensor’s indices. For example,
tt = Transpose[t, {2,3,1}]
{{{1,5,9},{13,17,21}},{{2,6,10},{14,18,22}},
{{3,7,11},{15,19,23}},{{4,8,12},{16,20,24}}}
Dimensions[tt]
{4, 2, 3}

generates a 4 × 2 × 3-tensor tt, where the first index of t is the second index of
tt, the second index of t is the third index of tt, and the third index of t is the
first index of tt; this order of index shuffling is given in the parameter list {2,3,1}
meaning {1st, 2nd, 3rd} → {2nd, 3rd, 1st}. More explicitly,
Table[t[[i,j,k]] == tt[[k,i,j]], {i,2}, {j,3}, {k,4}]
{{{True,True,True,True},{True,True,True,True},
{True,True,True,True}},{{True,True,True,True},
{True,True,True,True},{True,True,True,True}}}

Contracting
As a generalization of a scalar product, indices of equal length of a tensor can be
contracted. This is the operation of summing over an index that appears twice in

the list of indices. For example, contracting indices 2 and 5 of the rank-6 tensor X_{a,b,c,d,e,f} yields the rank-4 tensor with elements Y_{a,c,d,f} = Σ_i X_{a,i,c,d,i,f}.
For example, we can either contract indices 1&2 in u, or indices 1&4, or indices
2&4, since they are all of length 2:
TensorContract[u, {1, 2}]
{{20, 22}, {24, 26}, {28, 30}}
TensorContract[u, {1, 4}]
{{15, 19, 23}, {27, 31, 35}}
TensorContract[u, {2, 4}]
{{9, 13, 17}, {33, 37, 41}}
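As an added consistency check: contracting the two indices of a square matrix is the same as taking its trace:

```mathematica
A = {{1, 2}, {3, 4}};
TensorContract[A, {1, 2}] == Tr[A]
True
```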

1.10.6 Exercises

Q1.23 Calculate the eigenvalues and eigenvectors of the Pauli matrices:


https://en.wikipedia.org/wiki/Pauli_matrices
Are the eigenvectors ortho-normal? If not, find an ortho-normal set.
Q1.24 After In[203], try to contract indices 3&4 in the tensor u. What went
wrong?

1.11 Complex Numbers

By default all variables in the Wolfram language are assumed to be complex numbers,
unless otherwise specified. All mathematical functions can take complex numbers
as their input, often by analytic continuation.8
The most commonly used functions on complex numbers are Conjugate, Re,
Im, Abs, and Arg. When applied to numerical arguments they do what we expect:
Conjugate[2 + 3I]
2 - 3I
Im[0.7]
0

When applied to variable arguments, however, they fail and frustrate the inexperi-
enced user:
Conjugate[x+I*y]
Conjugate[x] - I*Conjugate[y]
Im[a]
Im[a]

This behavior is due to Mathematica not knowing that x, y, and a in these examples
are real-valued. There are several ways around this, all involving assumptions. The

8 See https://en.wikipedia.org/wiki/Analytic_continuation.

first is to use the ComplexExpand function, which assumes that all variables are
real:
Conjugate[x+I*y] //ComplexExpand
x - I*y
Im[a] //ComplexExpand
0

The second is to use explicit local assumptions, which may be more specific than
assuming that all variables are real-valued:
Assuming[Element[x, Reals] && Element[y, Reals],
Conjugate[x + I*y] //FullSimplify]
x - I*y
Assuming[Element[a, Reals], Im[a] //FullSimplify]
0

The third is to use global assumptions (in general, global system variables start with
the $ sign):
$Assumptions = Element[x, Reals] && Element[y, Reals] && Element[a, Reals];
Conjugate[x+I*y] //FullSimplify
x - I*y
Im[a] //FullSimplify
0
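Such global assumptions remain active for all subsequent evaluations in the session. As an added note: to return to the default behavior (no assumptions), the variable can be reset to its default value:

```mathematica
$Assumptions = True;
```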

1.12 Units

https://reference.wolfram.com/language/tutorial/UnitsOverview.html.
The Wolfram language is capable of dealing with units of measure, as required for
physical calculations. For example, we can make the assignment
s = Quantity[3, "m"];

to specify that s should be three meters. A large number of units can be used, as well
as physical constants:
kB = Quantity["BoltzmannConstant"];

will define the variable kB to be Boltzmann’s constant. Take note that complicated or
slightly unusual quantities are evaluated through the online service Wolfram Alpha,
which means that you need an internet connection in order to evaluate them. For
this and other reasons, unit calculations are very slow and to be avoided whenever
possible.
If you are unsure whether your expression has been interpreted correctly, the full
internal form

FullForm[kB]
Quantity[1, "BoltzmannConstant"]

usually helps. Alternatively, converting to SI units can often clarify a definition:


UnitConvert[kB]
Quantity[1.38065*10ˆ-23, "kg mˆ2/(sˆ2 K)"]

In principle, we can use this mechanism to do all the calculations in this book with
units; however, for the sake of generality (as many other computer programs cannot
deal with units) when we do numerical calculations, we will convert every quantity
into dimensionless form in what follows.
In order to eliminate units from a calculation, we must determine a set of units
in which to express the relevant quantities. This means that every physical quantity
x is expressed as the product of a unit and a dimensionless multiplier. The actual
calculations are performed only with the dimensionless multipliers. A smart choice
of units can help in implementing a problem.
As an example we calculate the acceleration of an A380 airplane (m = 560 t) due
to its jet engines (F = 4 × 311 kN). The easiest way is to use the Wolfram language’s
built-in unit processing:
F = Quantity[4*311, "kN"];
m = Quantity[560, "t"];
a = UnitConvert[F/m, "m/sˆ2"] //N
2.22143 m/sˆ2

This method is, however, much slower than using purely numerical calculations, and
furthermore cannot be generalized to matrix and vector algebra.
Now we do the same calculation with dimensionless multipliers only. For this, we
first set up a consistent set of units, for example the SI units:
ForceUnit = Quantity["Newtons"];
MassUnit = Quantity["Kilograms"];
AccelerationUnit = UnitConvert[ForceUnit/MassUnit]
1 m/sˆ2

It is important that these units are consistent with each other, i.e., that the product of
the mass and acceleration units gives the force unit. The calculation is now effected
with a simple numerical division a = F/m:
F = Quantity[4*311, "kN"] / ForceUnit
1244000
m = Quantity[560, "t"] / MassUnit
560000
a = F/m //N
2.22143
This result of 2.221 43 acceleration units, meaning 2.221 43 m/s^2, is the same as Out[221].
We can do this type of calculation in any consistent unit system: as a second example, we use the unit definitions
ForceUnit = Quantity["KiloNewtons"];
MassUnit = Quantity["AtomicMassUnit"];
AccelerationUnit = UnitConvert[ForceUnit/MassUnit]
6.022141*10ˆ29 m/sˆ2
and calculate
F = Quantity[4*311, "kN"] / ForceUnit
1244
m = Quantity[560, "t"] / MassUnit
3.3723989*10ˆ32
a = F/m //N
3.68877*10ˆ-30
This result is again the same as Out[221], because 3.688 77 × 10^−30 acceleration units are 3.688 77 × 10^−30 × 6.022 141 × 10^29 m/s^2.
It is not important which unit system we use. In practice, it is often convenient to
use a system of units that yields dimensionless multipliers that are on the order of
unity; but this is not a strict requirement.
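The same dimensionless-units workflow can be sketched outside Mathematica as well. The following Python snippet is an illustration only (it is not part of the book's Mathematica code): it repeats the SI-unit version of the A380 calculation with plain numbers, dividing each physical quantity by its unit and computing with the dimensionless multipliers.

```python
# Sketch of the dimensionless-units workflow in plain Python (no unit
# support): fix a consistent set of units, divide each physical quantity
# by its unit, and compute with the dimensionless multipliers only.
force_unit = 1.0                      # 1 N = 1 kg m/s^2
mass_unit = 1.0                       # 1 kg
accel_unit = force_unit / mass_unit   # consistent: 1 m/s^2

F = 4 * 311e3 / force_unit            # 4 x 311 kN expressed in N
m = 560e3 / mass_unit                 # 560 t expressed in kg
a = F / m                             # dimensionless multiplier of 1 m/s^2
print(round(a, 5))                    # 2.22143
```

The consistency requirement is visible here: because accel_unit was built from force_unit and mass_unit, the result a is automatically in units of m/s^2.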
Chapter 2
Quantum Mechanics:
States and Operators
If you are like most students of quantum mechanics, then you have begun your quantum studies by hearing stories about experiments such as Young's double slit,1 the Stern–Gerlach spin quantization,2 and Heisenberg's uncertainty principle.3 Many concepts and analogies are introduced to get an idea of what quantum mechanics is about and to begin to develop an intuition for it. Yet there is a large gap between this kind of qualitative understanding and being able to solve even the simplest quantum-mechanical problems on a computer, essentially because a computer only works with numbers, not with stories, analogies, or visualizations.

1 See https://en.wikipedia.org/wiki/Double-slit_experiment.
2 See https://en.wikipedia.org/wiki/Stern-Gerlach_experiment.
3 See https://en.wikipedia.org/wiki/Uncertainty_principle.

Electronic supplementary material The online version of this chapter (https://doi.org/10.1007/978-981-13-7588-0_2) contains supplementary material, which is available to authorized users.

© Springer Nature Singapore Pte Ltd. 2020
R. Schmied, Using Mathematica for Quantum Mechanics,
https://doi.org/10.1007/978-981-13-7588-0_2
The goal of this chapter is to connect the fundamental quantum-mechanical con-
cepts to representations that a computer can understand. We develop the tools that
will be used in the remaining chapters to express and solve interesting quantum-
mechanical problems.

2.1 Basis Sets and Representations

Quantum-mechanical problems are usually specified in terms of operators and quantum states. The quantum states are elements of a Hilbert space; the operators act
on such vectors. How can these objects be represented on a computer, which only
understands numbers but not Hilbert spaces?
In order to find a computer-representable form of these abstract objects, we assume that we know an ortho-normal4 basis \{|i\rangle\}_i of this Hilbert space, with scalar product \langle i|j\rangle = \delta_{ij}. In Sect. 2.4 we will talk about how to construct such bases. For now we make the assumption that this basis is complete, such that \sum_i |i\rangle\langle i| = \mathbb{1}. We will see in Sect. 2.1.1 how to deal with incomplete basis sets.
Given any operator \hat{A} acting on this Hilbert space, we use the completeness relation twice to find

\hat{A} = \mathbb{1} \cdot \hat{A} \cdot \mathbb{1} = \Big[\sum_i |i\rangle\langle i|\Big] \cdot \hat{A} \cdot \Big[\sum_j |j\rangle\langle j|\Big] = \sum_{ij} \underbrace{\langle i|\hat{A}|j\rangle}_{A_{ij}}\, |i\rangle\langle j|.   (2.1)

We define a numerical matrix A with elements A_{ij} = \langle i|\hat{A}|j\rangle \in \mathbb{C} to rewrite this as

\hat{A} = \sum_{ij} A_{ij}\, |i\rangle\langle j|.   (2.2)
The same can be done with a state vector |\psi\rangle: using the completeness relation,

|\psi\rangle = \mathbb{1} \cdot |\psi\rangle = \Big[\sum_i |i\rangle\langle i|\Big] \cdot |\psi\rangle = \sum_i \underbrace{\langle i|\psi\rangle}_{\psi_i}\, |i\rangle,   (2.3)

and by defining a numerical vector \vec{\psi} with elements \psi_i = \langle i|\psi\rangle \in \mathbb{C} the state vector is

|\psi\rangle = \sum_i \psi_i\, |i\rangle.   (2.4)
4 The following calculations can be extended to situations where the basis is not ortho-normal. For
the scope of this book we are however not interested in this complication.
Both the matrix A and the vector \vec{\psi} are complex-valued objects which can be represented in any computer system. Equations (2.2) and (2.4) serve to convert between
Hilbert-space representations and number-based (matrix/vector-based) representa-
tions. These equations are at the center of what it means to find a computer represen-
tation of a quantum-mechanical problem.
2.1.1 Incomplete Basis Sets
For infinite-dimensional Hilbert spaces we must usually content ourselves with finite
basis sets that approximate the low-energy physics (or, more generally, the physically
relevant dynamics) of the problem. In practice this means that an orthonormal basis
set may not be complete,

\sum_i |i\rangle\langle i| = \hat{P},   (2.5)
which is the projector onto that subspace of the full Hilbert space which the basis is
capable of describing. We denote Q̂ = 1 − P̂ as the complement of this projector:
Q̂ is the projector onto the remainder of the Hilbert space that is left out of this
truncated description. The equivalent of Eq. (2.1) is then

\hat{A} = \mathbb{1} \cdot \hat{A} \cdot \mathbb{1} = (\hat{P}+\hat{Q}) \cdot \hat{A} \cdot (\hat{P}+\hat{Q}) = \hat{P}\hat{A}\hat{P} + \hat{P}\hat{A}\hat{Q} + \hat{Q}\hat{A}\hat{P} + \hat{Q}\hat{A}\hat{Q}
  = \underbrace{\sum_{ij} A_{ij}\, |i\rangle\langle j|}_{\text{within described subspace}} + \underbrace{\hat{P}\hat{A}\hat{Q} + \hat{Q}\hat{A}\hat{P}}_{\text{neglected coupling to (high-energy) part}} + \underbrace{\hat{Q}\hat{A}\hat{Q}}_{\text{neglected (high-energy) part}}   (2.6)
In the same way, the equivalent of Eq. (2.3) is

|\psi\rangle = \mathbb{1} \cdot |\psi\rangle = (\hat{P}+\hat{Q}) \cdot |\psi\rangle = \underbrace{\sum_i \psi_i\, |i\rangle}_{\text{within described subspace}} + \underbrace{\hat{Q}|\psi\rangle}_{\text{neglected (high-energy) part}}   (2.7)
Since Q̂ is the projector onto the neglected subspace, the component Q̂|ψ of Eq. (2.7)
is the part of the quantum state |ψ that is left out of the description in the truncated
basis. In specific situations we will need to make sure that all terms involving Q̂ in
Eqs. (2.6) and (2.7) can be safely neglected. See Eq. (4.28) for a problematic example
of an operator expressed in a truncated basis.
Variational Ground-State Calculations
Calculating the ground state of a Hamiltonian in an incomplete basis set is a spe-
cial case of the variational method.5 As we will see for example in Sect. 4.1.7, the

5 See https://en.wikipedia.org/wiki/Variational_method_(quantum_mechanics).
variational ground-state energy is always larger than the true ground-state energy.
When we add more basis functions, the numerically calculated ground-state energy
decreases monotonically. At the same time, the overlap (scalar product) of the numer-
ically calculated ground state with the true ground state monotonically increases to
unity. These convergence properties often allow us to judge whether or not a chosen
computational basis set is sufficiently complete.
2.1.2 Exercises
Q2.1 We describe a spin-1/2 system in the basis B containing the two states

|{\Uparrow}_{\vartheta,\varphi}\rangle = \cos(\vartheta/2)\, |{\uparrow}\rangle + e^{i\varphi} \sin(\vartheta/2)\, |{\downarrow}\rangle
|{\Downarrow}_{\vartheta,\varphi}\rangle = -e^{-i\varphi} \sin(\vartheta/2)\, |{\uparrow}\rangle + \cos(\vartheta/2)\, |{\downarrow}\rangle   (2.8)
1. Show that the basis B = \{|{\Uparrow}_{\vartheta,\varphi}\rangle, |{\Downarrow}_{\vartheta,\varphi}\rangle\} is orthonormal.
2. Show that the basis B is complete: |{\Uparrow}_{\vartheta,\varphi}\rangle\langle{\Uparrow}_{\vartheta,\varphi}| + |{\Downarrow}_{\vartheta,\varphi}\rangle\langle{\Downarrow}_{\vartheta,\varphi}| = \mathbb{1}.
3. Express the states |{\uparrow}\rangle and |{\downarrow}\rangle as vectors in the basis B.
4. Express the Pauli operators \hat{\sigma}_x, \hat{\sigma}_y, \hat{\sigma}_z as matrices in the basis B.
5. Show that |{\Uparrow}_{\vartheta,\varphi}\rangle and |{\Downarrow}_{\vartheta,\varphi}\rangle are eigenvectors of \hat{\sigma}(\vartheta,\varphi) = \hat{\sigma}_x \sin(\vartheta)\cos(\varphi) + \hat{\sigma}_y \sin(\vartheta)\sin(\varphi) + \hat{\sigma}_z \cos(\vartheta). What are the eigenvalues?
Q2.2 The eigenstate basis for the description of the infinite square well of unit width is made up of the ortho-normalized functions

\langle x|n\rangle = \phi_n(x) = \sqrt{2}\, \sin(n\pi x)   (2.9)

defined on the interval [0, 1], with n \in \{1, 2, 3, \ldots\}.

1. Calculate the function P_\infty(x, y) = \langle x|\big[\sum_{n=1}^{\infty} |n\rangle\langle n|\big]|y\rangle.
2. In computer-based calculations we limit the basis set to n \in \{1, 2, 3, \ldots, n_{\max}\} for some large value of n_{\max}. Using Mathematica, calculate the function P_{n_{\max}}(x, y) = \langle x|\big[\sum_{n=1}^{n_{\max}} |n\rangle\langle n|\big]|y\rangle (use the Sum function). Make a plot for n_{\max} = 10 (use the DensityPlot function).
3. What does the function P represent?
2.2 Time-Independent Schrödinger Equation
The time-independent Schrödinger equation is
\hat{H}|\psi\rangle = E|\psi\rangle.   (2.10)
As in Sect. 2.1 we use a computational basis to express the Hamiltonian operator \hat{H} and the quantum state |\psi\rangle as

\hat{H} = \sum_{ij} H_{ij}\, |i\rangle\langle j|, \qquad |\psi\rangle = \sum_i \psi_i\, |i\rangle.   (2.11)
With these substitutions the Schrödinger equation becomes

\Big[\sum_{ij} H_{ij}\, |i\rangle\langle j|\Big] \cdot \Big[\sum_k \psi_k\, |k\rangle\Big] = E \sum_\ell \psi_\ell\, |\ell\rangle
\sum_{ijk} H_{ij} \psi_k \underbrace{\langle j|k\rangle}_{=\delta_{jk}}\, |i\rangle = E \sum_\ell \psi_\ell\, |\ell\rangle
\sum_{ij} H_{ij} \psi_j\, |i\rangle = E \sum_\ell \psi_\ell\, |\ell\rangle   (2.12)

Multiplying this equation by \langle m| from the left, and using the orthonormality of the basis set, gives

\langle m| \sum_{ij} H_{ij} \psi_j\, |i\rangle = \langle m| E \sum_\ell \psi_\ell\, |\ell\rangle
\sum_{ij} H_{ij} \psi_j \underbrace{\langle m|i\rangle}_{=\delta_{mi}} = E \sum_\ell \psi_\ell \underbrace{\langle m|\ell\rangle}_{=\delta_{m\ell}}
\sum_j H_{mj} \psi_j = E \psi_m   (2.13)
In matrix notation this can be written as

H \cdot \vec{\psi} = E \vec{\psi}.   (2.14)
This is the central equation of this book. It is the time-independent Schrödinger equation in a form that computers can understand, namely an eigenvalue equation in terms of numerical (complex) matrices and vectors.
If you think that there is no difference between Eqs. (2.10) and (2.14), then I invite
you to re-read this section as I consider it extremely important for what follows in
this course. You can think of Eq. (2.10) as an abstract relationship between operators
and vectors in Hilbert space, while Eq. (2.14) is a numerical representation of this
relationship in a concrete basis set {|i}i . They both contain the exact same informa-
tion (since we converted one to the other in a few lines of mathematics) but they are
conceptually very different, as one is understandable by a computer and the other is
not.
2.2.1 Diagonalization
The matrix form of Eq. (2.14) of the Schrödinger equation is an eigenvalue equation
as you know from linear algebra. Given a matrix of complex numbers H we can find the eigenvalues E_i and eigenvectors \vec{\psi}_i using Mathematica's built-in procedures, as described in Sect. 1.10.4.
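For comparison, the same numerical diagonalization can be sketched outside Mathematica with NumPy. The 2×2 matrix below is just an illustrative Hermitian example (it is not a matrix taken from the text):

```python
import numpy as np

# Illustrative eigenvalue problem H.psi = E.psi, Eq. (2.14), for a
# Hermitian matrix (here the Pauli matrix sigma_x).
H = np.array([[0, 1],
              [1, 0]], dtype=complex)
E, psi = np.linalg.eigh(H)   # eigenvalues ascending, eigenvectors in columns
assert np.allclose(E, [-1, 1])
# each column of psi solves the eigenvalue equation
for i in range(2):
    assert np.allclose(H @ psi[:, i], E[i] * psi[:, i])
```

The eigh routine, like Mathematica's built-in eigensolvers, exploits Hermiticity: the returned eigenvalues are guaranteed real and the eigenvectors orthonormal.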
2.2.2 Exercises
Q2.3 Express the spin-1/2 Hamiltonian
\hat{H} = \sin(\vartheta)\cos(\varphi)\, \hat{\sigma}_x + \sin(\vartheta)\sin(\varphi)\, \hat{\sigma}_y + \cos(\vartheta)\, \hat{\sigma}_z   (2.15)

in the basis \{|{\uparrow}\rangle, |{\downarrow}\rangle\}, and calculate its eigenvalues and eigenvectors. NB: \hat{\sigma}_{x,y,z} are the Pauli operators.
2.3 Time-Dependent Schrödinger Equation
The time-dependent Schrödinger equation is

i\hbar \frac{d}{dt} |\psi(t)\rangle = \hat{H}(t)\, |\psi(t)\rangle,   (2.16)
where the Hamiltonian Ĥ can have an explicit time dependence. This differential
equation has the formal solution
|\psi(t)\rangle = \hat{U}(t_0; t)\, |\psi(t_0)\rangle   (2.17)
in terms of the propagator

\hat{U}(t_0; t) = \mathbb{1} - \frac{i}{\hbar}\int_{t_0}^{t} dt_1\, \hat{H}(t_1) - \frac{1}{\hbar^2}\int_{t_0}^{t} dt_1 \int_{t_0}^{t_1} dt_2\, \hat{H}(t_1)\hat{H}(t_2) + \frac{i}{\hbar^3}\int_{t_0}^{t} dt_1 \int_{t_0}^{t_1} dt_2 \int_{t_0}^{t_2} dt_3\, \hat{H}(t_1)\hat{H}(t_2)\hat{H}(t_3)
  + \frac{1}{\hbar^4}\int_{t_0}^{t} dt_1 \int_{t_0}^{t_1} dt_2 \int_{t_0}^{t_2} dt_3 \int_{t_0}^{t_3} dt_4\, \hat{H}(t_1)\hat{H}(t_2)\hat{H}(t_3)\hat{H}(t_4) + \cdots   (2.18)
that propagates any state from time t_0 to time t. An alternative form is given by the Magnus expansion6

\hat{U}(t_0; t) = \exp\Big[\sum_{k=1}^{\infty} \hat{\Omega}_k(t_0; t)\Big]   (2.19)
6 See https://en.wikipedia.org/wiki/Magnus_expansion.
with the contributions

\hat{\Omega}_1(t_0; t) = -\frac{i}{\hbar}\int_{t_0}^{t} dt_1\, \hat{H}(t_1)
\hat{\Omega}_2(t_0; t) = -\frac{1}{2\hbar^2}\int_{t_0}^{t} dt_1 \int_{t_0}^{t_1} dt_2\, [\hat{H}(t_1), \hat{H}(t_2)]
\hat{\Omega}_3(t_0; t) = \frac{i}{6\hbar^3}\int_{t_0}^{t} dt_1 \int_{t_0}^{t_1} dt_2 \int_{t_0}^{t_2} dt_3\, \big([\hat{H}(t_1), [\hat{H}(t_2), \hat{H}(t_3)]] + [\hat{H}(t_3), [\hat{H}(t_2), \hat{H}(t_1)]]\big)
\ldots   (2.20)

This expansion in terms of different-time commutators is often easier to evaluate than Eq. (2.18), especially when the contributions vanish for k > k_{max} (see Sect. 2.3.3 for the case k_{max} = 1). Even if higher-order contributions do not vanish entirely, they
the case kmax = 1). Even if higher-order contributions do not vanish entirely, they
(usually) decrease in importance much more rapidly with increasing k than those of
Eq. (2.18). Also, even if the Magnus expansion is artificially truncated (neglecting
higher-order terms), the quantum-mechanical evolution is still unitary; this is not the
case for Eq. (2.18).
Notice that the exponential in Eq. (2.19) has an operator or a matrix as its argument: in Mathematica this matrix exponentiation is done with the MatrixExp function. It does not calculate the exponential element-by-element, but instead calculates

e^{\hat{A}} = \sum_{n=0}^{\infty} \frac{\hat{A}^n}{n!}, \qquad e^{A} = \sum_{n=0}^{\infty} \frac{A^n}{n!}.   (2.21)
2.3.1 Time-Independent Basis
We express the quantum state again in terms of the chosen basis, which is assumed to
be time-independent. This leaves the time-dependence in the expansion coefficients,
\hat{H}(t) = \sum_{ij} H_{ij}(t)\, |i\rangle\langle j|, \qquad |\psi(t)\rangle = \sum_i \psi_i(t)\, |i\rangle.   (2.22)
Inserting these expressions into the time-dependent Schrödinger equation (2.16) gives

i\hbar \sum_i \dot{\psi}_i(t)\, |i\rangle = \Big[\sum_{jk} H_{jk}(t)\, |j\rangle\langle k|\Big] \cdot \Big[\sum_\ell \psi_\ell(t)\, |\ell\rangle\Big] = \sum_{jk} H_{jk}(t) \psi_k(t)\, |j\rangle.   (2.23)

Multiplying with \langle m| from the left:

i\hbar \dot{\psi}_m(t) = \sum_k H_{mk}(t) \psi_k(t)   (2.24)
or, in matrix notation,

i\hbar\, \dot{\vec{\psi}}(t) = H(t) \cdot \vec{\psi}(t).   (2.25)
Since the matrix H(t) is supposedly known, this equation represents a system of coupled complex differential equations for the vector \vec{\psi}(t), which can be solved on a computer.
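As a concrete illustration of solving Eq. (2.25) numerically, here is a hedged Python sketch (with \hbar = 1 and an illustrative two-level Hamiltonian, not one from the text) that integrates the coupled equations with a fixed-step Runge–Kutta scheme and checks the result against the known analytic solution:

```python
import numpy as np

# integrate i d(psi)/dt = H psi (units with hbar = 1) by classical RK4
def rk4_step(H, psi, dt):
    f = lambda p: -1j * (H @ p)
    k1 = f(psi)
    k2 = f(psi + 0.5 * dt * k1)
    k3 = f(psi + 0.5 * dt * k2)
    k4 = f(psi + dt * k3)
    return psi + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

omega = 1.0                                   # illustrative coupling strength
H = omega * np.array([[0, 1], [1, 0]], dtype=complex)
psi = np.array([1, 0], dtype=complex)         # start in the first basis state
dt, steps = 0.001, 1000                       # evolve to t = 1
for _ in range(steps):
    psi = rk4_step(H, psi, dt)
t = dt * steps
# analytic solution for this H: (cos(omega t), -i sin(omega t))
assert np.allclose(psi, [np.cos(omega * t), -1j * np.sin(omega * t)], atol=1e-6)
assert abs(np.linalg.norm(psi) - 1) < 1e-6    # evolution stays unitary
```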
2.3.2 Time-Dependent Basis: Interaction Picture

It can be advantageous to use a time-dependent basis. The most frequently used such
basis is given by the interaction picture of quantum mechanics, where the Hamiltonian
can be split into a time-independent principal part Ĥ0 and a small time-dependent
part \hat{H}_1:

\hat{H}(t) = \hat{H}_0 + \hat{H}_1(t).   (2.26)
Assuming that we can diagonalize \hat{H}_0, possibly numerically, such that the eigenfunctions satisfy \hat{H}_0|i\rangle = E_i|i\rangle, we propose the time-dependent basis

|i(t)\rangle = e^{-iE_i t/\hbar}\, |i\rangle.   (2.27)
If we express any quantum state in this basis as

|\psi(t)\rangle = \sum_i \psi_i(t)\, |i(t)\rangle = \sum_i \psi_i(t) e^{-iE_i t/\hbar}\, |i\rangle,   (2.28)
the time-dependent Schrödinger equation becomes

\sum_i \big[i\hbar \dot{\psi}_i(t) + E_i \psi_i(t)\big] e^{-iE_i t/\hbar}\, |i\rangle = \sum_j \psi_j(t) e^{-iE_j t/\hbar} E_j\, |j\rangle + \sum_j \psi_j(t) e^{-iE_j t/\hbar} \hat{H}_1(t)\, |j\rangle
\sum_i i\hbar \dot{\psi}_i(t) e^{-iE_i t/\hbar}\, |i\rangle = \sum_j \psi_j(t) e^{-iE_j t/\hbar} \hat{H}_1(t)\, |j\rangle   (2.29)

Multiply by \langle k| from the left:

\langle k| \sum_i i\hbar \dot{\psi}_i(t) e^{-iE_i t/\hbar}\, |i\rangle = \langle k| \sum_j \psi_j(t) e^{-iE_j t/\hbar} \hat{H}_1(t)\, |j\rangle
\sum_i i\hbar \dot{\psi}_i(t) e^{-iE_i t/\hbar} \underbrace{\langle k|i\rangle}_{=\delta_{ki}} = \sum_j \psi_j(t) e^{-iE_j t/\hbar} \langle k|\hat{H}_1(t)|j\rangle
i\hbar \dot{\psi}_k(t) = \sum_j \psi_j(t) e^{-i(E_j - E_k)t/\hbar} \langle k|\hat{H}_1(t)|j\rangle.   (2.30)
This is the same matrix/vector evolution expression as Eq. (2.25), except that here
the Hamiltonian matrix elements must be defined as

H_{ij}(t) = \langle i|\hat{H}_1(t)|j\rangle\, e^{-i(E_j - E_i)t/\hbar}.   (2.31)
We see immediately that if the interaction Hamiltonian vanishes [Ĥ1 (t) = 0], then
the expansion coefficients ψi (t) become time-independent, as expected since they are
the coefficients of the eigenfunctions of the time-independent Schrödinger equation.
When a quantum-mechanical system is composed of different parts that have
vastly different energy scales of their internal evolution Ĥ0 , then the use of Eq. (2.31)
can have great numerical advantages. It turns out that the relevant interaction terms H_{ij}(t) in the interaction picture will have relatively slowly evolving phases e^{-i(E_j - E_i)t/\hbar}, on a time scale given by relative energy differences and not by absolute energies; this makes it possible to solve the coupled differential equations of Eq. (2.25) numerically without using an absurdly small time step.
2.3.3 Special Case: [\hat{H}(t), \hat{H}(t')] = 0\ \forall (t, t')

If the Hamiltonian commutes with itself at different times, [\hat{H}(t), \hat{H}(t')] = 0\ \forall (t, t'), the propagator (2.19) of Eq. (2.16) can be simplified to

\hat{U}(t_0; t) = \exp\Big[-\frac{i}{\hbar}\int_{t_0}^{t} \hat{H}(s)\, ds\Big],   (2.32)
and the corresponding solution of Eq. (2.25) is

\vec{\psi}(t) = \exp\Big[-\frac{i}{\hbar}\int_{t_0}^{t} H(s)\, ds\Big] \cdot \vec{\psi}(t_0).   (2.33)
Again, these matrix exponentials are calculated with MatrixExp in Mathematica.
2.3.4 Special Case: Time-Independent Hamiltonian
In the special (but common) case where the Hamiltonian is time-independent, the
integral in Eq. (2.33) can be evaluated immediately, and the solution is

\vec{\psi}(t) = \exp\Big[-\frac{i H (t - t_0)}{\hbar}\Big] \cdot \vec{\psi}(t_0).   (2.34)
If we have a specific Hamiltonian matrix H defined, for example the matrix of Sect. 1.10.4, we can calculate the propagator U(\Delta t) = \exp[-i H \Delta t/\hbar] for \Delta t = t - t_0 with
U[\[CapitalDelta]t_] = MatrixExp[-I*H*\[CapitalDelta]t/\[HBar]]
The resulting expression for U[\[CapitalDelta]t] will in general be very long, and slow to
compute. A more efficient definition is to matrix-exponentiate a numerical matrix
for specific values of the propagation interval \[CapitalDelta]t, using a delayed assignment:
U[\[CapitalDelta]t_?NumericQ] := MatrixExp[-I*H*N[\[CapitalDelta]t]/\[HBar]]
2.3.5 Exercises
Q2.4 Demonstrate that the propagator (2.32) gives a quantum state (2.17) that sat-
isfies Eq. (2.16).
Q2.5 Calculate the propagator of the Hamiltonian of Q2.3.
Q2.6 After In[234] and In[235], check ?U. Which definition of U comes first?
Why?
2.4 Basis Construction
In principle, the choice of basis set {|i}i does not influence the way a computer pro-
gram like Mathematica solves a quantum-mechanical problem. In practice, however,
we always need a constructive way to find some basis for a given quantum-mechanical
problem. A basis that takes the system’s Hamiltonian into account may give a compu-
tationally simpler description; but in complicated systems it is often more important
to find any way of constructing a usable basis set than finding the perfect one.
2.4.1 Description of a Single Degree of Freedom
When we describe a single quantum-mechanical degree of freedom, it is often possible to deduce a useful basis set from knowledge of the Hilbert space itself. This is what we will be doing in Chap. 3 for spin systems, where the well-known Dicke basis \{|S, M_S\rangle\}_{M_S=-S}^{S} turns out to be very useful.
For more complicated degrees of freedom, we can find inspiration for a basis
choice from an associated Hamiltonian. Such Hamiltonians describing a single
degree of freedom are often so simple that they can be diagonalized by hand. If this
is not the case, real-world Hamiltonians Ĥ can often be decomposed like Eq. (2.26)
into a “simple” part Ĥ0 that is time-independent and can be diagonalized easily,
and a “difficult” part Ĥ1 that usually contains complicated interactions and/or time-
dependent terms but is of smaller magnitude. A natural choice of basis set is the set
of eigenstates of Ĥ0 , or at least those eigenstates below a certain cutoff energy since
they will be optimally suited to describe the complete low-energy behavior of the
degree of freedom in question. This latter point is especially important for infinite-
dimensional systems (Chap. 4), where any computer representation will necessarily
truncate the dimensionality, as discussed in Sect. 2.1.1.
Examples of Basis Sets for Single Degrees of Freedom:

• spin degree of freedom: Dicke states |S, M_S\rangle (see Chap. 3)
• translational degree of freedom: square-well eigenstates, harmonic oscillator eigenstates (see Chap. 4)
• rotational degree of freedom: spherical harmonics
• atomic system: hydrogen-like orbitals
• translation-invariant system: periodic plane waves
• periodic system (crystal): periodic plane waves on the reciprocal lattice.
2.4.2 Description of Coupled Degrees of Freedom
A broad range of quantum-mechanical systems of interest are governed by Hamiltonians of the form

\hat{H}(t) = \sum_{k=1}^{N} \hat{H}^{(k)}(t) + \hat{H}_{\text{int}}(t),   (2.35)

where N individual degrees of freedom are governed by their individual Hamiltonians \hat{H}^{(k)}(t), while their interactions are described by \hat{H}_{\text{int}}(t). This is a situation we
will encounter repeatedly as we construct more complicated quantum-mechanical
problems from simpler parts. A few simple examples are:
• A set of N interacting particles: the Hamiltonians Ĥ(k) describe the individual
particles, while Ĥint describes their interactions (see Sect. 3.4).
• A single particle moving in three spatial degrees of freedom: the three Hamiltonians \hat{H}^{(x)} = -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2}, \hat{H}^{(y)} = -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial y^2}, \hat{H}^{(z)} = -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial z^2} describe the kinetic energy in the three directions, while \hat{H}_{\text{int}} contains the potential energy, which usually couples these three degrees of freedom (see Sect. 4.4).
• A single particle with internal (spin) and external (motional) degrees of freedom,
which are coupled through a state-dependent potential in Ĥint (see Chap. 5).
The existence of individual Hamiltonians Ĥ(k) assumes that the Hilbert space of the
complete system has a tensor-product structure
V = V^{(1)} \otimes V^{(2)} \otimes \cdots \otimes V^{(N)},   (2.36)

where each Hamiltonian Ĥ(k) acts only in a single component space,

\hat{H}^{(k)} = \mathbb{1}^{(1)} \otimes \mathbb{1}^{(2)} \otimes \cdots \otimes \mathbb{1}^{(k-1)} \otimes \hat{h}^{(k)} \otimes \mathbb{1}^{(k+1)} \otimes \cdots \otimes \mathbb{1}^{(N)}.   (2.37)

Further, if we are able to construct bases \{|i_k\rangle^{(k)}\}_{i_k=1}^{n_k} for all of the component Hilbert spaces V^{(k)}, as in Sect. 2.4.1, then we can construct a basis for the full Hilbert space V by taking all possible tensor products of basis functions:

|i_1, i_2, \ldots, i_N\rangle = |i_1\rangle^{(1)} \otimes |i_2\rangle^{(2)} \otimes \cdots \otimes |i_N\rangle^{(N)}.   (2.38)

This basis will have \prod_{k=1}^{N} n_k elements, which can easily become a very large number for composite systems.
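The ordering of the tensor-product basis of Eq. (2.38), with the last index varying fastest, is exactly the ordering produced by Kronecker products. A small NumPy sketch (dimensions and component vectors chosen arbitrarily for illustration):

```python
import numpy as np

# The flat tensor-product basis of Eq. (2.38): component k contributes a
# factor n_k to the total dimension, and the last index varies fastest.
d = [2, 3, 4]                                      # illustrative dimensions n_k
v = [np.arange(1, n + 1, dtype=float) for n in d]  # arbitrary component vectors
full = np.kron(np.kron(v[0], v[1]), v[2])
assert full.shape == (2 * 3 * 4,)                  # prod_k n_k elements
# basis element (i1, i2, i3) sits at flat position (i1*n2 + i2)*n3 + i3
i1, i2, i3 = 1, 2, 3
flat = (i1 * d[1] + i2) * d[2] + i3
assert full[flat] == v[0][i1] * v[1][i2] * v[2][i3]
```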
Quantum States
A product state of the complete system

|\psi\rangle = |\psi_1\rangle^{(1)} \otimes |\psi_2\rangle^{(2)} \otimes \cdots \otimes |\psi_N\rangle^{(N)}   (2.39)

can be described in the following way. First, each single-particle state is decomposed in its own basis as in Eq. (2.4),

|\psi_k\rangle^{(k)} = \sum_{i_k=1}^{n_k} \psi_{i_k}^{(k)}\, |i_k\rangle^{(k)}.   (2.40)
Inserting these expansions into Eq. (2.39) gives the expansion into the basis functions (2.38) of the full system,

|\psi\rangle = \Big[\sum_{i_1=1}^{n_1} \psi_{i_1}^{(1)} |i_1\rangle^{(1)}\Big] \otimes \Big[\sum_{i_2=1}^{n_2} \psi_{i_2}^{(2)} |i_2\rangle^{(2)}\Big] \otimes \cdots \otimes \Big[\sum_{i_N=1}^{n_N} \psi_{i_N}^{(N)} |i_N\rangle^{(N)}\Big]
  = \sum_{i_1=1}^{n_1} \sum_{i_2=1}^{n_2} \cdots \sum_{i_N=1}^{n_N} \psi_{i_1}^{(1)} \psi_{i_2}^{(2)} \cdots \psi_{i_N}^{(N)}\, |i_1, i_2, \ldots, i_N\rangle   (2.41)
In Mathematica, such a state tensor product can be calculated as follows. For example, assume that \[Psi]1 is a vector containing the expansion of |\psi_1\rangle^{(1)} in its basis, and similarly for \[Psi]2 and \[Psi]3. The vector \[Psi] of expansion coefficients of the full state |\psi\rangle = |\psi_1\rangle^{(1)} \otimes |\psi_2\rangle^{(2)} \otimes |\psi_3\rangle^{(3)} is calculated with
\[Psi] = Flatten[KroneckerProduct[\[Psi]1, \[Psi]2, \[Psi]3]]
See Q2.10 for a numerical example.
More generally, any state can be written as

|\psi\rangle = \sum_{i_1=1}^{n_1} \sum_{i_2=1}^{n_2} \cdots \sum_{i_N=1}^{n_N} \psi_{i_1, i_2, \ldots, i_N}\, |i_1, i_2, \ldots, i_N\rangle,   (2.42)

of which Eq. (2.41) is a special case with \psi_{i_1, i_2, \ldots, i_N} = \psi_{i_1}^{(1)} \psi_{i_2}^{(2)} \cdots \psi_{i_N}^{(N)}.
Operators
If the Hilbert space has the tensor-product structure of Eq. (2.36), then the operators acting on this full space are often given as tensor products as well,

\hat{A} = \hat{a}_1^{(1)} \otimes \hat{a}_2^{(2)} \otimes \cdots \otimes \hat{a}_N^{(N)},   (2.43)
or as a sum over such products. If every single-particle operator is decomposed in its own basis as in Eq. (2.2),

\hat{a}_k^{(k)} = \sum_{i_k=1}^{n_k} \sum_{j_k=1}^{n_k} a_{i_k, j_k}^{(k)}\, |i_k\rangle^{(k)}\langle j_k|^{(k)},   (2.44)
inserting these expressions into Eq. (2.43) gives the expansion into the basis functions (2.38) of the full system,

\hat{A} = \Big[\sum_{i_1, j_1=1}^{n_1} a_{i_1, j_1}^{(1)} |i_1\rangle^{(1)}\langle j_1|^{(1)}\Big] \otimes \Big[\sum_{i_2, j_2=1}^{n_2} a_{i_2, j_2}^{(2)} |i_2\rangle^{(2)}\langle j_2|^{(2)}\Big] \otimes \cdots \otimes \Big[\sum_{i_N, j_N=1}^{n_N} a_{i_N, j_N}^{(N)} |i_N\rangle^{(N)}\langle j_N|^{(N)}\Big]
  = \sum_{i_1, j_1=1}^{n_1} \sum_{i_2, j_2=1}^{n_2} \cdots \sum_{i_N, j_N=1}^{n_N} a_{i_1, j_1}^{(1)} a_{i_2, j_2}^{(2)} \cdots a_{i_N, j_N}^{(N)}\, |i_1, i_2, \ldots, i_N\rangle\langle j_1, j_2, \ldots, j_N|.   (2.45)
In Mathematica, such an operator tensor product can be calculated similarly to In[236] above. For example, assume that a1 is a matrix containing the expansion of \hat{a}_1^{(1)} in its basis, and similarly for a2 and a3. The matrix A of expansion coefficients of the full operator \hat{A} = \hat{a}_1^{(1)} \otimes \hat{a}_2^{(2)} \otimes \hat{a}_3^{(3)} is calculated with
A = KroneckerProduct[a1, a2, a3]
Often we need to construct operators which act only on one of the component
spaces, as in Eq. (2.37). For example, in a 3-composite system the subsystem Hamil-
tonians ĥ (1) , ĥ (2) , and ĥ (3) are first expanded to the full Hilbert space,
H1 = KroneckerProduct[h1,
IdentityMatrix[Dimensions[h2]],
IdentityMatrix[Dimensions[h3]]];
H2 = KroneckerProduct[IdentityMatrix[Dimensions[h1]],
h2,
IdentityMatrix[Dimensions[h3]]];
H3 = KroneckerProduct[IdentityMatrix[Dimensions[h1]],
IdentityMatrix[Dimensions[h2]],
h3];
where IdentityMatrix[Dimensions[h1]] generates a unit matrix of size equal to that of h1. In this way, the matrices H1, H2, H3 are of equal size and can
be added together, even if h1, h2, h3 all have different sizes (expressed in Hilbert
spaces of different dimensions):
H = H1 + H2 + H3;
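The same embedding of Eq. (2.37) can be sketched in NumPy; the dimensions and the subsystem operator below are illustrative, not taken from the text. The embedded operator acts nontrivially only on its own subsystem, so its spectrum is that of the small operator with multiplicity equal to the product of the other dimensions:

```python
import numpy as np

# embed a single-subsystem operator h2 into the full Hilbert space,
# as in Eq. (2.37): H2 = 1^(1) (x) h2 (x) 1^(3)
d1, d3 = 3, 4                              # illustrative dimensions
h2 = np.array([[0., 1.], [1., 0.]])        # illustrative operator, dim 2
H2 = np.kron(np.kron(np.eye(d1), h2), np.eye(d3))
assert H2.shape == (d1 * 2 * d3, d1 * 2 * d3)
# spectrum of h2 (eigenvalues -1 and +1), each repeated d1*d3 times
evals = np.sort(np.linalg.eigvalsh(H2))
assert np.allclose(evals[:d1 * d3], -1)
assert np.allclose(evals[d1 * d3:], 1)
```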
More generally, any operator can be written as

\hat{A} = \sum_{i_1, j_1=1}^{n_1} \sum_{i_2, j_2=1}^{n_2} \cdots \sum_{i_N, j_N=1}^{n_N} a_{i_1, j_1, i_2, j_2, \ldots, i_N, j_N}\, |i_1, i_2, \ldots, i_N\rangle\langle j_1, j_2, \ldots, j_N|,   (2.46)

of which Eq. (2.45) is a special case with a_{i_1, j_1, i_2, j_2, \ldots, i_N, j_N} = a_{i_1, j_1}^{(1)} a_{i_2, j_2}^{(2)} \cdots a_{i_N, j_N}^{(N)}.
2.4.3 Reduced Density Matrices [Supplementary Material 1]
In this section we calculate reduced density matrices by partial tracing. We start with
the most general tripartite case, and then specialize to the more common bipartite
case.
Assume that our quantum-mechanical system is composed of three parts A, B, C,
and that its Hilbert space is a tensor product of the three associated Hilbert spaces
with dimensions dA , dB , dC : V = V (A) ⊗ V (B) ⊗ V (C) . Similar to Eq. (2.46), any
state of this system can be written as a density matrix
\hat{\rho}_{ABC} = \sum_{i,i'=1}^{d_A} \sum_{j,j'=1}^{d_B} \sum_{k,k'=1}^{d_C} \rho_{i,j,k,i',j',k'}\, |i_A, j_B, k_C\rangle\langle i'_A, j'_B, k'_C|,   (2.47)

where we use the basis states |i_A, j_B, k_C\rangle = |i\rangle^{(A)} \otimes |j\rangle^{(B)} \otimes |k\rangle^{(C)} defined in terms of the three basis sets of the three component Hilbert spaces.
We calculate a reduced density matrix ρ̂AC = Tr B ρ̂ABC , which describes what
happens to our knowledge of the subsystems A and C when we forget about subsystem
B. For example, we could be studying a system of three particles, and take an interest
in the state of particles A and C after we have lost particle B. This reduced density
matrix is defined as a partial trace,
\hat{\rho}_{AC} = \sum_{j''=1}^{d_B} \langle j''_B|\hat{\rho}_{ABC}|j''_B\rangle = \sum_{j''=1}^{d_B} \langle j''_B| \Big[\sum_{i,i'=1}^{d_A} \sum_{j,j'=1}^{d_B} \sum_{k,k'=1}^{d_C} \rho_{i,j,k,i',j',k'}\, |i_A, j_B, k_C\rangle\langle i'_A, j'_B, k'_C|\Big] |j''_B\rangle
  = \sum_{j''=1}^{d_B} \sum_{i,i'=1}^{d_A} \sum_{j,j'=1}^{d_B} \sum_{k,k'=1}^{d_C} \rho_{i,j,k,i',j',k'}\, \big[\delta_{j'',j}\, |i_A, k_C\rangle\big] \big[\delta_{j'',j'}\, \langle i'_A, k'_C|\big]
  = \sum_{i,i'=1}^{d_A} \sum_{k,k'=1}^{d_C} \Big[\sum_{j=1}^{d_B} \rho_{i,j,k,i',j,k'}\Big] |i_A, k_C\rangle\langle i'_A, k'_C|,   (2.48)
which makes no reference to subsystem B. It only describes the joint system AC that
is left after forgetting about subsystem B.
In Mathematica, we mostly use flattened basis sets, that is, our basis set for the
joint Hilbert space of subsystems A, B, C is a flat list of length d = dA dB dC :
\{|1_A, 1_B, 1_C\rangle, |1_A, 1_B, 2_C\rangle, \ldots, |1_A, 1_B, d_C\rangle, |1_A, 2_B, 1_C\rangle, |1_A, 2_B, 2_C\rangle, \ldots, |1_A, 2_B, d_C\rangle, \ldots, |d_A, d_B, d_C\rangle\}.   (2.49)
In Sect. 1.10.5 we have seen how lists and tensors can be re-shaped. As we will
see below, these tools are used to switch between representations involving indices
(i, j, k) (i.e., lists with three indices, rank-three tensors) corresponding to Eq. (2.47),
and lists involving a single flattened-out index corresponding more to Eq. (2.49).
In practical calculations, any density matrix \[Rho]ABC of the joint system is given
as a d × d matrix whose element (u, v) is the prefactor of the contribution |uv|
with the indices u and v addressing elements in the flat list of Eq. (2.49). In order to
calculate a reduced density matrix, we first reshape this d × d density matrix \[Rho]ABC into a rank-six tensor R with dimensions d_A × d_B × d_C × d_A × d_B × d_C, and with elements \rho_{i,j,k,i',j',k'} of Eq. (2.47):
R = ArrayReshape[\[Rho]ABC, {dA,dB,dC,dA,dB,dC}]
Next, we contract indices 2 and 5 of R in order to do the partial trace over subsystem
B, as is done in Eq. (2.48) (effectively setting j = j and summing over j). We find
a rank-4 tensor S with dimensions dA × dC × dA × dC :
S = TensorContract[R, {2,5}]
Finally, we flatten out this tensor again (simultaneously combining indices 1&2
and 3&4) to find the dA dC × dA dC reduced density matrix \[Rho]AC:
\[Rho]AC = Flatten[S, {{1,2}, {3,4}}]
We assemble all of these steps into a generally usable function:
rdm[\[Rho]ABC_?MatrixQ, {dA_Integer /; dA >= 1,
dB_Integer /; dB >= 1,
dC_Integer /; dC >= 1}] /;
Dimensions[\[Rho]ABC] == {dA*dB*dC, dA*dB*dC} :=
Flatten[TensorContract[ArrayReshape[\[Rho]ABC, {dA,dB,dC,dA,dB,dC}], {2,5}],
{{1,2}, {3,4}}]
When our system is in a pure state, \hat{\rho}_{ABC} = |\psi\rangle\langle\psi|, this procedure can be simplified greatly. This is particularly important for large system dimensions, where calculating the full density matrix \hat{\rho}_{ABC} may be impossible due to memory constraints. For this, we assume that |\psi\rangle = \sum_{i=1}^{d_A} \sum_{j=1}^{d_B} \sum_{k=1}^{d_C} \psi_{i,j,k}\, |i_A, j_B, k_C\rangle, and therefore \rho_{i,j,k,i',j',k'} = \psi_{i,j,k} \psi^*_{i',j',k'}. Again, in Mathematica the coefficients of a state vector \[Psi]ABC are a flat list referring to the elements of the flat basis of Eq. (2.49), and so we start by constructing a rank-3 tensor P with dimensions d_A × d_B × d_C, whose elements are exactly the \psi_{i,j,k}, similar to In[242]:
P = ArrayReshape[\[Psi]ABC, {dA,dB,dC}]
We transpose this rank-three tensor into a d_A × d_C × d_B tensor P1 and a d_B × d_A × d_C tensor P2 by changing the order of the indices:
P1 = Transpose[P, {1, 3, 2}]
P2 = Transpose[P, {2, 1, 3}]
Now we can contract the index jB by a dot product, to find a rank-4 tensor Q with
dimensions dA × dC × dA × dC :
Q = P1 . Conjugate[P2]
Finally we flatten Q into the d_A d_C × d_A d_C reduced density matrix \[Rho]AC by combining indices 1&2 and 3&4:
\[Rho]AC = Flatten[Q, {{1,2}, {3,4}}]
We assemble all of these steps into a generally usable function that extends the
definition of In[245]:
rdm[\[Psi]ABC_?VectorQ, {dA_Integer /; dA >= 1,
dB_Integer /; dB >= 1,
dC_Integer /; dC >= 1}] /;
Length[\[Psi]ABC] == dA*dB*dC :=
With[{P = ArrayReshape[\[Psi]ABC, {dA,dB,dC}]},
Flatten[Transpose[P, {1,3,2}].ConjugateTranspose[P], {{1,2}, {3,4}}]]
Notice that we have merged the transposition of In[248] and the complex-
conjugation of In[249] into a single call of the ConjugateTranspose func-
tion.
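The reshape-and-contract strategy used above translates directly into NumPy. The following sketch (function name and dimensions are illustrative) computes the same pure-state reduced density matrix and checks its basic properties:

```python
import numpy as np

def rdm_pure(psi, dA, dB, dC):
    # reshape the flat state vector into a (dA, dB, dC) tensor and
    # contract the middle (B) index, as in Eq. (2.48) for a pure state
    p = psi.reshape(dA, dB, dC)
    q = np.einsum('ajc,bjd->acbd', p, p.conj())   # sum over j_B
    return q.reshape(dA * dC, dA * dC)

dA, dB, dC = 2, 3, 2
rng = np.random.default_rng(0)
psi = rng.normal(size=dA * dB * dC) + 1j * rng.normal(size=dA * dB * dC)
psi /= np.linalg.norm(psi)                        # normalized random state
rho = rdm_pure(psi, dA, dB, dC)
assert np.allclose(rho, rho.conj().T)             # Hermitian
assert np.isclose(np.trace(rho).real, 1.0)        # unit trace
assert np.linalg.eigvalsh(rho).min() > -1e-12     # positive semidefinite
```

Any valid reduced density matrix must pass these three checks, which makes them a useful sanity test for partial-trace code in any language.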
Bipartite Systems
Consider now the more common case of a bipartite system composed of only two
subsystems A and B. We can still use the definitions developed above for tripartite
(ABC) structures by introducing a trivial third subsystem with dimension dC = 1.
This trivial subsystem will not change anything since it must always be in its one
and only possible state. Therefore, given a density matrix \[Rho]AB of the joint system
AB, we calculate the reduced density matrices of subsystems A and B with
\[Rho]A = rdm[\[Rho]AB, {dA,dB,1}];
\[Rho]B = rdm[\[Rho]AB, {1,dA,dB}];
respectively, since it is always the middle subsystem of a given list of three subsystems
that is eliminated through partial tracing. In typical Mathematica fashion, we define
a traceout function that traces out the first d dimensions if d > 0 and the last d
dimensions if d < 0:
traceout[\[Rho]_?MatrixQ, d_Integer /; d >= 1] /;
Length[\[Rho]] == Length[Transpose[\[Rho]]] && Divisible[Length[\[Rho]], d] :=
rdm[\[Rho], {1, d, Length[\[Rho]]/d}]
traceout[\[Rho]_?MatrixQ, d_Integer /; d <= -1] /;
Length[\[Rho]] == Length[Transpose[\[Rho]]] && Divisible[Length[\[Rho]], -d] :=
rdm[\[Rho], {Length[\[Rho]]/(-d), -d, 1}]
traceout[\[Psi]_?VectorQ, d_Integer /; d >= 1] /; Divisible[Length[\[Psi]], d] :=
rdm[\[Psi], {1, d, Length[\[Psi]]/d}]
traceout[\[Psi]_?VectorQ, d_Integer /; d <= -1] /; Divisible[Length[\[Psi]], -d] :=
rdm[\[Psi], {Length[\[Psi]]/(-d), -d, 1}]
2.4.4 Exercises
Q2.7 Two particles of mass m are moving in a three-dimensional harmonic potential V(r) = \frac{1}{2} m \omega^2 r^2 with r = \sqrt{x^2 + y^2 + z^2}, and interacting via s-wave scattering V_{\text{int}} = g \delta^3(\vec{r}_1 - \vec{r}_2).
1. Write down the Hamiltonian of this system.
2. Propose a basis set in which we can describe the quantum mechanics of this
system.
3. Calculate the matrix elements of the Hamiltonian in this basis set.
Q2.8 Calculate \[Psi] in In[236] without using KroneckerProduct, but using
the Table command instead.
Q2.9 Calculate A in In[237] without using KroneckerProduct, but using
the Table command instead.
Q2.10 Given two spin-1/2 particles in states

|\psi\rangle^{(1)} = 0.8\,|{\uparrow}\rangle - 0.6\,|{\downarrow}\rangle, \qquad |\psi\rangle^{(2)} = 0.6i\,|{\uparrow}\rangle + 0.8\,|{\downarrow}\rangle,   (2.50)

use the KroneckerProduct function to calculate the joint state |\psi\rangle = |\psi\rangle^{(1)} \otimes |\psi\rangle^{(2)}, and compare the result to a manual calculation. In which order do the coefficients appear in the result of KroneckerProduct?
Q2.11 For the state of Eq. (2.50), calculate the reduced density matrices \rho^{(1)} and \rho^{(2)} by tracing out the other subsystem. Compare them to the density matrices |\psi\rangle^{(1)}\langle\psi|^{(1)} and |\psi\rangle^{(2)}\langle\psi|^{(2)}. What do you notice?

See also Q3.19 and Q3.20.


Chapter 3
Spin and Angular Momentum

In this chapter we put together everything we have studied so far—Mathematica, quantum mechanics, computational bases, units—to study simple quantum systems. We start our explorations of quantum mechanics with the description of angular momentum. The reason for this choice is that, in contrast to the mechanically more intuitive linear motion (Chap. 4), rotational motion is described with finite-dimensional Hilbert spaces and thus lends itself as a relatively simple starting point. As applications we look at the hyperfine structure of alkali atoms, lattice spin models, and quantum circuits.

Electronic supplementary material The online version of this chapter (https://doi.org/10.1007/978-981-13-7588-0_3) contains supplementary material, which is available to authorized users.

© Springer Nature Singapore Pte Ltd. 2020
R. Schmied, Using Mathematica for Quantum Mechanics,
https://doi.org/10.1007/978-981-13-7588-0_3

3.1 Quantum-Mechanical Spin and Angular Momentum Operators [Supplementary Material 1]

A classical rotational motion is described by its angular momentum, which is a three-dimensional pseudovector¹ whose direction indicates the rotation axis and whose length gives the rotational momentum. For an isolated system, the angular momentum is conserved and is thus very useful in the description of the system's state.
In quantum mechanics, angular momentum is equally described by a three-dimensional pseudovector operator $\hat{\vec{S}}$, with operator elements (in Cartesian coordinates) $\hat{\vec{S}} = (\hat{S}_x, \hat{S}_y, \hat{S}_z)$. The joint eigenstates of the squared angular momentum magnitude $\|\hat{\vec{S}}\|^2 = \hat{S}^2 = \hat{S}_x^2 + \hat{S}_y^2 + \hat{S}_z^2$ and of the z-component $\hat{S}_z$ are called the Dicke states $|S, M\rangle$, and satisfy

$$\hat{S}^2 |S,M\rangle = S(S+1)\,|S,M\rangle \tag{3.1a}$$
$$\hat{S}_z |S,M\rangle = M\,|S,M\rangle \tag{3.1b}$$

For every integer or half-integer value of the angular momentum $S \in \{0, \frac{1}{2}, 1, \frac{3}{2}, 2, \ldots\}$, there is a set of $2S+1$ Dicke states $|S,M\rangle$ with $M \in \{-S, -S+1, \ldots, S-1, S\}$ that form a basis for the description of the rotation axis orientation. These states also satisfy the following relationships with respect to the x- and y-components of the angular momentum:

$$\hat{S}_+ |S,M\rangle = \sqrt{S(S+1) - M(M+1)}\,|S,M+1\rangle \quad \text{raising operator} \tag{3.2a}$$
$$\hat{S}_- |S,M\rangle = \sqrt{S(S+1) - M(M-1)}\,|S,M-1\rangle \quad \text{lowering operator} \tag{3.2b}$$
$$\hat{S}_\pm = \hat{S}_x \pm i\hat{S}_y \quad \text{Cartesian components} \tag{3.2c}$$

As you know, quantum mechanics is not limited to spins or angular momenta of length S = 1/2.
In Mathematica we represent these operators in the Dicke basis as follows, with the elements of the basis set ordered with decreasing projection quantum number M:
SpinQ[S_] := IntegerQ[2S] && S>=0
splus[0] = {{0}} //SparseArray;
splus[S_?SpinQ] := splus[S] =
SparseArray[Band[{1,2}] -> Table[Sqrt[S(S+1)-M(M+1)],
{M,S-1,-S,-1}], {2S+1,2S+1}]
sminus[S_?SpinQ] := Transpose[splus[S]]
sx[S_?SpinQ] := sx[S] = (splus[S]+sminus[S])/2
sy[S_?SpinQ] := sy[S] = (splus[S]-sminus[S])/(2I)
sz[S_?SpinQ] := sz[S] = SparseArray[Band[{1,1}]->Range[S,-S,-1], {2S+1,2S+1}]
id[S_?SpinQ] := id[S] = IdentityMatrix[2S+1, SparseArray]

¹ See https://en.wikipedia.org/wiki/Pseudovector.

• Notice that we have defined all these matrix representations as sparse matrices
(see Sect. 1.10.3), which will make larger calculations much more efficient later
on. Further, all definitions are memoizing (see Sect. 1.6.3) to reduce execution
time when they are used repeatedly.
• The function SpinQ[S] yields True only if S is a nonnegative half-integer
value and can therefore represent a physically valid spin. In general, functions
ending in ...Q are questions on the character of an argument (see Sect. 1.6.4).
• The operator Ŝ+ , defined with splus[S], contains only one off-diagonal band
of non-zero values. The SparseArray matrix constructor allows building such
banded matrices by simply specifying the starting point of the band and a vector
with the elements of the nonzero band.
• The operator Ŝz , defined with sz[S], shows you the ordering of the basis ele-
ments since it has the projection quantum numbers on the diagonal.
• The last operator id[S] is the unit operator operating on a spin of length
S, and will be used below for tensor-product definitions. Note that the
IdentityMatrix function usually returns a full matrix, which is not suitable
for large-scale calculations. By giving it a SparseArray option, it returns a
sparse identity matrix of desired size.
• All these matrices can be displayed with, for example,
sx[3/2] //Normal
{{0, Sqrt[3]/2, 0, 0},
{Sqrt[3]/2, 0, 1, 0},
{0, 1, 0, Sqrt[3]/2},
{0, 0, Sqrt[3]/2, 0}}

or, for a more traditional view,


sx[3/2] //MatrixForm
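If you want to sanity-check these definitions outside of Mathematica, the banded construction is easy to transcribe. The following Python sketch (dense NumPy arrays, an independent illustration only) reproduces the $\hat{S}_+$ band structure and the sx[3/2] matrix shown above:

```python
import numpy as np

def splus(S):
    """Raising operator S+ in the Dicke basis |S,S>, |S,S-1>, ..., |S,-S>."""
    M = np.arange(S - 1, -S - 1, -1)          # M values of the source states
    v = np.sqrt(S * (S + 1) - M * (M + 1))    # band above the diagonal
    return np.diag(v, k=1)

def sx(S): return (splus(S) + splus(S).T) / 2
def sy(S): return (splus(S) - splus(S).T) / (2j)
def sz(S): return np.diag(np.arange(S, -S - 1, -1))

# S=3/2 example, to compare against the sx[3/2] output above:
print(np.round(sx(1.5), 3))
```

The off-diagonal entries $\sqrt{3}/2$, $1$, $\sqrt{3}/2$ match the Mathematica result, and the operators satisfy the angular-momentum commutation relations of Q3.2.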

3.1.1 Exercises

Q3.1 Verify that for S = 1/2 the above Mathematica definitions give the Pauli matrices: $\hat{S}_i = \frac{1}{2}\hat{\sigma}_i$ for $i = x, y, z$.
Q3.2 Verify in Mathematica that for given integer or half-integer S, the three operators (matrices) $\hat{\vec{S}} = \{\hat{S}_x, \hat{S}_y, \hat{S}_z\}$ behave like a quantum-mechanical pseudovector of length $\|\hat{\vec{S}}\| = \sqrt{S(S+1)}$:
1. Show that $[\hat{S}_x, \hat{S}_y] = i\hat{S}_z$, $[\hat{S}_y, \hat{S}_z] = i\hat{S}_x$, and $[\hat{S}_z, \hat{S}_x] = i\hat{S}_y$.
2. Show that $\hat{S}_x^2 + \hat{S}_y^2 + \hat{S}_z^2 = S(S+1)\mathbb{1}$.
3. What is the largest value of S for which you can do these verifications within one minute (each) on your computer? Hint: use the Timing function.
Q3.3 The operators $\hat{S}_{x,y,z}$ are the generators of rotations: a rotation by an angle α around the axis given by a normalized vector $\vec{n}$ is done with the operator

$\hat{R}_{\vec{n}}(\alpha) = \exp(-i\alpha\, \vec{n} \cdot \hat{\vec{S}})$. Set $\vec{n} = \{\sin(\vartheta)\cos(\varphi), \sin(\vartheta)\sin(\varphi), \cos(\vartheta)\}$ and calculate the operator $\hat{R}_{\vec{n}}(\alpha)$ explicitly for S = 0, S = 1/2, and S = 1. Check that for α = 0 you find the unit operator.

3.2 Spin-1/2 Electron in a DC Magnetic Field [Supplementary Material 2]

As a first example we look at a single spin S = 1/2. We use the basis containing the two states $|{\uparrow}\rangle = |\frac{1}{2}, \frac{1}{2}\rangle$ and $|{\downarrow}\rangle = |\frac{1}{2}, -\frac{1}{2}\rangle$, which we know to be eigenstates of the operators $\hat{S}^2$ and $\hat{S}_z$. The matrix expressions of the operators relevant for this system are given by the Pauli matrices divided by two,

$$S_x = \frac{1}{2}\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} = \frac{1}{2}\sigma_x \qquad S_y = \frac{1}{2}\begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix} = \frac{1}{2}\sigma_y \qquad S_z = \frac{1}{2}\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} = \frac{1}{2}\sigma_z \tag{3.3}$$

In Mathematica we enter these as


Sx = sx[1/2]; Sy = sy[1/2]; Sz = sz[1/2];

using the general definitions of angular momentum operators given in Sect. 3.1.
Alternatively, we can write
{Sx,Sy,Sz} = (1/2) * Table[PauliMatrix[i], {i,1,3}];

As a Hamiltonian we use the coupling of this electron spin to an external magnetic field, $\hat{H} = -\hat{\vec{\mu}} \cdot \vec{B}$. The magnetic moment of the electron is $\hat{\vec{\mu}} = \mu_B g_e \hat{\vec{S}}$ in terms of its spin $\hat{\vec{S}}$, the Bohr magneton $\mu_B = 9.274\,009\,68(20) \times 10^{-24}\,\mathrm{J/T}$, and the electron's g-factor $g_e = -2.002\,319\,304\,362\,2(15)$.² The Hamiltonian is therefore

$$\hat{H} = -\mu_B g_e (\hat{S}_x B_x + \hat{S}_y B_y + \hat{S}_z B_z). \tag{3.4}$$

In our chosen matrix representation this Hamiltonian is

$$H = -\mu_B g_e (S_x B_x + S_y B_y + S_z B_z) = -\frac{1}{2}\mu_B g_e \begin{pmatrix} B_z & B_x - iB_y \\ B_x + iB_y & -B_z \end{pmatrix}. \tag{3.5}$$

² Notice that the magnetic moment of the electron is anti-parallel to its spin ($g_e < 0$). The reason for this is the electron's negative electric charge. When the electron spin is parallel to the magnetic field, the electron's energy is higher than when they are anti-parallel.

In order to implement this Hamiltonian, we first define a system of units. Here we express magnetic field strengths in Gauss and energies in MHz times Planck's constant (it is common to express energies in units of frequency, where the conversion is sometimes implicitly done via Planck's constant):
MagneticFieldUnit = Quantity["Gausses"];
EnergyUnit = Quantity["PlanckConstant"]*Quantity["MHz"] //UnitConvert;

In this unit system, the Bohr magneton is approximately 1.4 MHz/G:


\[Mu]B = Quantity["BohrMagneton"]/(EnergyUnit/MagneticFieldUnit) //UnitConvert
1.3996245

We define the electron’s g-factor with


ge = UnitConvert["ElectronGFactor"]
-2.00231930436

The Hamiltonian of Eq. (3.4) is then


H[Bx_, By_, Bz_] = -\[Mu]B * ge * (Sx*Bx+Sy*By+Sz*Bz)

Natural Units
An alternative choice of units, called natural units, is designed to simplify a calcu-
lation by making the numerical value of the largest possible number of quantities
equal to 1. In the present case, this would be achieved by relating the field and energy
units to each other in such a way that the Bohr magneton becomes equal to 1:
MagneticFieldUnit = Quantity["Gausses"];
EnergyUnit = MagneticFieldUnit * Quantity["BohrMagneton"] //UnitConvert;
\[Mu]B = Quantity["BohrMagneton"]/(EnergyUnit/MagneticFieldUnit) //UnitConvert
1.0000000

In this way, calculations can often be simplified substantially because the Hamiltonian
effectively becomes much simpler than it looks in other unit systems. We will be
coming back to this point in future calculations.

3.2.1 Time-Independent Schrödinger Equation

The time-independent Schrödinger equation for our spin-1/2 problem is, from Eq. (2.14),

$$-\frac{1}{2}\mu_B g_e \begin{pmatrix} B_z & B_x - iB_y \\ B_x + iB_y & -B_z \end{pmatrix} \cdot \vec{\psi} = E\,\vec{\psi} \tag{3.6}$$

The eigenvalues of the Hamiltonian (in our chosen energy units) and eigenvectors
are calculated with:
Eigensystem[H[Bx,By,Bz]]

As described in Sect. 1.10.4 the output is a list with two entries, the first being
a list of eigenvalues and the second a list of associated eigenvectors. As long as
the Hamiltonian matrix is Hermitian, the eigenvalues will all be real-valued; but
the eigenvectors can be complex. Since the Hilbert space of this spin problem has
dimension 2, and the basis contains two vectors, there are necessarily two eigenvalues
and two associated eigenvectors of length 2. The eigenvalues can be called $E_\pm = \pm\frac{1}{2}\mu_B g_e \|\vec{B}\|$. The list of eigenvalues is given in the Mathematica output as $\{E_-, E_+\}$.
Notice that these eigenvalues only depend on the magnitude of the magnetic field,
and not on its direction. This is to be expected: since there is no preferred axis in
this system, there cannot be any directional dependence. The choice of the basis as
the eigenstates of the Ŝz operator was entirely arbitrary, and therefore the energy
eigenvalues cannot depend on the orientation of the magnetic field with respect to
this quantization axis.
The associated eigenvectors are

$$\vec{\psi}_\pm = \left\{\frac{B_z \pm \|\vec{B}\|}{B_x + iB_y}, 1\right\}, \tag{3.7}$$

which Mathematica returns as a list of lists, $\{\vec{\psi}_-, \vec{\psi}_+\}$. Notice that these eigenvectors are not normalized.
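These analytic results are easy to verify numerically. The following Python sketch (an independent cross-check with made-up field components, not part of the book's code) confirms that the eigenvalues are $\pm\frac{1}{2}\mu_B g_e \|\vec{B}\|$ for an arbitrary field direction and that Eq. (3.7) indeed gives eigenvectors of H:

```python
import numpy as np

muB, ge = 1.3996245, -2.0023193043622   # MHz/G and electron g-factor, as in the text
Sx = np.array([[0, 1], [1, 0]]) / 2
Sy = np.array([[0, -1j], [1j, 0]]) / 2
Sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2

B = np.array([1.2, -0.7, 2.5])          # made-up field components in Gauss
H = -muB * ge * (B[0] * Sx + B[1] * Sy + B[2] * Sz)
normB = np.linalg.norm(B)

# eigenvalues are +-(1/2) muB ge |B|, independent of the field direction
evals = np.sort(np.linalg.eigvalsh(H))
print(np.allclose(evals, np.sort([0.5 * muB * ge * normB, -0.5 * muB * ge * normB])))

# unnormalized eigenvector from Eq. (3.7): H psi is proportional to psi
psi = np.array([(B[2] - normB) / (B[0] + 1j * B[1]), 1.0])
E = (psi.conj() @ H @ psi) / (psi.conj() @ psi)   # Rayleigh quotient
print(np.allclose(H @ psi, E * psi))  # True
```

Since only the field magnitude enters the eigenvalues, any rotated choice of B gives the same spectrum, which is the directional-independence argument made above.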

3.2.2 Exercises

Q3.4 Calculate the eigenvalues (in units of J) and eigenvectors (ortho-normalized) of an electron spin in a magnetic field of 1 T in the x-direction.
Q3.5 Set $\vec{B} = B[\vec{e}_x \sin(\vartheta)\cos(\varphi) + \vec{e}_y \sin(\vartheta)\sin(\varphi) + \vec{e}_z \cos(\vartheta)]$ and calculate the eigenvalues and normalized eigenvectors of the electron spin Hamiltonian.

3.3 Coupled Spin Systems: ⁸⁷Rb Hyperfine Structure [Supplementary Material 3]

Ground-state Rubidium-87 atoms consist of a nucleus with spin I = 3/2, a single valence electron (spin S = 1/2, orbital angular momentum L = 0, and therefore total spin J = 1/2), and 36 core electrons that do not contribute any angular momentum. In a magnetic field along the z-axis, the effective Hamiltonian of this system is³

$$\hat{H} = \hat{H}_0 + h A_{\text{hfs}}\, \hat{\vec{I}} \cdot \hat{\vec{J}} - \mu_B B_z (g_I \hat{I}_z + g_S \hat{S}_z + g_L \hat{L}_z), \tag{3.8}$$

³ See http://steck.us/alkalidata/rubidium87numbers.pdf.

where h is Planck's constant, $\mu_B$ is the Bohr magneton, $A_{\text{hfs}} = 3.417\,341\,305\,452\,145(45)\,\mathrm{GHz}$ is the spin–spin coupling constant in the ground state of ⁸⁷Rb, $g_I = +0.000\,995\,141\,4(10)$ is the nuclear g-factor, $g_S = -2.002\,319\,304\,362\,2(15)$ is the electron spin g-factor, and $g_L = -0.999\,993\,69$ is the electron orbital g-factor.
The first part $\hat{H}_0$ of Eq. (3.8) contains all electrostatic interactions, core electrons, nuclear interactions etc. We will assume that the system is in the ground state of $\hat{H}_0$, which means that the valence electron is in the $5^2\mathrm{S}_{1/2}$ state and the nucleus is deexcited. This ground state is eight-fold degenerate and consists of the four magnetic sublevels of the I = 3/2 nuclear spin, the two sublevels of the S = 1/2 electronic spin, and the single level of the L = 0 angular momentum. The basis for the description of this atom is therefore the tensor product basis of a spin-3/2, a spin-1/2, and a spin-0.⁴
The spin operators acting on this composite system are defined as in Sect. 2.4.2. For example, the nuclear-spin operator $\hat{I}_x$ is extended to the composite system by acting trivially on the electron spin and orbital angular momenta, $\hat{I}_x \to \hat{I}_x \otimes \mathbb{1} \otimes \mathbb{1}$. The electron-spin operators are defined accordingly, for example $\hat{S}_x \to \mathbb{1} \otimes \hat{S}_x \otimes \mathbb{1}$. The electron orbital angular momentum operators are, for example, $\hat{L}_x \to \mathbb{1} \otimes \mathbb{1} \otimes \hat{L}_x$.
In Mathematica these operators are defined with
Ix = KroneckerProduct[sx[3/2], id[1/2], id[0]];
Iy = KroneckerProduct[sy[3/2], id[1/2], id[0]];
Iz = KroneckerProduct[sz[3/2], id[1/2], id[0]];
Sx = KroneckerProduct[id[3/2], sx[1/2], id[0]];
Sy = KroneckerProduct[id[3/2], sy[1/2], id[0]];
Sz = KroneckerProduct[id[3/2], sz[1/2], id[0]];
Lx = KroneckerProduct[id[3/2], id[1/2], sx[0]];
Ly = KroneckerProduct[id[3/2], id[1/2], sy[0]];
Lz = KroneckerProduct[id[3/2], id[1/2], sz[0]];

The total electron angular momentum is $\hat{\vec{J}} = \hat{\vec{S}} + \hat{\vec{L}}$:
Jx = Sx + Lx; Jy = Sy + Ly; Jz = Sz + Lz;

The total angular momentum of the ⁸⁷Rb atom is $\hat{\vec{F}} = \hat{\vec{I}} + \hat{\vec{J}}$:
Fx = Ix + Jx; Fy = Iy + Jy; Fz = Iz + Jz;

Before defining the system’s Hamiltonian, we declare a system of units. Any system
will work here, so we stay with units commonly used in atomic physics: magnetic
fields are expressed in Gauss, while energies are expressed in MHz times Planck’s
constant. As time unit we choose the microsecond:
MagneticFieldUnit = Quantity["Gausses"];
EnergyUnit = Quantity["PlanckConstant"] * Quantity["Megahertz"];
TimeUnit = Quantity["Microseconds"];

⁴ The spin-0 subsystem is trivial and could be left out in principle. It is included here to show the method in a more general way.

The numerical values of the Bohr Magneton and the reduced Planck constant in these
units are
\[Mu]Bn = Quantity["BohrMagneton"]/(EnergyUnit/MagneticFieldUnit)
1.3996245
\[HBar]n = Quantity["ReducedPlanckConstant"]/(EnergyUnit*TimeUnit)
0.15915494

Using these definitions we define the hyperfine Hamiltonian with magnetic field in
the z-direction as
Hhf = A(Ix.Jx+Iy.Jy+Iz.Jz) - \[Mu]B*Bz*(gI*Iz+gS*Sz+gL*Lz);
hfc = {\[Mu]B -> \[Mu]Bn, \[HBar] -> \[HBar]n,
A->Quantity["PlanckConstant"]*Quantity[3.417341305452145,"GHz"]/EnergyUnit,
gS -> -2.0023193043622,
gL -> -0.99999369,
gI -> +0.0009951414};

This yields the Hamiltonian as an 8 × 8 matrix, and we can calculate its eigenvalues
and eigenvectors with
{eval, evec} = Eigensystem[Hhf] //FullSimplify;

We plot the energy eigenvalues with


Plot[Evaluate[eval /. hfc], {Bz, 0, 3000},
Frame -> True, FrameLabel -> {"Bz / G", "E / MHz"}]
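The structure of this Hamiltonian is easy to cross-check with an independent reimplementation. The Python sketch below (an illustration only; the trivial L = 0 factor and $g_L$ are omitted since a spin-0 operator contributes nothing) rebuilds the 8×8 matrix from Kronecker products and reproduces the field-free levels $-\frac{5}{4}A$ (three-fold) and $\frac{3}{4}A$ (five-fold) discussed in the eigenstate analysis below:

```python
import numpy as np

A = 3417.341305452145              # hyperfine constant of 87Rb in MHz (times h)
muB = 1.3996245                    # Bohr magneton in MHz/G
gI, gS = 0.0009951414, -2.0023193043622

def spin_ops(S):
    """(Sx, Sy, Sz) in the Dicke basis with decreasing M."""
    M = np.arange(S - 1, -S - 1, -1)
    sp = np.diag(np.sqrt(S * (S + 1) - M * (M + 1)), k=1)
    return (sp + sp.T) / 2, (sp - sp.T) / (2j), np.diag(np.arange(S, -S - 1, -1))

Iops = [np.kron(op, np.eye(2)) for op in spin_ops(1.5)]   # nuclear spin I = 3/2
Jops = [np.kron(np.eye(4), op) for op in spin_ops(0.5)]   # electron spin J = 1/2
IdotJ = sum(Ik @ Jk for Ik, Jk in zip(Iops, Jops))

def H(Bz):
    return A * IdotJ - muB * Bz * (gI * Iops[2] + gS * Jops[2])

# field-free spectrum in units of A: three eigenvalues -1.25, five eigenvalues 0.75
print(np.round(np.sort(np.linalg.eigvalsh(H(0))) / A, 4))
```

This matches the Mathematica result: $\vec{I}\cdot\vec{J} = [F(F+1) - I(I+1) - J(J+1)]/2$ gives $-5/4$ for F = 1 and $3/4$ for F = 2.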

3.3.1 Eigenstate Analysis

In this section we analyze the results eval and evec from the Hamiltonian diago-
nalization above. For this we first need to define ortho-normalized eigenvectors since
in general we cannot assume evec to be ortho-normalized.
In general we can always define an ortho-normalized eigenvector set with
nevec = Orthogonalize[evec]

The problem with this definition is, however, immediately apparent if you look at the
output given by Mathematica: since no assumptions on the reality of the variables

were made, the orthogonalization is done in too much generality and quickly becomes
unwieldy. Even using Assuming and ComplexExpand, as in Sect. 1.11, does not
give satisfactory results. But if we notice that the eigenvectors in evec are all purely
real-valued and are already orthogonal, then a simple vector-by-vector normalization
is sufficient for calculating an ortho-normalized eigenvector set:
nevec = #/Sqrt[#.#] & /@ evec;
nevec . Transpose[nevec] //FullSimplify

The fact that In[301] finds a unit matrix implies that the vectors in nevec are ortho-
normal.
Field-Free Limit
In the field-free limit Bz = 0 the energy levels are
Assuming[A > 0, Limit[eval, Bz -> 0]]
{3A/4, 3A/4, -5A/4, 3A/4, -5A/4, 3A/4, -5A/4, 3A/4}

We see that the level with energy $-\frac{5}{4}A$ is three-fold degenerate while the level with energy $\frac{3}{4}A$ is five-fold degenerate. This is also visible in the eigenvalue plot above. Considering that we have coupled two spins of lengths $I = \frac{3}{2}$ and $J = \frac{1}{2}$, we expect the composite system to have either total spin F = 1 (three sublevels) or F = 2 (five sublevels); we can make the tentative assignment that the F = 1 level is at energy $E_1 = -\frac{5}{4}A$ and the F = 2 level at $E_2 = \frac{3}{4}A$.
In order to demonstrate this assignment we express the matrix elements of the operators $\hat{F}^2$ and $\hat{F}_z$ in the field-free eigenstates, making sure to normalize these eigenstates before taking the limit $B_z \to 0$:
nevec0 = Assuming[A > 0, Limit[nevec, Bz -> 0]];
nevec0 . (Fx.Fx+Fy.Fy+Fz.Fz) . Transpose[nevec0]
nevec0 . Fz . Transpose[nevec0]

Notice that in these calculations we have used the fact that all eigenvectors are real, which may not always be the case for other Hamiltonians. We see that the field-free normalized eigenvectors nevec0 are eigenvectors of both $\hat{F}^2$ and $\hat{F}_z$, and from looking at the eigenvalues we can identify them as

$$\{|2,2\rangle, |2,-2\rangle, |1,0\rangle, |2,0\rangle, |1,1\rangle, |2,1\rangle, |1,-1\rangle, |2,-1\rangle\} \tag{3.9}$$

in the notation $|F, M_F\rangle$. These labels are often used to identify the energy eigenstates even for small $B_z \neq 0$.
Low-Field Limit

For small magnetic fields, we series-expand the energy eigenvalues to first order in
Bz :
Assuming[A > 0, Series[eval, {Bz, 0, 1}] //FullSimplify]

From these low-field terms, in combination with the field-free level assignment, we see that the F = 1 and F = 2 levels have effective g-factors of $g_1 = (-g_S + 5g_I)/4 \approx 0.501\,824$ and $g_2 = -(-g_S - 3g_I)/4 \approx -0.499\,833$, respectively, so that their energy eigenvalues follow the form

$$E_{F,M_F}(B_z) = E_F(0) - \mu_B M_F g_F B_z + \mathcal{O}(B_z^2). \tag{3.10}$$

These energy shifts due to the magnetic field are called Zeeman shifts.
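These effective g-factors follow directly from the numbers quoted below Eq. (3.8); a one-line numerical check (Python, values taken from the text) confirms the quoted approximations and the near-cancellation $g_1 \approx -g_2$ that will matter for the "magic" field in Sect. 3.3.2:

```python
gS, gI = -2.0023193043622, 0.0009951414

g1 = (-gS + 5 * gI) / 4    # effective g-factor of the F=1 level
g2 = -(-gS - 3 * gI) / 4   # effective g-factor of the F=2 level
print(round(g1, 6), round(g2, 6))  # 0.501824 -0.499833
```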
High-Field Limit
The energy eigenvalues in the high-field limit are infinite; but we can calculate their
lowest-order series expansions with
Assuming[\[Mu]B > 0 && gS < -gI < 0,
Series[eval, {Bz, Infinity, 0}] //FullSimplify]

From these expansions we can already identify the states in the eigenvalue plot above.
In order to calculate the eigenstates in the high-field limit we must again make sure to normalize the states before taking the limit $B_z \to \infty$:⁵
nevecinf = Assuming[\[Mu]B > 0 && gS < -gI < 0,
FullSimplify[Limit[nevec, Bz -> Infinity], A > 0]]
{{1, 0, 0, 0, 0, 0, 0, 0},
{0, 0, 0, 0, 0, 0, 0, 1},
{0, 0, 0, -1, 0, 0, 0, 0},
{0, 0, 0, 0, 1, 0, 0, 0},
{0, -1, 0, 0, 0, 0, 0, 0},
{0, 0, 1, 0, 0, 0, 0, 0},
{0, 0, 0, 0, 0, -1, 0, 0},
{0, 0, 0, 0, 0, 0, 1, 0}}

From this we immediately identify the high-field eigenstates as our basis states in a different order,

$$\{|\tfrac{3}{2},\tfrac{1}{2}\rangle, |{-\tfrac{3}{2}},{-\tfrac{1}{2}}\rangle, |\tfrac{1}{2},{-\tfrac{1}{2}}\rangle, |{-\tfrac{1}{2}},\tfrac{1}{2}\rangle, |\tfrac{3}{2},{-\tfrac{1}{2}}\rangle, |\tfrac{1}{2},\tfrac{1}{2}\rangle, |{-\tfrac{1}{2}},{-\tfrac{1}{2}}\rangle, |{-\tfrac{3}{2}},\tfrac{1}{2}\rangle\} \tag{3.11}$$

where we have used the abbreviation $|M_I, M_J\rangle = |\tfrac{3}{2}, M_I\rangle \otimes |\tfrac{1}{2}, M_J\rangle$. You can verify this assignment by looking at the matrix elements of the $\hat{I}_z$ and $\hat{J}_z$ operators with
nevecinf . Iz . Transpose[nevecinf]
nevecinf . Jz . Transpose[nevecinf]

⁵ Note that in In[308] we use two stages of assumptions, using the assumption A > 0 only in FullSimplify but not in Limit. This is done in order to work around an inconsistency in Mathematica 11.3.0.0, and may be simplified in a future edition.

3.3.2 “Magic” Magnetic Field

The energy eigenvalues of the low-field states $|1,-1\rangle$ and $|2,1\rangle$ have almost the same first-order magnetic field dependence since $g_1 \approx -g_2$ (see low-field limit above). If we plot their energy difference as a function of magnetic field we find an extremal point:
Plot[eval[[6]]-eval[[7]]-2A /. hfc, {Bz, 0, 6}]

At the "magic" field strength $B_0 = 3.228\,96\,\mathrm{G}$ the energy difference is independent of the magnetic field (to first order):
NMinimize[eval[[6]] - eval[[7]] - 2 A /. hfc, Bz]
{-0.00449737, {Bz -> 3.22896}}

This is an important discovery for quantum information science with ⁸⁷Rb atoms. If we store a qubit in the state $|\vartheta, \varphi\rangle = \cos(\vartheta/2)|1,-1\rangle + e^{i\varphi}\sin(\vartheta/2)|2,1\rangle$ and tune the magnetic field exactly to the magic value, then the experimentally unavoidable magnetic-field fluctuations will not lead to fluctuations of the energy difference between the two atomic levels and thus will not lead to qubit decoherence. Very long qubit coherence times can be achieved in this way.
For the present case where $|g_I| \ll |g_S|$, the magic field is approximately $B_z \approx \frac{16 A g_I}{3 \mu_B g_S^2}$.
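Both the exact extremum and the approximate formula can be checked numerically. This Python sketch is an independent reimplementation (the trivial L = 0 factor is omitted, and the level indices refer to the sorted spectrum rather than Mathematica's Eigensystem ordering): it builds the 8×8 Hamiltonian, picks out the $|1,-1\rangle$ and $|2,1\rangle$ energies, and locates the extremum of their difference.

```python
import numpy as np

A, muB = 3417.341305452145, 1.3996245        # MHz (times h), MHz/G
gI, gS = 0.0009951414, -2.0023193043622

def spin_ops(S):
    M = np.arange(S - 1, -S - 1, -1)
    sp = np.diag(np.sqrt(S * (S + 1) - M * (M + 1)), k=1)
    return (sp + sp.T) / 2, (sp - sp.T) / (2j), np.diag(np.arange(S, -S - 1, -1))

Iops = [np.kron(op, np.eye(2)) for op in spin_ops(1.5)]
Jops = [np.kron(np.eye(4), op) for op in spin_ops(0.5)]
IdotJ = sum(Ik @ Jk for Ik, Jk in zip(Iops, Jops))

def gap(Bz):
    """E(|2,1>) - E(|1,-1>) from the sorted spectrum (valid at small Bz)."""
    e = np.sort(np.linalg.eigvalsh(A * IdotJ - muB * Bz * (gI * Iops[2] + gS * Jops[2])))
    return e[6] - e[2]   # |2,1>: second-highest of F=2;  |1,-1>: highest of F=1

# scan for the field where the gap is extremal
B = np.linspace(0.01, 6, 2000)
B0 = B[np.argmin([gap(b) for b in B])]
print(round(B0, 2), round(16 * A * gI / (3 * muB * gS**2), 2))  # both near 3.23 G
```

The scan reproduces the NMinimize result $B_0 \approx 3.229\,\mathrm{G}$, and the approximate formula lands within a few mG of it.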

3.3.3 Coupling to an Oscillating Magnetic Field

In this section we study the coupling of a ⁸⁷Rb atom to a weak oscillating magnetic field. Such a field could be the magnetic part of an electromagnetic wave, whose electric field does not couple to our atom in the electronic ground state. This calculation is a template for more general situations where a quantum-mechanical system is driven by an oscillating field.

The ⁸⁷Rb hyperfine Hamiltonian in the presence of an oscillating magnetic field is

$$\hat{H}(t) = \underbrace{h A_{\text{hfs}}\, \hat{\vec{I}} \cdot \hat{\vec{J}} - \mu_B B_z (g_I \hat{I}_z + g_S \hat{S}_z + g_L \hat{L}_z)}_{\hat{H}_0} - \cos(\omega t) \times \underbrace{\mu_B \vec{B}^{\text{ac}} \cdot (g_I \hat{\vec{I}} + g_S \hat{\vec{S}} + g_L \hat{\vec{L}})}_{-\hat{H}_1} \tag{3.12}$$

where the static magnetic field is assumed to be in the z direction, as before. Unfortunately, $[\hat{H}(t), \hat{H}(t')] = [\hat{H}_1, \hat{H}_0]\left[\cos(\omega t) - \cos(\omega t')\right] \neq 0$ in general, so we cannot use the exact solution of Eq. (2.33) of the time-dependent Schrödinger equation. In fact, the time-dependent Schrödinger equation of this system has no analytic solution at all. In what follows we will calculate approximate solutions.
Since we have diagonalized the time-independent Hamiltonian $\hat{H}_0$ already, we use its eigenstates as a basis for calculating the effect of the oscillating perturbation $\hat{H}_1(t)$. In general, calling $\{|i\rangle\}_{i=1}^{8}$ the set of eigenstates of $\hat{H}_0$, with $\hat{H}_0|i\rangle = E_i|i\rangle$ for $i \in \{1 \ldots 8\}$, we expand the general hyperfine state as in Eq. (2.28),

$$|\psi(t)\rangle = \sum_{i=1}^{8} \psi_i(t)\, e^{-iE_i t/\hbar}\, |i\rangle. \tag{3.13}$$

The time-dependent Schrödinger equation for the expansion coefficients $\psi_i(t)$ in this interaction picture is given in Eq. (2.30): for $i = 1 \ldots 8$ we have

$$i\hbar\dot{\psi}_i(t) = \sum_{j=1}^{8} \psi_j(t)\, e^{-i(E_j - E_i)t/\hbar} \cos(\omega t)\, \langle i|\hat{H}_1|j\rangle = \frac{1}{2}\sum_{j=1}^{8} \psi_j(t)\left[e^{-i\left(\frac{E_j - E_i}{\hbar} - \omega\right)t} + e^{i\left(\frac{E_i - E_j}{\hbar} - \omega\right)t}\right] T_{ij}, \tag{3.14}$$

where we have replaced $\cos(\omega t) = \frac{1}{2}e^{i\omega t} + \frac{1}{2}e^{-i\omega t}$ and defined

$$T_{ij} = \langle i|\hat{H}_1|j\rangle = -\langle i|\, \mu_B \vec{B}^{\text{ac}} \cdot (g_I \hat{\vec{I}} + g_S \hat{\vec{S}} + g_L \hat{\vec{L}})\, |j\rangle. \tag{3.15}$$

From Eq. (3.14) we can proceed in various ways:
From Eq. (3.14) we can proceed in various ways:

Transition matrix elements: The time-independent matrix elements $T_{ij}$ of the perturbation Hamiltonian are called the transition matrix elements and describe how the populations of the different eigenstates of $\hat{H}_0$ are coupled through the oscillating field. We calculate them in Mathematica as follows:
H0 = A*(Ix.Jx + Iy.Jy + Iz.Jz) - \[Mu]B*Bz*(gS*Sz + gL*Lz + gI*Iz);
H1 = -\[Mu]B*(gS*(Bacx*Sx + Bacy*Sy + Bacz*Sz)
+ gI*(Bacx*Ix + Bacy*Iy + Bacz*Iz)
+ gL*(Bacx*Lx + Bacy*Ly + Bacz*Lz));
H[t_] = H0 + H1*Cos[\[Omega]*t];
{eval, evec} = Eigensystem[H0] //FullSimplify;
nevec = Map[#/Sqrt[#.#] &, evec];
T = Assuming[A > 0, nevec.H1.Transpose[nevec] //FullSimplify];

Looking at this matrix T we see that not all energy levels are directly coupled by an oscillating magnetic field. For example, $T_{1,2} = 0$ indicates that the populations of the states $|1\rangle$ and $|2\rangle$ can only be coupled indirectly through other states, but not directly (hint: check T[[1,2]]).
Numerical solution: Equation (3.14) is a series of linear coupled differential equa-
tions, which we write down explicitly in Mathematica with
deqs = Table[I*\[HBar]*Subscript[\[Psi],i]’[t] ==
Sum[Subscript[\[Psi],j][t]*Exp[-I*(eval[[j]]-eval[[i]])*t/\[HBar]]
*Cos[\[Omega]*t]*T[[i,j]], {j, 8}], {i,8}];

Assuming concrete conditions, for example the initial state $|\psi(t=0)\rangle = |F=2, M_F=-2\rangle$ which is the second eigenstate nevec[[2]] [see Eq. (3.9)], and magnetic fields $B_z = 3.228\,96\,\mathrm{G}$, $B_x^{\text{ac}} = 100\,\mathrm{mG}$, $B_y^{\text{ac}} = B_z^{\text{ac}} = 0$, and an ac field angular frequency of $\omega = 2\pi \times 6827.9\,\mathrm{MHz}$, we can find the time-dependent state $|\psi(t)\rangle$ with
S = NDSolve[Join[deqs /. hfc /.{Bz->3.22896, Bacx->0.1, Bacy->0, Bacz->0,
\[Omega]->2*\[Pi]*6827.9},
{Subscript[\[Psi],1][0]==0,Subscript[\[Psi],2][0]==1,
Subscript[\[Psi],3][0]==0,Subscript[\[Psi],4][0]==0,
Subscript[\[Psi],5][0]==0,Subscript[\[Psi],6][0]==0,
Subscript[\[Psi],7][0]==0,Subscript[\[Psi],8][0]==0}],
Table[Subscript[\[Psi],i][t],{i,8}], {t, 0, 30},
MaxStepSize->10ˆ(-5), MaxSteps->10ˆ7]

Notice that the maximum step size in this numerical solution is very small ($10^{-5}$ time units or 10 ps), since it needs to capture the fast oscillations of more than 6.8 GHz. As a result, a large number of numerical steps is required, which makes this way of studying the evolution very difficult in practice.
We plot the resulting populations with
Plot[Evaluate[Abs[Subscript[\[Psi],2][t] /. S[[1]]]ˆ2], {t, 0, 30}]

Plot[Evaluate[Abs[Subscript[\[Psi],7][t] /. S[[1]]]ˆ2], {t, 0, 30}]



We see that the population is mostly sloshing between $\hat{H}_0$-eigenstates $|2\rangle \approx |F=2, M_F=-2\rangle$ and $|7\rangle \approx |F=1, M_F=-1\rangle$ [see Eq. (3.9)]. Each population oscillation takes about 8.2 µs (the Rabi period), and we say that the Rabi frequency is about 120 kHz.
  
Rotating-wave approximation: The time-dependent prefactor $\exp\left[-i\left(\frac{E_j - E_i}{\hbar} - \omega\right)t\right] + \exp\left[i\left(\frac{E_i - E_j}{\hbar} - \omega\right)t\right]$ of Eq. (3.14) oscillates very rapidly unless either $\frac{E_j - E_i}{\hbar} - \omega \approx 0$ or $\frac{E_i - E_j}{\hbar} - \omega \approx 0$, where one of its terms changes slowly in time. The rotating-wave approximation (RWA) consists of neglecting all rapidly rotating terms in Eq. (3.14). Assume that there is a single⁶ pair of states $|i\rangle$ and $|j\rangle$ such that $E_i - E_j \approx \hbar\omega$, with $E_i > E_j$, while all other states have an energy difference far from $\hbar\omega$. The RWA thus consists of simplifying Eq. (3.14) to

$$i\hbar\dot{\psi}_i(t) \approx \frac{1}{2}\psi_j(t)\, e^{i\left(\frac{E_i - E_j}{\hbar} - \omega\right)t}\, T_{ij} = \frac{1}{2}\psi_j(t)\, T_{ij}\, e^{-i\Delta t}$$
$$i\hbar\dot{\psi}_j(t) \approx \frac{1}{2}\psi_i(t)\, e^{-i\left(\frac{E_i - E_j}{\hbar} - \omega\right)t}\, T_{ji} = \frac{1}{2}\psi_i(t)\, T_{ji}\, e^{i\Delta t}$$
$$i\hbar\dot{\psi}_k(t) \approx 0 \quad \text{for } k \notin \{i, j\} \tag{3.16}$$

with $T_{ji} = T_{ij}^*$ and the detuning $\Delta = \omega - (E_i - E_j)/\hbar$. All other terms in Eq. (3.14) have been neglected because they rotate so fast in time that they "average out" to zero. This approximate system of differential equations has the exact solution

$$\psi_i(t) = e^{-\frac{i}{2}\Delta t}\left\{\psi_i(0)\cos\left(\frac{\Omega t}{2}\right) + i\left[\frac{\Delta}{\Omega}\psi_i(0) - \frac{T_{ij}}{\hbar\Omega}\psi_j(0)\right]\sin\left(\frac{\Omega t}{2}\right)\right\}$$
$$\psi_j(t) = e^{\frac{i}{2}\Delta t}\left\{\psi_j(0)\cos\left(\frac{\Omega t}{2}\right) - i\left[\frac{\Delta}{\Omega}\psi_j(0) + \frac{T_{ij}^*}{\hbar\Omega}\psi_i(0)\right]\sin\left(\frac{\Omega t}{2}\right)\right\}$$
$$\psi_k(t) = \psi_k(0) \quad \text{for } k \notin \{i, j\} \tag{3.17}$$

⁶ The following derivation is readily extended to situations where several pairs of states have an energy difference approximately equal to ℏω. In such a case we need to solve a larger system of coupled differential equations.

in terms of the generalized Rabi frequency $\Omega = \sqrt{|T_{ij}|^2/\hbar^2 + \Delta^2}$. We can see that the population sloshes back and forth ("Rabi oscillation") between the two levels $|i\rangle$ and $|j\rangle$ with angular frequency $\Omega$, as we had seen numerically above.
We can verify this solution in Mathematica as follows. First we define
\[CapitalDelta] = \[Omega] - (Ei-Ej)/\[HBar];
\[CapitalOmega] = Sqrt[Tij*Tji/\[HBar]ˆ2 + \[CapitalDelta]ˆ2];

and the solutions


\[Psi]i[t_] = Eˆ(-I*\[CapitalDelta]*t/2)*(\[Psi]i0*Cos[\[CapitalOmega]*t/2]+I*(\[CapitalDelta]/\[CapitalOmega]*\[Psi]i0-Tij/(\[HBar]*\[CapitalOmega])*\[Psi]j0)
*Sin[\[CapitalOmega]*t/2]);
\[Psi]j[t_] = Eˆ(I*\[CapitalDelta]*t/2)*(\[Psi]j0*Cos[\[CapitalOmega]*t/2]-I*(\[CapitalDelta]/\[CapitalOmega]*\[Psi]j0+Tji/(\[HBar]*\[CapitalOmega])*\[Psi]i0)
*Sin[\[CapitalOmega]*t/2]);

With these definitions, we can check the Schrödinger equations (3.16):


FullSimplify[I*\[HBar]*\[Psi]i’[t] == (1/2) * \[Psi]j[t] * Exp[-I*\[CapitalDelta]*t]*Tij]
True
FullSimplify[I*\[HBar]*\[Psi]j’[t] == (1/2) * \[Psi]i[t] * Exp[I*\[CapitalDelta]*t]*Tji]
True

as well as the initial conditions


\[Psi]i[0]
\[Psi]i0
\[Psi]j[0]
\[Psi]j0
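The same verification can be done against a direct numerical integration of the two-level equations (3.16). The Python sketch below (an independent illustration with made-up values of $T_{ij}$ and $\Delta$, and ℏ = 1 for brevity) integrates Eq. (3.16) with a fourth-order Runge–Kutta scheme and compares the result with the analytic solution, Eq. (3.17):

```python
import numpy as np

hbar = 1.0
Tij = 0.8 - 0.3j          # made-up coupling matrix element
Delta = 0.5               # made-up detuning
Omega = np.sqrt(abs(Tij)**2 / hbar**2 + Delta**2)

def rhs(t, y):
    """Eq. (3.16): i hbar dpsi_i/dt = (1/2) psi_j Tij e^{-i Delta t}, and conjugate for j."""
    pi_, pj_ = y
    return np.array([-0.5j / hbar * pj_ * Tij * np.exp(-1j * Delta * t),
                     -0.5j / hbar * pi_ * np.conj(Tij) * np.exp(1j * Delta * t)])

# fourth-order Runge-Kutta integration, starting in level |i>
y, t, dt = np.array([1.0 + 0j, 0.0 + 0j]), 0.0, 1e-3
while t < 10.0 - 1e-12:
    k1 = rhs(t, y)
    k2 = rhs(t + dt/2, y + dt/2 * k1)
    k3 = rhs(t + dt/2, y + dt/2 * k2)
    k4 = rhs(t + dt, y + dt * k3)
    y, t = y + dt/6 * (k1 + 2*k2 + 2*k3 + k4), t + dt

# analytic solution, Eq. (3.17), with psi_i(0)=1 and psi_j(0)=0
pi_t = np.exp(-0.5j*Delta*t) * (np.cos(Omega*t/2) + 1j*Delta/Omega*np.sin(Omega*t/2))
pj_t = np.exp(0.5j*Delta*t) * (-1j*np.conj(Tij)/(hbar*Omega)) * np.sin(Omega*t/2)
print(np.allclose(y, [pi_t, pj_t], atol=1e-8))  # True
```

Since Eq. (3.17) is an exact solution of Eq. (3.16), the integrator agrees to within its own discretization error, and the total population stays normalized.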

Dressed states: If we insert the RWA solutions, Eq. (3.17), into the definition of the general hyperfine state, Eq. (3.13), and set all coefficients $\psi_k = 0$ for $k \notin \{i, j\}$, and then write $\sin(z) = (e^{iz} - e^{-iz})/(2i)$ and $\cos(z) = (e^{iz} + e^{-iz})/2$, we find the state

$$|\psi(t)\rangle \approx \psi_i(t)\, e^{-iE_i t/\hbar}|i\rangle + \psi_j(t)\, e^{-iE_j t/\hbar}|j\rangle$$
$$= \frac{1}{2} e^{-i[E_i - \hbar(\Omega - \Delta)/2]t/\hbar} \left\{ \left[\psi_i(0)\left(1 + \frac{\Delta}{\Omega}\right) - \psi_j(0)\frac{T_{ij}}{\hbar\Omega}\right]|i\rangle + \left[\psi_j(0)\left(1 - \frac{\Delta}{\Omega}\right) - \psi_i(0)\frac{T_{ij}^*}{\hbar\Omega}\right] e^{i\omega t}|j\rangle \right\}$$
$$+ \frac{1}{2} e^{-i[E_i + \hbar(\Omega + \Delta)/2]t/\hbar} \left\{ \left[\psi_i(0)\left(1 - \frac{\Delta}{\Omega}\right) + \psi_j(0)\frac{T_{ij}}{\hbar\Omega}\right]|i\rangle + \left[\psi_j(0)\left(1 + \frac{\Delta}{\Omega}\right) + \psi_i(0)\frac{T_{ij}^*}{\hbar\Omega}\right] e^{i\omega t}|j\rangle \right\}. \tag{3.18}$$

In order to interpret this state more clearly, we need to expand our view of the problem to include the quantized driving field. For this we assume that the driving mode of the field (for example, the used mode of the electromagnetic field) in state $|n\rangle$ contains n quanta of vibration (for example, photons), and has an energy of $E_n = n\hbar\omega$. The two states $|i\rangle$ and $|j\rangle$ describing our system, with $E_i - E_j \approx \hbar\omega$, actually correspond to states in the larger system containing the driving field. In this sense, we can say that the state $|i,n\rangle$, with the system in state $|i\rangle$ and the driving field containing n quanta, is approximately resonant with the state $|j,n+1\rangle$, with the system in state $|j\rangle$ and the driving field containing n+1 quanta. A transition from $|i\rangle$ to $|j\rangle$ is actually a transition from

$|i,n\rangle$ to $|j,n+1\rangle$, where one quantum is added simultaneously to the driving field in order to conserve energy (approximately). A transition from $|j\rangle$ to $|i\rangle$ corresponds to the system absorbing one quantum from the driving field.
The energy of the quantized driving field contributes an additional time dependence

$$|i\rangle \to |i,n\rangle\, e^{-in\omega t}, \qquad |j\rangle \to |j,n+1\rangle\, e^{-i(n+1)\omega t}, \tag{3.19}$$

and Eq. (3.18) thus becomes

$$|\psi(t)\rangle \approx \frac{1}{2} e^{-i[E_i + n\hbar\omega + \hbar(\Delta - \Omega)/2]t/\hbar} \left\{ \left[\psi_i(0)\left(1 + \frac{\Delta}{\Omega}\right) - \psi_j(0)\frac{T_{ij}}{\hbar\Omega}\right]|i,n\rangle + \left[\psi_j(0)\left(1 - \frac{\Delta}{\Omega}\right) - \psi_i(0)\frac{T_{ij}^*}{\hbar\Omega}\right]|j,n+1\rangle \right\}$$
$$+ \frac{1}{2} e^{-i[E_i + n\hbar\omega + \hbar(\Delta + \Omega)/2]t/\hbar} \left\{ \left[\psi_i(0)\left(1 - \frac{\Delta}{\Omega}\right) + \psi_j(0)\frac{T_{ij}}{\hbar\Omega}\right]|i,n\rangle + \left[\psi_j(0)\left(1 + \frac{\Delta}{\Omega}\right) + \psi_i(0)\frac{T_{ij}^*}{\hbar\Omega}\right]|j,n+1\rangle \right\}$$
$$= \frac{1}{2} e^{-iE_- t/\hbar}|-\rangle + \frac{1}{2} e^{-iE_+ t/\hbar}|+\rangle \tag{3.20}$$

With this substitution, the state consists of two components, called dressed states,

$$|\pm\rangle = \left[\psi_i(0)\left(1 \mp \frac{\Delta}{\Omega}\right) \pm \psi_j(0)\frac{T_{ij}}{\hbar\Omega}\right]|i,n\rangle + \left[\psi_j(0)\left(1 \pm \frac{\Delta}{\Omega}\right) \pm \psi_i(0)\frac{T_{ij}^*}{\hbar\Omega}\right]|j,n+1\rangle, \tag{3.21}$$

that are time-invariant apart from their energy (phase) prefactors. These energy prefactors correspond to the effective energy of the dressed states in the presence of the oscillating field,⁷

$$E_\pm = E_i + n\hbar\omega + \frac{\hbar(\Delta \pm \Omega)}{2} = E_j + (n+1)\hbar\omega + \frac{\hbar(-\Delta \pm \Omega)}{2}. \tag{3.22}$$
We look at these dressed states in two limits:
• On resonance ($\Delta = 0$), we have $\hbar\Omega = |T_{ij}|$, and the dressed states of Eq. (3.21) become

$$|\pm\rangle = \left[\psi_i(0) \pm \psi_j(0)\frac{T_{ij}}{|T_{ij}|}\right]|i,n\rangle + \left[\psi_j(0) \pm \psi_i(0)\frac{T_{ij}^*}{|T_{ij}|}\right]|j,n+1\rangle = \left[\psi_i(0) \pm \psi_j(0)\frac{T_{ij}}{|T_{ij}|}\right]\left(|i,n\rangle \pm \frac{T_{ij}^*}{|T_{ij}|}|j,n+1\rangle\right), \tag{3.23}$$

which are equal mixtures of the original states $|i,n\rangle$ and $|j,n+1\rangle$. They have energies

⁷ The instantaneous energy of a state is defined as $E = \langle\hat{H}\rangle = i\hbar\langle\frac{\partial}{\partial t}\rangle$. For a state $|\psi(t)\rangle = e^{-i\omega t}|\phi\rangle$ the energy is $E = i\hbar\langle\psi(t)|\frac{\partial}{\partial t}|\psi(t)\rangle = i\hbar\langle\phi|e^{i\omega t}\frac{\partial}{\partial t}e^{-i\omega t}|\phi\rangle = \hbar\omega$.

$$E_\pm = E_i + n\hbar\omega \pm \frac{1}{2}|T_{ij}| = E_j + (n+1)\hbar\omega \pm \frac{1}{2}|T_{ij}| \tag{3.24}$$

in the presence of a resonant ac coupling field: the degeneracy of the levels $|i,n\rangle$ and $|j,n+1\rangle$ is lifted, and the dressed states are split by $E_+ - E_- = |T_{ij}|$.
• Far off-resonance (Δ → ±∞) we have Ω ≈ |Δ| + |T_ij|²/(2ℏ²|Δ|), and Eq. (3.20)
becomes

    |ψ(t)⟩ ≈ e^{−i[E_i + nℏω − |T_ij|²/(4ℏΔ)] t/ℏ} ψ_i(0) |i, n⟩ + e^{−i[E_j + (n+1)ℏω + |T_ij|²/(4ℏΔ)] t/ℏ} ψ_j(0) |j, n+1⟩.     (3.25)

(Hint: to verify this, look at the cases Δ → +∞ and Δ → −∞ separately.)
The energy levels |i, n⟩ and |j, n+1⟩ are thus shifted by ∓|T_ij|²/(4ℏΔ), respectively,
and there is no population transfer between the levels. That is, the dressed states
become equal to the original states. Remember that we had assumed E_i > E_j:
– For a blue-detuned drive (Δ → +∞), the upper level |i⟩ is lowered in energy
  by δE = |T_ij|²/(4ℏΔ), while the lower level |j⟩ is raised in energy by δE.
– For a red-detuned drive (Δ → −∞), the upper level |i⟩ is raised in energy
  by δE = |T_ij|²/(4ℏ|Δ|), while the lower level |j⟩ is lowered in energy by δE.
These shifts are called ac Zeeman shifts in this case, or level shifts more generally.
When the oscillating field is a light field, level shifts are often called light shifts
or ac Stark shifts.
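Both limiting cases can be verified numerically with the two-level rotating-frame Hamiltonian in the {|i, n⟩, |j, n+1⟩} basis. The following Python/NumPy sketch is not part of the book's Mathematica material; T and Δ are arbitrarily chosen values, and ℏ = 1:

```python
import numpy as np

hbar = 1.0

def dressed_energies(T, Delta):
    # Rotating-frame two-level Hamiltonian in the {|i,n>, |j,n+1>} basis,
    # measured relative to the mean dressed energy E_i + n*hbar*w + hbar*Delta/2;
    # the |i,n> level sits on the diagonal at -hbar*Delta/2, cf. Eq. (3.22).
    H2 = 0.5 * np.array([[-hbar*Delta, T],
                         [np.conj(T), hbar*Delta]])
    return np.linalg.eigvalsh(H2)   # ascending order: E_-, E_+

# On resonance the splitting E_+ - E_- equals hbar*Omega = |T_ij|:
T = 0.3 + 0.4j
Em, Ep = dressed_energies(T, 0.0)
print(Ep - Em)                      # approximately |T| = 0.5

# Far blue-detuned: the |i,n>-like level (E_-) is lowered by |T|^2/(4*hbar*Delta):
T, Delta = 0.2, 50.0
Em, Ep = dressed_energies(T, Delta)
print(Em + hbar*Delta/2, -abs(T)**2/(4*hbar*Delta))   # both approximately -2e-4
```

The same check works for red detuning by flipping the sign of Δ.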

3.3.4 Exercises

Q3.6 Take two angular momenta, for example I = 3 and J = 5, and calculate the
     eigenvalues of the operators Î², Î_z, Ĵ², Ĵ_z, F̂², and F̂_z, where F̂ = Î + Ĵ.
Q3.7 In Q3.6 you have coupled two angular momenta but you have not used any
     Clebsch–Gordan coefficients. Why not? Where do these coefficients appear?
Q3.8 For a spin of a certain length, for example S = 100, take the state |S, S⟩ (a
     spin pointing in the +z direction) and calculate the expectation values ⟨Ŝ_x⟩,
     ⟨Ŝ_y⟩, ⟨Ŝ_z⟩, ⟨Ŝ_x²⟩ − ⟨Ŝ_x⟩², ⟨Ŝ_y²⟩ − ⟨Ŝ_y⟩², ⟨Ŝ_z²⟩ − ⟨Ŝ_z⟩². Hint: the expectation
     value of an operator Â is ⟨S, S|Â|S, S⟩.
Q3.9 Use In[323] and In[324] to calculate the detuning Δ and the generalized
     Rabi frequency Ω for the 87Rb solution of In[320], where the population
     oscillates between the levels i = 2 and j = 7. What is the oscillation period
     corresponding to Ω? Does it match the plots of In[321] and In[322]?
Q3.10 Do the presented alkali atom calculation for 23 Na: are there any magic field
values?
http://steck.us/alkalidata/sodiumnumbers.pdf

Q3.11 Do the presented alkali atom calculation for 85 Rb: are there any magic field
values?
http://steck.us/alkalidata/rubidium85numbers.pdf
Q3.12 Do the presented alkali atom calculation for 133 Cs: are there any magic field
values?
http://steck.us/alkalidata/cesiumnumbers.pdf
Q3.13 Set B = 0 and B^ac = B(e_x + i e_y) in the expression for T̂ in In[318]. Which
      transitions are allowed for such circularly-polarized light around the quanti-
      zation axis? Hint: use Eq. (3.9) to identify the states.
Q3.14 Set B = 0 and B^ac = B e_z in the expression for T̂ in In[318]. Which tran-
      sitions are allowed for such linearly-polarized light along the quantization
      axis? Hint: use Eq. (3.9) to identify the states.

3.4 Coupled Spin Systems: Ising Model in a Transverse
    Field [Supplementary Material 4]

We now turn to larger numbers of coupled quantum-mechanical spins. A large class
of such coupled spin systems can be described with Hamiltonians of the form


    Ĥ = Σ_{k=1}^{N} Ĥ^(k) + Σ_{k=1}^{N−1} Σ_{k′=k+1}^{N} Ĥ_int^(k,k′),     (3.26)

where the Ĥ^(k) are single-spin Hamiltonians (for example couplings to a magnetic
field) and the Ĥ_int^(k,k′) are coupling Hamiltonians between two spins. Direct couplings
between three or more spins can usually be neglected.
As an example we study the dimensionless “transverse Ising” Hamiltonian
As an example we study the dimensionless “transverse Ising” Hamiltonian

    Ĥ = −(b/2) Σ_{k=1}^{N} Ŝ_x^(k) − Σ_{k=1}^{N} Ŝ_z^(k) Ŝ_z^(k+1)     (3.27)

acting on a ring of N spin-S systems where the (N + 1)st spin is identified with the
first spin. We can read off three limits from this Hamiltonian:
• For b → ±∞ the spin–spin coupling Hamiltonian can be neglected, and the
ground state will have all spins aligned with the ±x direction,

    |ψ_{+∞}⟩ = |+x⟩^{⊗N},    |ψ_{−∞}⟩ = |−x⟩^{⊗N}.     (3.28)

The system is therefore in a product state for b → ±∞, which means that there
is no entanglement between spins. In the basis of |S, M⟩ Dicke states, Eqs. (3.1)
and (3.2), the single-spin states making up these product states are


    |+x⟩ = 2^{−S} Σ_{M=−S}^{S} √C(2S, M+S) |S, M⟩,     (3.29a)
    |−x⟩ = 2^{−S} Σ_{M=−S}^{S} (−1)^{M+S} √C(2S, M+S) |S, M⟩,     (3.29b)

where C(2S, M+S) denotes a binomial coefficient. These states are aligned with
the x-axis in the sense that Ŝ_x|+x⟩ = S|+x⟩ and Ŝ_x|−x⟩ = −S|−x⟩.
• For b = 0 the Hamiltonian contains only nearest-neighbor ferromagnetic spin–
spin couplings −Ŝ_z^(k) Ŝ_z^(k+1). We know that this Hamiltonian has two degenerate
ground states: all spins pointing up or all spins pointing down,

    |ψ_{0↑}⟩ = |+z⟩^{⊗N},    |ψ_{0↓}⟩ = |−z⟩^{⊗N},     (3.30)

where in the Dicke-state representation of Eq. (3.1) we have |+z⟩ = |S, +S⟩ and
|−z⟩ = |S, −S⟩. While these two states are product states, for |b| ≪ 1 the per-
turbing Hamiltonian −(b/2) Σ_{k=1}^{N} Ŝ_x^(k) is diagonal in the states (|ψ_{0↑}⟩ ± |ψ_{0↓}⟩)/√2,
which are not product states. The exact ground state for 0 < b ≪ 1 is close to
(|ψ_{0↑}⟩ + |ψ_{0↓}⟩)/√2, and for −1 ≪ b < 0 it is close to (|ψ_{0↑}⟩ − |ψ_{0↓}⟩)/√2. These
are both maximally entangled states (“Schrödinger cat states”).
Now we calculate the ground state |ψ_b⟩ as a function of the parameter b, and compare
the results to the above asymptotic limits.

3.4.1 Basis Set

The natural basis set for describing a set of N coupled spins is the tensor-product
basis (see Sect. 2.4.2). In this basis, the spin operators Ŝ_{x,y,z}^(k) acting only on spin k
are defined as having a trivial action on all other spins, for example

    Ŝ_x^(k) → 1 ⊗ 1 ⊗ ⋯ ⊗ 1 ⊗ Ŝ_x ⊗ 1 ⊗ ⋯ ⊗ 1,     (3.31)

with k−1 identity factors to the left of Ŝ_x and N−k identity factors to the right.

In Mathematica such single-spin-S operators acting on spin k out of a set of N spins
are defined as follows. First we define the operator acting as â = a on the kth spin
out of a set of n spins, and trivially on all others:
op[S_?SpinQ, n_Integer, k_Integer, a_?MatrixQ] /;
  1<=k<=n && Dimensions[a] == {2S+1,2S+1} :=
  KroneckerProduct[IdentityMatrix[(2S+1)^(k-1), SparseArray],
                   a,
                   IdentityMatrix[(2S+1)^(n-k), SparseArray]]
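The nested KroneckerProduct construction has a direct analogue in other languages; here is a dense Python/NumPy stand-in (the Mathematica version above uses sparse arrays for efficiency, which this sketch does not):

```python
import numpy as np

def op(d, n, k, a):
    # Embed the d x d single-site operator a at site k (1-based) of an
    # n-site chain: identity on sites 1..k-1, then a, then identity on k+1..n.
    return np.kron(np.kron(np.eye(d**(k - 1)), a), np.eye(d**(n - k)))

sz = np.diag([0.5, -0.5])     # single spin-1/2 z operator (hbar = 1)
A = op(2, 3, 2, sz)           # Sz acting on the middle spin of three
print(A.shape)                # (8, 8)
```

For large n one would switch to a sparse representation, exactly as the book does with SparseArray.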

Next, we specialize this to â = Ŝx , Ŝ y , Ŝz :


sx[S_?SpinQ, n_Integer, k_Integer] /; 1<=k<=n := op[S, n, k, sx[S]]
sy[S_?SpinQ, n_Integer, k_Integer] /; 1<=k<=n := op[S, n, k, sy[S]]
sz[S_?SpinQ, n_Integer, k_Integer] /; 1<=k<=n := op[S, n, k, sz[S]]

Notice that we have used the variable n in place of N, because the symbol N is
already used internally in Mathematica.
From these we assemble the Hamiltonian:
H[S_?SpinQ, n_Integer/;n>=3, b_] := -b/2*Sum[sx[S, n, k], {k, n}] -
Sum[sz[S, n, k].sz[S, n, Mod[k+1,n,1]], {k, n}]

The modulus Mod[k+1,n,1] represents the periodicity of the spin ring and ensures
that the index remains within 1 . . . N (i.e., a modulus with offset 1).
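As an independent cross-check of this construction, a dense Python/NumPy sketch of the same spin-1/2 ring (illustrative helper names; feasible only for small N) reproduces the two degenerate b = 0 ground states with energy −N S² = −N/4:

```python
import numpy as np

sx1 = np.array([[0, 0.5], [0.5, 0]])
sz1 = np.array([[0.5, 0], [0, -0.5]])

def op(n, k, a):
    # single-site operator a on spin k (1-based) of an n-spin chain
    return np.kron(np.kron(np.eye(2**(k - 1)), a), np.eye(2**(n - k)))

def H(n, b):
    # transverse Ising ring of Eq. (3.27) for S = 1/2
    h = sum(-b/2 * op(n, k, sx1) for k in range(1, n + 1))
    for k in range(1, n + 1):
        kk = k % n + 1                 # periodic neighbor, like Mod[k+1, n, 1]
        h = h - op(n, k, sz1) @ op(n, kk, sz1)
    return h

E = np.linalg.eigvalsh(H(4, 0.0))
print(E[0], E[1])   # doubly degenerate ferromagnetic ground state at -1.0
```

Flipping a single spin breaks two bonds and raises the energy by 1, which is why the next level sits at 0.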

3.4.2 Asymptotic Ground States

The asymptotic ground states for b = 0 and b → ±∞ mentioned above are all product
states of the form |ψ⟩ = |θ⟩^{⊗N} where |θ⟩ is the state of a single spin. We form
an N-particle tensor product state of such single-spin states with
productstate[\[Theta]_?VectorQ, 1] = \[Theta];
productstate[\[Theta]_?VectorQ, n_Integer/;n>=2] :=
Flatten[KroneckerProduct @@ Table[\[Theta], n]]

in accordance with In[236]; notice that the case N = 1 requires special attention.
The particular single-spin states |+x⟩, |−x⟩, |+z⟩, |−z⟩ we will be using are
xup[S_?SpinQ] := 2^(-S)*Table[Sqrt[Binomial[2S,M+S]],{M,S,-S,-1}]
xdn[S_?SpinQ] := 2^(-S)*Table[(-1)^(M+S)*Sqrt[Binomial[2S,M+S]], {M,S,-S,-1}]
zup[S_?SpinQ] := SparseArray[1 -> 1, 2S+1]
zdn[S_?SpinQ] := SparseArray[-1 -> 1, 2S+1]

We can check that these are correct with


Table[sx[S].xup[S] == S*xup[S], {S, 0, 4, 1/2}]
{True, True, True, True, True, True, True, True, True}
Table[sx[S].xdn[S] == -S*xdn[S], {S, 0, 4, 1/2}]
{True, True, True, True, True, True, True, True, True}
Table[sz[S].zup[S] == S*zup[S], {S, 0, 4, 1/2}]
{True, True, True, True, True, True, True, True, True}
Table[sz[S].zdn[S] == -S*zdn[S], {S, 0, 4, 1/2}]
{True, True, True, True, True, True, True, True, True}

From these we construct the product states


allxup[S_?SpinQ,n_Integer/;n>=1] := productstate[xup[S],n]
allxdn[S_?SpinQ,n_Integer/;n>=1] := productstate[xdn[S],n]
allzup[S_?SpinQ,n_Integer/;n>=1] := productstate[zup[S],n]
allzdn[S_?SpinQ,n_Integer/;n>=1] := productstate[zdn[S],n]

3.4.3 Hamiltonian Diagonalization

We find the m lowest-energy eigenstates of this Hamiltonian with the procedures


described in Sect. 1.10.4: for example, with S = 1/2 and N = 20,8
With[{S = 1/2, n = 20},
(* Hamiltonian *)
h[b_] = H[S, n, b];
(* two degenerate ground states for b=0 *)
gs0up = allzup[S, n];
gs0dn = allzdn[S, n];
(* ground state for b=+Infinity *)
gsplusinf = allxup[S, n];
(* ground state for b=-Infinity *)
gsminusinf = allxdn[S, n];
(* numerically calculate lowest m eigenstates *)
Clear[gs];
gs[b_?NumericQ, m_Integer /; m>=1] := gs[b, m] = -Eigensystem[-h[N[b]], m,
  Method -> {"Arnoldi", "Criteria" -> "RealPart", MaxIterations -> 10^6}] //
Transpose //Sort //Transpose;
]

Comments:
• gs0up = |ψ_{0↑}⟩ and gs0dn = |ψ_{0↓}⟩ are the exact degenerate ground states
for b = 0; gsplusinf = |ψ_{+∞}⟩ and gsminusinf = |ψ_{−∞}⟩ are the exact
nondegenerate ground states for b = ±∞.
• The function gs, which calculates the m lowest-lying eigenstates of the Hamil-
tonian, remembers its calculated values (see Sect. 1.6.3): this is important here
because such eigenstate calculations can take a long time when n is large.
• The function gs numerically calculates the eigenvalues using h[N[b]] as
a Hamiltonian, which ensures that the Hamiltonian contains floating-point
machine-precision numbers instead of exact numbers in case b is given as
an exact number. Calculating the eigenvalues and eigenvectors of a matrix of
exact numbers takes extremely long (please try: on line 13 of In[350] replace
-Eigensystem[-h[N[b]], ... with -Eigensystem[-h[b], ...
and compare the run time of gs[1, 2] with that of gs[1.0, 2].).
• The operations //Transpose //Sort //Transpose on line 15 of In[350]
ensure that the eigenvalues (and associated eigenvectors) are sorted in ascending
energy order (see In[150]).
• When the ground state is degenerate, which happens here for b ≈ 0, the Arnoldi
algorithm has some difficulty finding the correct degeneracy. This means that
gs[0,2] may return two non-degenerate eigenstates instead of the (correct) two
degenerate ground states. This is a well-known problem that can be circumvented
by calculating more eigenstates.
• A problem involving N spin-S systems leads to matrices of size (2S + 1)^N ×
(2S + 1)^N. This scaling quickly becomes very problematic (even if we use sparse
matrices) and is at the center of why quantum mechanics is difficult. Imagine
a system composed of N = 1000 spins S = 1/2: its state vector is a list of
8 The attached Mathematica code uses N = 14 instead, since calculations with N = 20 take a long
time.

2^1000 ≈ 1.07 × 10^301 complex numbers! Comparing this to the fact that there
are only about 10^80 particles in the universe, we conclude that such a state
vector could never be written down and therefore the Hilbert space method of
quantum mechanics we are using here is fundamentally flawed. But as this is an
introductory course, we will stick to this classical matrix-mechanics formalism
and let the computer bear the weight of its complexity. Keep in mind, though,
that this is not a viable strategy for large systems: each doubling of computer
capacity only allows us to add a single spin to the system, which, by Moore's
law, means we can add roughly one spin every two years.9
There are alternative formulations of quantum mechanics, notably the path-
integral formalism, which partly circumvent this problem; but the computational
difficulty is not eliminated, it is merely shifted. Modern developments such as
tensor networks10 try to limit the accessible Hilbert space by restricting calcula-
tions to a subspace where the entanglement between particles is bounded. This
makes sense since almost all states of the huge Hilbert space are so complex and
carry such complicated quantum-mechanical entanglement that (i) they would
be extremely difficult to generate with realistic Hamiltonians, and (ii) they would
decohere within a very short time.

3.4.4 Analysis of the Ground State

Energy Gap

Much of the behavior of our Ising spin chain can be seen in a plot of the energy gap,
which is the energy difference between the ground state and the first excited state.
With m = 2 we calculate the two lowest-lying energy levels and plot their energy
difference as a function of the parameter b:
With[{bmax = 3, db = 1/64, m = 2},
ListLinePlot[Table[{b, gs[b,m][[1,2]]-gs[b,m][[1,1]]},
{b, -bmax, bmax, db}]]]

Notice how the fact that the gs function remembers its own results speeds up this
calculation by a factor of 2 (see Sect. 1.6.3).

9 Moore’s law is the observation that over the history of computing hardware, the number of transis-

tors on integrated circuits doubles approximately every two years. From https://en.wikipedia.org/
wiki/Moore’s_law.
10 Matrix product states and tensor networks: https://en.wikipedia.org/wiki/Matrix_product_state.

Even in this small 20-spin simulation we can see that this gap is approximately

    E₁ − E₀ ≈ { 0             if |b| < 1,
              { (|b| − 1)/2   if |b| > 1.     (3.32)

This observation of a qualitative change in the excitation gap suggests that at b = ±1
the system undergoes a quantum phase transition (i.e., a phase transition induced
by quantum fluctuations instead of thermal fluctuations). We note that the gap of
Eq. (3.32) is independent of the particle number N and is therefore a global property
of the Ising spin ring, not a property of each individual spin (in which case it would
scale with N).
Overlap with Asymptotic States
Once a ground state |ψ_b⟩ has been calculated, we compute its overlap with the
asymptotically known states using scalar products. Notice that for b = 0 we calculate
the scalar products with the states (|ψ_{0↑}⟩ ± |ψ_{0↓}⟩)/√2 as they are the approximate
ground states for |b| ≪ 1.
With[{bmax = 3, db = 1/64, m = 2},
  ListLinePlot[
    Table[{{b, Abs[gsminusinf.gs[b,m][[2,1]]]^2},
           {b, Abs[gsplusinf.gs[b, m][[2,1]]]^2},
           {b, Abs[((gs0up-gs0dn)/Sqrt[2]).gs[b,m][[2,1]]]^2},
           {b, Abs[((gs0up+gs0dn)/Sqrt[2]).gs[b,m][[2,1]]]^2},
           {b, Abs[((gs0up-gs0dn)/Sqrt[2]).gs[b,m][[2,1]]]^2 +
               Abs[((gs0up+gs0dn)/Sqrt[2]).gs[b,m][[2,1]]]^2}},
          {b, -bmax, bmax, db}] //Transpose]]

Observations:
• The overlap |⟨ψ_b|ψ_{−∞}⟩|² (red) approaches 1 as b → −∞.
• The overlap |⟨ψ_b|ψ_{+∞}⟩|² (green) approaches 1 as b → +∞.
• The overlap |⟨ψ_b| (|ψ_{0↑}⟩ − |ψ_{0↓}⟩)/√2 |² (cyan) is mostly negligible.
• The overlap |⟨ψ_b| (|ψ_{0↑}⟩ + |ψ_{0↓}⟩)/√2 |² (orange) approaches 1 as b → 0.
• The sum of these last two, |⟨ψ_b| (|ψ_{0↑}⟩ − |ψ_{0↓}⟩)/√2 |² + |⟨ψ_b| (|ψ_{0↑}⟩ + |ψ_{0↓}⟩)/√2 |² = |⟨ψ_b|ψ_{0↑}⟩|² +
|⟨ψ_b|ψ_{0↓}⟩|² (thin black), approaches 1 as b → 0 and is less prone to numerical
noise.
• If you redo this calculation with an odd number of spins, you may find different
overlaps with the (|ψ_{0↑}⟩ ± |ψ_{0↓}⟩)/√2 asymptotic states. Their sum, however, drawn in
black, should be insensitive to the parity of N.
• For |b| ≲ 0.2 the excitation gap (see above) is so small that the calculated ground-
state eigenvector is no longer truly the ground state but becomes mixed with the
first excited state due to numerical inaccuracies. This leads to the jumps in the
orange and cyan curves (notice, however, that their sum, shown in black, is
stable). If you redo this calculation with larger values for m, you may get better
results.
Magnetization

Studying the ground state coefficients list directly is of limited use because of the
large amount of information contained in its numerical representation. We gain more
insight by studying specific observables, for example the magnetizations ⟨Ŝ_x^(k)⟩, ⟨Ŝ_y^(k)⟩,
and ⟨Ŝ_z^(k)⟩. We add the following definition to the With[] clause in In[350]:
(* spin components expectation values *)
Clear[mx,my,mz];
mx[b_?NumericQ, m_Integer /; m >= 1, k_Integer] :=
mx[b, m, k] = With[{g = gs[b,m][[2,1]]},
Re[Conjugate[g].(sx[S, n, Mod[k, n, 1]].g)]];
my[b_?NumericQ, m_Integer /; m >= 1, k_Integer] :=
my[b, m, k] = With[{g = gs[b,m][[2,1]]},
Re[Conjugate[g].(sy[S, n, Mod[k, n, 1]].g)]];
mz[b_?NumericQ, m_Integer /; m >= 1, k_Integer] :=
mz[b, m, k] = With[{g = gs[b,m][[2,1]]},
Re[Conjugate[g].(sz[S, n, Mod[k, n, 1]].g)]];
]

In our transverse Ising model only the x-component of the magnetization is nonzero.
Due to the translational symmetry of the system we can look at the magnetization of
any spin, for example the first one (k = 1): m_x(b) (blue) and m_z(b) (orange, non-zero
only due to numerical inaccuracies)

We see that in the phases of large |b|, the spins are almost entirely polarized, while
in the phase |b| < 1 the x-magnetization is roughly proportional to b.
Spin–Spin Fluctuation Correlations

Quantum-mechanical spins always fluctuate around their mean direction. In the
example of Q3.8, the state |S, S⟩ points on average along the +z direction in the
sense that ⟨Ŝ⟩ = ⟨S, S|Ŝ|S, S⟩ = {0, 0, S}; but it fluctuates away from this axis as
⟨Ŝ_x²⟩ = ⟨Ŝ_y²⟩ = S/2.
By introducing the fluctuation operator δŜ = Ŝ − ⟨Ŝ⟩, we can interpret spin
fluctuations through the expectation values ⟨δŜ⟩ = {0, 0, 0} (fluctuations always
average to zero) and ⟨(δŜ)²⟩ = ⟨δŜ · δŜ⟩ = ⟨Ŝ · Ŝ⟩ − ⟨Ŝ⟩ · ⟨Ŝ⟩ = S(S + 1) − ‖⟨Ŝ⟩‖².
Since the spin magnetization has length 0 ≤ ‖⟨Ŝ⟩‖ ≤ S, these fluctuations satisfy
S ≤ ⟨(δŜ)²⟩ ≤ S(S + 1): they are positive for every spin state.
When two (or more) spins are present, their quantum-mechanical fluctuations can
become correlated. We quantify such spin–spin fluctuation correlations between two
spins k and k′ with the measure

    C_{k,k′} = ⟨δŜ^(k) · δŜ^(k′)⟩ = ⟨Ŝ^(k) · Ŝ^(k′)⟩ − ⟨Ŝ^(k)⟩ · ⟨Ŝ^(k′)⟩,     (3.33)

which has the form of a statistical covariance.11 For any spin length S (assuming
S^(k) = S^(k′) = S), the first term of Eq. (3.33) can be written as

    ⟨Ŝ^(k) · Ŝ^(k′)⟩ = [⟨(Ŝ^(k) + Ŝ^(k′))²⟩ − ⟨(Ŝ^(k))²⟩ − ⟨(Ŝ^(k′))²⟩]/2
                     = (1/2)⟨(Ŝ^(k) + Ŝ^(k′))²⟩ − S(S + 1),     (3.34)

11 See https://en.wikipedia.org/wiki/Covariance.

which allows us to predict its expectation value as a function of the total-spin quan-
tum number describing the two spins-S. As this quantum number can be anywhere
between 0 and 2S, we have 0 ≤ ⟨(Ŝ^(k) + Ŝ^(k′))²⟩ ≤ 2S(2S + 1). This expectation
value is not restricted to integer values. As a result we make the following observa-
tions:
• −S(S + 1) ≤ C_{k,k′} ≤ S²: spin fluctuations can be correlated (C_{k,k′} > 0), anti-
correlated (C_{k,k′} < 0), or uncorrelated (C_{k,k′} = 0).
• The strongest correlations C_{k,k′} = S² are found when the two spins-S form a
joint spin-2S and at the same time are unaligned (⟨Ŝ^(k)⟩ · ⟨Ŝ^(k′)⟩ = 0).
• The strongest anti-correlations C_{k,k′} = −S(S + 1) are found when the two spins-
S form a joint spin-0 (i.e., a spin-singlet). In this case, the magnetizations always
vanish: ⟨Ŝ^(k)⟩ = ⟨Ŝ^(k′)⟩ = {0, 0, 0}.
For the specific case S = 1/2, which we use in the present calculations, two spins
can form a joint singlet (total spin 0; ⟨(Ŝ^(k) + Ŝ^(k′))²⟩ = 0), a joint triplet (total spin
1; ⟨(Ŝ^(k) + Ŝ^(k′))²⟩ = 2), or a mixture of these (0 ≤ ⟨(Ŝ^(k) + Ŝ^(k′))²⟩ ≤ 2), and the
correlation is restricted to the values −3/4 ≤ C_{k,k′} ≤ +1/4 for all states. Specific cases
are:
• In the pure joint singlet state (|↑↓⟩ − |↓↑⟩)/√2 the correlation is precisely C_{k,k′} = −3/4. A
fluctuation of one spin implies a counter-fluctuation of the other in order to keep
them anti-aligned and in a spin-0 joint state. Remember that the spin monogamy
theorem states that if spins k and k′ form a joint singlet, then both must be
uncorrelated with all other spins in the system.
• In a pure joint triplet state, i.e., any mixture of the states |↑↑⟩, |↓↓⟩, and (|↑↓⟩ + |↓↑⟩)/√2,
the correlation is 0 ≤ C_{k,k′} ≤ +1/4. A fluctuation of one spin implies a similar
fluctuation of the other in order to keep them aligned and in a spin-1 joint state.
• The maximum correlation C_{k,k′} = +1/4 is reached for unaligned triplet states, i.e.,
when ⟨Ŝ^(k)⟩ · ⟨Ŝ^(k′)⟩ = 0. Examples include the states (|↑↑⟩ + |↓↓⟩)/√2, (|↑↑⟩ − |↓↓⟩)/√2, and
(|↑↓⟩ + |↓↑⟩)/√2.
• In the fully parallel triplet states |↑↑⟩ and |↓↓⟩, the magnetizations are aligned but
their fluctuations are uncorrelated: C_{k,k′} = 0, and hence ⟨Ŝ^(k) · Ŝ^(k′)⟩ = ⟨Ŝ^(k)⟩ ·
⟨Ŝ^(k′)⟩.
In order to estimate these spin fluctuation correlations, we add the following definition
to the With[] clause in In[350]:

(* spin-spin correlation operator *)


Clear[Cop];
Cop[k1_Integer, k2_Integer] := Cop[k1, k2] =
With[{q1 = Mod[k1,n,1], q2 = Mod[k2,n,1]},
sx[S,n,q1].sx[S,n,q2] + sy[S,n,q1].sy[S,n,q2]
+ sz[S,n,q1].sz[S,n,q2]];
(* spin-spin correlations *)
Clear[c];
c[b_?NumericQ,m_Integer/;m>=1,{k1_Integer,k2_Integer}] :=
c[b,m,{k1,k2}] = With[{g = gs[b,m][[2,1]]},
Re[Conjugate[g].(Cop[k1,k2].g)]-(mx[b,m,k1]*mx[b,m,k2]
+my[b,m,k1]*my[b,m,k2]+mz[b,m,k1]*mz[b,m,k2])];
]
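The singlet/triplet values quoted above (C = −3/4, 0, and +1/4) can be verified directly for two spin-1/2 with a short Python/NumPy sketch, independent of the book's Mathematica code:

```python
import numpy as np

sx1 = np.array([[0, 0.5], [0.5, 0]])
sy1 = np.array([[0, -0.5j], [0.5j, 0]])
sz1 = np.array([[0.5, 0], [0, -0.5]])
id2 = np.eye(2)

def corr(psi):
    # C_{1,2} = <S1.S2> - <S1>.<S2> for a normalized two-spin state psi, Eq. (3.33)
    C = 0.0
    for s in (sx1, sy1, sz1):
        s1, s2 = np.kron(s, id2), np.kron(id2, s)
        C += np.vdot(psi, s1 @ (s2 @ psi)).real
        C -= np.vdot(psi, s1 @ psi).real * np.vdot(psi, s2 @ psi).real
    return C

up, dn = np.array([1.0, 0]), np.array([0, 1.0])
singlet = (np.kron(up, dn) - np.kron(dn, up)) / np.sqrt(2)
cat = (np.kron(up, up) + np.kron(dn, dn)) / np.sqrt(2)
print(corr(singlet), corr(np.kron(up, up)), corr(cat))   # -3/4, 0, +1/4 (up to rounding)
```

The same function applied to the simulated ground-state vectors would reproduce the C_δ curves, at the cost of building the operators for the full ring.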

Since our spin ring is translationally invariant, we can simply plot Cδ = C1,1+δ : for
N = 20 and δ = 1 . . . 10 (top to bottom),

Observations:
• The spin fluctuations are maximally correlated (C = +1/4) for b = 0, in the ferro-
magnetic phase. They are all either pointing up or pointing down, so every spin
is correlated with every other spin; keep in mind that the magnetization vanishes
at the same time (Sect. 3.4.4). It is only the spin–spin interactions that correlate
the spins’ directions and therefore their fluctuations.
• The spin fluctuations are uncorrelated (C → 0) for b → ±∞, in the paramag-
netic phases. They are all pointing in the +x direction for b ≫ 1 or in the −x
direction for b ≪ −1, but they are doing so in an independent way and would
keep pointing in that direction even if the spin–spin interactions were switched
off. This means that the fluctuations of the spins’ directions are uncorrelated.
Entropy of Entanglement
We know now that in the limits b → ±∞ the spins are polarized (magnetized) but
their fluctuations are uncorrelated, while close to b = 0 they are unpolarized (unmag-
netized) but their fluctuations are maximally correlated. Here we quantify these cor-
relations with the entropy of entanglement, which measures the entanglement of a
single spin with the rest of the spin chain.
In a system composed of two subsystems A and B, the entropy of entanglement is
defined as the von Neumann entropy of the reduced density matrix (see Sect. 2.4.3),

    S_AB = −Tr(ρ̂_A log₂ ρ̂_A) = −Σ_i λ_i log₂ λ_i,     (3.35)

where the λi are the eigenvalues of ρ̂A (or of ρ̂B ; the result is the same). Care must
be taken with the case λi = 0: we find limλ→0 λ log2 λ = 0. For this we define the
function
s[0|0.] = 0;
s[x_] = -x*Log[2, x];

that uses Mathematica’s pattern matching to separate out the special case x = 0. Note
that we use an alternative pattern12 0|0. that matches both an analytic zero 0 and
a numeric zero 0., which Mathematica distinguishes carefully.13
We define the entropy of entanglement of the first spin with the rest of the spin
ring using the definition of In[257], tracing out the last (2S + 1) N −1 degrees of
freedom and leaving only the first 2S + 1 degrees of freedom of the first spin:
EE[S_?SpinQ, \[Psi]_] :=
Total[s /@ Re[Eigenvalues[traceout[\[Psi], -Length[\[Psi]]/(2S+1)]]]]
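The same entropy can be computed in Python via the Schmidt decomposition: the squared singular values of the reshaped state vector are the eigenvalues λ_i of ρ̂_A. A sketch (not from the book), checked on a Bell state (1 bit of entanglement) and a product state (0):

```python
import numpy as np

def entanglement_entropy(psi, dA):
    # von Neumann entropy of the reduced density matrix of the first,
    # dA-dimensional subsystem: eigenvalues of rho_A = squared Schmidt values
    lam = np.linalg.svd(psi.reshape(dA, -1), compute_uv=False)**2
    lam = lam[lam > 1e-12]          # implements lim_{x->0} x*log2(x) = 0
    return float(-np.sum(lam * np.log2(lam)))

up, dn = np.array([1.0, 0]), np.array([0, 1.0])
bell = (np.kron(up, up) + np.kron(dn, dn)) / np.sqrt(2)
print(entanglement_entropy(bell, 2))             # approximately 1
print(entanglement_entropy(np.kron(up, dn), 2))  # approximately 0
```

Cutting off eigenvalues below a small threshold plays the same role as the special-case pattern s[0|0.] = 0 in the Mathematica code.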

Observations:
• Entanglement entropies of the known asymptotic ground states:
EE[1/2, (gs0up+gs0dn)/Sqrt[2]]
1
EE[1/2, (gs0up-gs0dn)/Sqrt[2]]
1
EE[1/2, gsplusinf]
0
EE[1/2, gsminusinf]
0

• Entanglement entropy as a function of b: again the calculation is numerically
difficult around b ≈ 0 because of the quasi-degeneracy.
With[{bmax = 3, db = 1/64, m = 2},
  ListLinePlot[Table[{b, EE[1/2, gs[b,m][[2,1]]]},
    {b, -bmax, bmax, db}], PlotRange -> {0, 1}]]

Notice that the quantum phase transitions at b = ±1 are not visible in this plot.

12 See https://reference.wolfram.com/language/tutorial/PatternsInvolvingAlternatives.html.
13 Experiment: 0==0. yields True (testing for semantic identity), whereas 0===0. yields False

(testing for symbolic identity).



3.4.5 Exercises

Q3.15 For S = 1/2, what is the largest value of N for which you can calculate the
ground state of the transverse Ising model at the critical point b = 1?
Q3.16 Study the transverse Ising model with S = 1:
1. At which values of b do you find quantum phase transitions?
2. Characterize the ground state in terms of magnetization, spin–spin corre-
lations, and entanglement entropy.
Q3.17 Study the transverse XY model for S = 1/2:

    Ĥ = −(b/2) Σ_{k=1}^{N} Ŝ_z^(k) − Σ_{k=1}^{N} (Ŝ_x^(k) Ŝ_x^(k+1) + Ŝ_y^(k) Ŝ_y^(k+1))     (3.36)

1. Guess the shape of the ground states for b → ±∞ [notice that the first term
   in the Hamiltonian of Eq. (3.36) is in the z-direction!] and compare to the
   numerical calculations.
2. At which values of b do you find quantum phase transitions?
3. Characterize the ground state in terms of magnetization, spin–spin corre-
lations, and entanglement entropy.
Q3.18 Study the Heisenberg model for S = 1/2:

    Ĥ = −(b/2) Σ_{k=1}^{N} Ŝ_z^(k) − Σ_{k=1}^{N} Ŝ^(k) · Ŝ^(k+1)     (3.37)

1. Guess the shape of the ground states for b → ±∞ [notice that the first term
   in the Hamiltonian of Eq. (3.37) is in the z-direction!] and compare to the
   numerical calculations.
2. What is the ground-state degeneracy for b = 0?
3. At which values of b do you find quantum phase transitions?
4. Characterize the ground state in terms of magnetization, spin–spin corre-
lations, and entanglement entropy.
Q3.19 Consider two spin-1/2 particles in the triplet state |ψ⟩ = |↑↑⟩. Subsystem A
      is the first spin, and subsystem B is the second spin.
1. What is the density matrix ρ̂ AB of this system?
2. What is the reduced density matrix ρ̂ A of subsystem A (the first spin)? Is
this a pure state? If yes, what state?
3. What is the reduced density matrix ρ̂ B of subsystem B (the second spin)?
Is this a pure state? If yes, what state?
4. Calculate the von Neumann entropies of ρ̂ AB , ρ̂ A , and ρ̂ B .

Q3.20 Consider two spin-1/2 particles in the singlet state |ψ⟩ = (|↑↓⟩ − |↓↑⟩)/√2. Subsys-
      tem A is the first spin, and subsystem B is the second spin.
1. What is the density matrix ρ̂ AB of this system?
2. What is the reduced density matrix ρ̂ A of subsystem A (the first spin)? Is
this a pure state? If yes, what state?
3. What is the reduced density matrix ρ̂ B of subsystem B (the second spin)?
Is this a pure state? If yes, what state?
4. Calculate the von Neumann entropies of ρ̂ AB , ρ̂ A , and ρ̂ B .

3.5 Coupled Spin Systems: Quantum Circuits
    [Supplementary Material 5]

The computational structure developed so far in this chapter can be used to simulate
quantum circuits, such as they are used to run quantum algorithms leading all the
way to quantum computers. In its simplest form, a quantum circuit contains a set of
N spin-1/2 quantum objects called qubits, on which a sequence of operations called
quantum gates is executed. In analogy to classical binary logic, the basis states of
the qubits’ Hilbert space are usually denoted as |0⟩ (replacing the spin-1/2 state |↑⟩)
and |1⟩ (replacing |↓⟩).
In this section, we go through the steps of assembling quantum circuits and sim-
ulating their behavior on a classical computer. Naturally, the matrix representation
of quantum gates and circuits constructed here is neither efficient nor desirable for
building an actual quantum computer. It is merely useful for acquiring a detailed
understanding of the workings of quantum circuits and algorithms.
In what follows, we adhere strictly to Chapter 5 of Nielsen&Chuang,14 which
provides many more details of the calculations, as well as further reading for the
interested student.

3.5.1 Quantum Gates

Any quantum circuit can be constructed from a set of simple building blocks, similarly
to a classical digital circuit. These building blocks are canonical quantum gates,15 of
which we implement a useful subset here.
Single-Qubit Gates
Single-qubit gates act on one specific qubit in a set:

14 Michael A. Nielsen and Isaac L. Chuang: Quantum Computation and Quantum Information, 10th

Anniversary Edition, Cambridge University Press, Cambridge, UK (2010).


15 See
https://en.wikipedia.org/wiki/Quantum_logic_gate.

• The Pauli-X gate X acts like σ̂_x = |1⟩⟨0| + |0⟩⟨1| on the desired qubit, and
has no effect on all other qubits. A single-qubit input state |ψ_in⟩ entering the
gate from the left is transformed into the output state |ψ_out⟩ = σ̂_x|ψ_in⟩ exiting
the gate towards the right.
• The Pauli-Y gate Y acts like σ̂_y = i|1⟩⟨0| − i|0⟩⟨1| on the desired qubit, and
has no effect on all other qubits.
• The Pauli-Z gate Z acts like σ̂_z = |0⟩⟨0| − |1⟩⟨1| on the desired qubit, and
has no effect on all other qubits.
• The Hadamard gate H acts like (σ̂_x + σ̂_z)/√2 = (|0⟩⟨0| + |0⟩⟨1| + |1⟩⟨0| − |1⟩⟨1|)/√2 on the desired
qubit, and has no effect on all other qubits.
To implement these single-qubit gates in a general way, we proceed as in In[331] by
defining a matrix that represents the operator â acting on the kth qubit in a set of n
qubits:
op[n_Integer, k_Integer, a_] /; 1<=k<=n && Dimensions[a]=={2,2} :=
  KroneckerProduct[IdentityMatrix[2^(k-1), SparseArray],
                   a,
                   IdentityMatrix[2^(n-k), SparseArray]]

This allows us to define the single-qubit Pauli and Hadamard gates with
{id, \[Sigma]x, \[Sigma]y, \[Sigma]z} = Table[SparseArray[PauliMatrix[i]], {i, 0, 3}];
X[n_Integer, k_Integer] /; 1<=k<=n := op[n, k, \[Sigma]x]
Y[n_Integer, k_Integer] /; 1<=k<=n := op[n, k, \[Sigma]y]
Z[n_Integer, k_Integer] /; 1<=k<=n := op[n, k, \[Sigma]z]
H[n_Integer, k_Integer] /; 1<=k<=n := op[n, k, (\[Sigma]x+\[Sigma]z)/Sqrt[2]]

as well as the corresponding rotation operators R̂_x(φ) = (1+e^{iφ})/2 · 1 + (1−e^{iφ})/2 · σ̂_x = e^{iφ/2} e^{−iφσ̂_x/2}
etc., that are also known as phase gates,
RX[n_Integer, k_Integer, \[Phi]_] /; 1<=k<=n :=
op[n, k, (1+Exp[I*\[Phi]])/2*id + (1-Exp[I*\[Phi]])/2*\[Sigma]x]
RY[n_Integer, k_Integer, \[Phi]_] /; 1<=k<=n :=
op[n, k, (1+Exp[I*\[Phi]])/2*id + (1-Exp[I*\[Phi]])/2*\[Sigma]y]
RZ[n_Integer, k_Integer, \[Phi]_] /; 1<=k<=n :=
op[n, k, (1+Exp[I*\[Phi]])/2*id + (1-Exp[I*\[Phi]])/2*\[Sigma]z]
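These gate definitions can be sanity-checked outside Mathematica; the following Python/NumPy sketch (not from the book) verifies that the Hadamard gate is self-inverse and that the phase-gate form of R̂_z(φ) equals diag(1, e^{iφ}), i.e. e^{iφ/2} e^{−iφσ̂_z/2}:

```python
import numpy as np

sx1 = np.array([[0, 1], [1, 0]])
sz1 = np.array([[1, 0], [0, -1]])
had = (sx1 + sz1) / np.sqrt(2)    # single-qubit Hadamard gate

def RZ(phi):
    # (1+e^{i phi})/2 * 1 + (1-e^{i phi})/2 * sigma_z, as in the text
    return (1 + np.exp(1j*phi))/2 * np.eye(2) + (1 - np.exp(1j*phi))/2 * sz1

print(np.allclose(had @ had, np.eye(2)))                   # True: H^2 = 1
phi = 0.7
print(np.allclose(RZ(phi), np.diag([1, np.exp(1j*phi)])))  # True: phase gate
print(np.allclose(RZ(np.pi), sz1))                         # True: RZ(pi) = Z
```

The analogous checks for R̂_x and R̂_y follow by substituting the corresponding Pauli matrix.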

Two-Qubit Gates
Interesting quantum circuits require operations that involve more than one qubit.
The SWAP gate exchanges the state of qubits j and k in a set of n qubits:
j ×
k ×
Without going through complicated considerations over basis-set indices, we con-
struct it through the definition SWAP^(jk) = (1^(j) ⊗ 1^(k) + σ̂_x^(j) ⊗ σ̂_x^(k) + σ̂_y^(j) ⊗ σ̂_y^(k) + σ̂_z^(j) ⊗ σ̂_z^(k))/2
and building on the above Pauli gates:
SWAP[n_Integer, {j_Integer, k_Integer}] /; 1<=j<=n && 1<=k<=n && j!=k :=
  (IdentityMatrix[2^n, SparseArray] +
   X[n,j].X[n,k] + Y[n,j].Y[n,k] + Z[n,j].Z[n,k])/2

The matrix representation of a two-qubit SWAP takes on the familiar form
SWAP[2, {1,2}] //Normal
{{1, 0, 0, 0},
{0, 0, 1, 0},
{0, 1, 0, 0},
{0, 0, 0, 1}}

The square root of the SWAP gate is also sometimes used, and is defined similarly:
SQRTSWAP[n_Integer, {j_Integer, k_Integer}] /; 1<=j<=n && 1<=k<=n && j!=k :=
  (3+I)/4 * IdentityMatrix[2^n, SparseArray] +
  (1-I)/4 * (X[n,j].X[n,k] + Y[n,j].Y[n,k] + Z[n,j].Z[n,k])
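A quick Python/NumPy cross-check (not part of the book's code) confirms that this expression really squares to SWAP:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0])
XX, YY, ZZ = np.kron(X, X), np.kron(Y, Y), np.kron(Z, Z)

swap = (np.eye(4) + XX + YY + ZZ) / 2                     # two-qubit SWAP
sqrtswap = (3 + 1j)/4 * np.eye(4) + (1 - 1j)/4 * (XX + YY + ZZ)

print(np.allclose(sqrtswap @ sqrtswap, swap))             # True
```

The factor (1−i)/4 distributes a phase i over the singlet subspace, which is why squaring restores the ±1 eigenvalues of SWAP.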

To define the controlled-NOT or CNOT gate, we first make a general definition for
controlled gates. The n-qubit operator CTRL[n,\[Lambda],A] acts like the operator Â if all
qubits in the list λ = {i₁, i₂, …, i_k} are in the |1⟩ state, and has no action (acts like
the identity operator on n qubits) if any of the qubits in the list λ are in the |0⟩ state:

    CTRL = [Π_{j=1}^{k} |1⟩⟨1|^(i_j)] · Â + [1 − Π_{j=1}^{k} |1⟩⟨1|^(i_j)] · 1
         = 1 + [Π_{j=1}^{k} |1⟩⟨1|^(i_j)] · (Â − 1)     (3.38)
Its circuit representation is
i1 •
i2 •
··· ···
ik •

· · · Â · · ·

The bracket in the last expression of Eq. (3.38) is constructed with Apply[Dot,
op[n,#,P1]&/@\[Lambda]], which first constructs a list of projection operators |1⟩⟨1|^(i_j) for
the control qubits, and then applies the Dot operator to assemble them into the
product Π_{j=1}^{k} |1⟩⟨1|^(i_j).

P0 = (id + \[Sigma]z)/2 //SparseArray; (* qubit projector |00| *)


P1 = (id - \[Sigma]z)/2 //SparseArray; (* qubit projector |11| *)
CTRL[n_Integer, \[Lambda]_ /; VectorQ[\[Lambda],IntegerQ], A_] /;
(Unequal@@\[Lambda]) && Min[\[Lambda]]>=1 && Max[\[Lambda]]<=n && Dimensions[A]=={2ˆn,2ˆn} :=
IdentityMatrix[2ˆn, SparseArray] +
Apply[Dot, op[n,#,P1]&/@\[Lambda]].(A - IdentityMatrix[2ˆn, SparseArray])
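A NumPy sketch of the same projector construction of Eq. (3.38); the helpers op and ctrl below are illustrative stand-ins for the Mathematica op and CTRL functions, not part of the book's code:

```python
import numpy as np

id2 = np.eye(2)
sx = np.array([[0., 1.], [1., 0.]])
P1 = np.array([[0., 0.], [0., 1.]])      # qubit projector |1><1|

def op(n, k, A):
    # place the single-qubit operator A on qubit k of n qubits (k = 1..n)
    M = np.array([[1.]])
    for q in range(1, n + 1):
        M = np.kron(M, A if q == k else id2)
    return M

def ctrl(n, controls, A):
    # Eq. (3.38): act with A if all control qubits are |1>, else do nothing
    proj = np.eye(2**n)
    for i in controls:
        proj = proj @ op(n, i, P1)
    return np.eye(2**n) + proj @ (A - np.eye(2**n))

# qubit 1 controlling a NOT on qubit 2 reproduces the CNOT permutation matrix
CNOT = ctrl(2, [1], op(2, 2, sx))
assert np.allclose(CNOT, np.eye(4)[[0, 1, 3, 2]])
# an empty control list makes the gate unconditional
assert np.allclose(ctrl(2, [], op(2, 2, sx)), op(2, 2, sx))
```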

With this definition, the CNOT operator CNOT^(jk) = |0⟩⟨0|^(j) ⊗ 1^(k) + |1⟩⟨1|^(j) ⊗ σ̂x^(k),

[circuit diagram: a control dot (•) on the wire of qubit j, joined by a vertical line to an ⊕ on the wire of qubit k]

is simply the CTRL operator with a single element in the list λ = {j} and a single-qubit Â = σ̂x^(k) operator,
3.5 Coupled Spin Systems: Quantum Circuits … 83

CNOT[n_Integer, j_Integer -> k_Integer] /; 1<=j<=n && 1<=k<=n && j!=k :=


CTRL[n, {j}, op[n, k, \[Sigma]x]]

Notice that here we use the notation CNOT[n, j->k] to indicate that qubit j
controls qubit k: this arrow notation -> is purely for syntactic beauty and has no
further effects (it is a pattern like any other, with no unintended side effects). The
matrix representation of a two-qubit CNOT takes on the familiar form
CNOT[2, 1->2] //Normal
{{1, 0, 0, 0},
{0, 1, 0, 0},
{0, 0, 0, 1},
{0, 0, 1, 0}}

Three-Qubit Gates
For completeness, we define three-qubit gates that are sometimes useful in the construction of general quantum circuits.
The CCNOT gate or Toffoli gate is a controlled-NOT gate CCNOT[n,{i,j}->k]
with two controlling qubits, i and j, and is defined in analogy to the CNOT gate:
[circuit diagram: control dots (•) on the wires of qubits i and j, joined by a vertical line to an ⊕ on the wire of qubit k]
CCNOT[n_Integer, {i_Integer, j_Integer} -> k_Integer] /;
1<=i<=n && 1<=j<=n && 1<=k<=n && Unequal[i,j,k] :=
CTRL[n, {i,j}, op[n, k, \[Sigma]x]]

The controlled-SWAP gate or Fredkin gate CSWAP[n, i->{j,k}] conditionally swaps two qubits, j and k:

[circuit diagram: a control dot (•) on the wire of qubit i, joined by a vertical line to crosses (×) on the wires of qubits j and k]
CSWAP[n_Integer, i_Integer -> {j_Integer, k_Integer}] /;
1<=i<=n && 1<=j<=n && 1<=k<=n && Unequal[i,j,k] :=
CTRL[n, {i}, SWAP[n, {j, k}]]

3.5.2 A Simple Quantum Circuit

As a simple example, we study the quantum circuit

[circuit diagram: qubit 1 starts in |0⟩ and passes through a Hadamard gate H followed by a control dot (•); qubit 2 starts in |0⟩ and receives the controlled ⊕]
The unitary operation corresponding to this circuit is a Hadamard gate on qubit 1,
followed by a controlled-NOT gate where qubit 1 controls the inversion of qubit 2.
In Mathematica this gate sequence needs to be written from right to left, because the
gates are represented by matrices that will be applied to a state vector on their right:
84 3 Spin and Angular Momentum

S = CNOT[2, 1->2] . H[2, 1];


Normal[S]
{{1/Sqrt[2], 0, 1/Sqrt[2], 0},
{0, 1/Sqrt[2], 0, 1/Sqrt[2]},
{0, 1/Sqrt[2], 0, -1/Sqrt[2]},
{1/Sqrt[2], 0, -1/Sqrt[2], 0}}

The matrix representation of Out[381] refers to the two-qubit basis set B₂ = {|00⟩, |01⟩, |10⟩, |11⟩}, which we can inspect for any number of qubits with
B[n_Integer /; n>=1] := Tuples[{0, 1}, n]
B[2]
{{0,0}, {0,1}, {1,0}, {1,1}}
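The same label list can be generated in Python with itertools.product (shown here only as a cross-check of the ordering):

```python
from itertools import product

def basis(n):
    # all n-qubit basis labels, ordered as increasing binary numbers
    return list(product((0, 1), repeat=n))

print(basis(2))  # [(0, 0), (0, 1), (1, 0), (1, 1)]
```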

The input state of our circuit is the product state |ψin⟩ = |0⟩ ⊗ |0⟩ = |00⟩, which is the first element of B₂:
\[Psi]in = {1,0,0,0};

The output state of our circuit follows from the application of S,


\[Psi]out = S . \[Psi]in
{1/Sqrt[2], 0, 0, 1/Sqrt[2]}

Looking at the basis set B₂ we identify this output state with the maximally entangled state |ψout⟩ = (|00⟩ + |11⟩)/√2. Projective measurements on the two qubits,

[circuit diagram: the same circuit, now followed by projective measurements of qubit 1 (giving bit 1) and qubit 2 (giving bit 2)]

give 50% probability of finding the classical result “00” and 50% probability of
finding “11”, whereas the bit combinations “01” and “10” never occur:
Abs[\[Psi]out]ˆ2
{1/2, 0, 0, 1/2}
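The whole two-qubit circuit is small enough to re-derive in a few lines of NumPy (a cross-check, with the Hadamard matrix written out explicitly):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
CNOT = np.eye(4)[[0, 1, 3, 2]]                 # qubit 1 controls qubit 2
S = CNOT @ np.kron(H, np.eye(2))               # H on qubit 1 first, then CNOT

psi_in = np.array([1., 0., 0., 0.])            # |00>
psi_out = S @ psi_in                           # (|00> + |11>)/sqrt(2)

assert np.allclose(psi_out, [1/np.sqrt(2), 0, 0, 1/np.sqrt(2)])
assert np.allclose(np.abs(psi_out)**2, [0.5, 0, 0, 0.5])
```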

It is important to recognize that these four probabilities are insufficient to identify the state |ψout⟩, even if many measurements are made, because any state whose diagonal density-matrix elements match Out[386] gives these probabilities. Generally, in order to identify a two-qubit output state fully, a quantum-state tomography (QST)¹⁶ must be performed, which involves applying further phase gates (qubit rotations) before the projective measurements and measuring all sixteen (fifteen non-trivial) expectation values ⟨ψout|σ̂₁ ⊗ σ̂₂|ψout⟩ for σ̂₁, σ̂₂ ∈ {1, σ̂x, σ̂y, σ̂z}, followed by an inversion procedure to estimate the density matrix¹⁷:

16 See https://en.wikipedia.org/wiki/Quantum_tomography.
17 Equation (3.39) is a direct inversion that may not result in a positive semi-definite density matrix
if experimental noise is present. In such cases, more elaborate inversion procedures are available.

ρ̂ = (1/4) Σ_{σ̂₁∈{1,σ̂x,σ̂y,σ̂z}} Σ_{σ̂₂∈{1,σ̂x,σ̂y,σ̂z}} ⟨ψout|σ̂₁ ⊗ σ̂₂|ψout⟩ · σ̂₁ ⊗ σ̂₂

= (1/4) ·
⎡ 11+1z+z1+zz    1x−i1y+zx−izy   x1+xz−iy1−iyz   xx−ixy−iyx−yy ⎤
⎢ 1x+i1y+zx+izy  11−1z+z1−zz     xx+ixy−iyx+yy   x1−xz−iy1+iyz ⎥
⎢ x1+xz+iy1+iyz  xx−ixy+iyx+yy   11+1z−z1−zz     1x−i1y−zx+izy ⎥
⎣ xx+ixy+iyx−yy  x1−xz+iy1−iyz   1x+i1y−zx−izy   11−1z−z1+zz  ⎦    (3.39)

(abbreviating xy = ⟨ψout|σ̂x ⊗ σ̂y|ψout⟩, etc.). A full QST on n qubits requires measuring 4^n − 1 such expectation values, which makes the QST infeasible in general.
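The inversion of Eq. (3.39) can be checked numerically: for the Bell state found above, summing the sixteen Pauli expectation values reproduces the pure-state density matrix exactly (a NumPy sketch, again separate from the book's Mathematica code):

```python
import numpy as np

paulis = [np.eye(2, dtype=complex),
          np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]]),
          np.array([[1, 0], [0, -1]], dtype=complex)]

psi = np.array([1, 0, 0, 1]) / np.sqrt(2)      # Bell state (|00> + |11>)/sqrt(2)
rho_exact = np.outer(psi, psi.conj())

# Eq. (3.39): rho = (1/4) sum over all sixteen pairs of <s1 x s2> * (s1 x s2)
rho = sum(np.vdot(psi, np.kron(s1, s2) @ psi) * np.kron(s1, s2)
          for s1 in paulis for s2 in paulis) / 4

assert np.allclose(rho, rho_exact)
```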

3.5.3 Application: The Quantum Fourier Transform

The discrete classical Fourier transform¹⁸ (CFT) of a list of N complex numbers x = {x₀, x₁, …, x_{N−1}} is given by the list y = {y₀, y₁, …, y_{N−1}} with elements

y_j = (1/√N) Σ_{k=0}^{N−1} x_k e^{2πi jk/N}.    (3.40)

It can be seen as a unitary matrix operation

y = F · x    with    F_{jk} = e^{2πi jk/N}/√N.    (3.41)

With the Fast Fourier Transform (FFT) algorithm,¹⁹ the computational effort of evaluating Eq. (3.41) is of order O[N log(N)].
The discrete Quantum Fourier Transform (QFT) is precisely the same transformation, except that the vectors x and y are encoded into quantum states. For this, a quantum system with Hilbert space dimension N is described by a basis set {|0⟩, |1⟩, …, |N−1⟩}, and the states |x⟩ = Σ_{j=0}^{N−1} x_j|j⟩ and |y⟩ = Σ_{j=0}^{N−1} y_j|j⟩ are seen as related by the unitary QFT operator F̂ such that

|y⟩ = F̂|x⟩    with    ⟨j|F̂|k⟩ = F_{jk},    (3.42)

in analogy to Eq. (3.41). The idea of this section is that the QFT can be evaluated
much faster than the CFT, even though both are mathematically equivalent.
We assume that N = 2^n is an integer power of two.²⁰ The Hilbert space of n qubits has exactly 2^n = N dimensions, and therefore we use these n qubits to encode the states |x⟩ and |y⟩ in the following way. The 2^n basis states Bn = {|00…00⟩, |00…01⟩, |00…10⟩, …, |11…11⟩} are, in our usual construction

18 See https://en.wikipedia.org/wiki/Discrete_Fourier_transform.
19 See https://en.wikipedia.org/wiki/Fast_Fourier_transform.
20 For all other cases, choose n as the smallest integer ≥ log₂(N) and set x_N … x_{2^n−1} to zero.

through tensor products (Sect. 2.4.2), listed in increasing order when interpreted
as binary numbers (see In[382]). We give each basis state a new label equal
to this binary number: |00…00⟩ = |0⟩, |00…01⟩ = |1⟩, |00…10⟩ = |2⟩, …, |11…11⟩ = |2^n − 1⟩, such that the state of the first qubit is the most significant
bit (MSB) of the binary representation of the basis state’s index, and the state of the
nth qubit is the least significant bit (LSB) of the binary representation of the basis
state’s index. What follows below is a quantum circuit operating on these n qubits
that has the effect of the QFT operator F̂, as expressed in this binary basis.
The Quantum Fourier Transform circuit is assembled from single-qubit Hadamard gates and two-qubit controlled Z-phase gates, where R̂_k = R̂_z(2π/2^k) = |0⟩⟨0| + e^{2πi/2^k}|1⟩⟨1| using In[369]:

[circuit diagram: qubit i (for i = 1, …, n) passes through a Hadamard gate H followed by controlled phase gates R₂, R₃, …, R_{n+1−i}, controlled by qubits i+1, i+2, …, n respectively; a final cascade of SWAP gates reverses the qubit order]

To construct the ith dashed block consisting of a Hadamard gate on qubit i followed by n − i controlled Z-phase gates, we remember that the application of matrix operators happens from right to left, in the reverse order from that shown in the circuit diagram above. We first construct a list of the controlled RZ operators and contract it by applying Dot:
QFTblock[n_Integer, i_Integer] /; 1<=i<=n :=
Apply[Dot, Table[CTRL[n, {j}, RZ[n, i, 2\[Pi]/2ˆ(j+1-i)]], {j, n, i+1, -1}]].
H[n,i]

We assemble the n-qubit QFT operator from these dashed QFTblock blocks and
a set of SWAP operations that reverses the qubit order,
QFT[n_Integer] /; n>=1 :=
Apply[Dot, Table[SWAP[n, {i, n+1-i}], {i, 1, n/2}]].
Apply[Dot, Table[QFTblock[n, i], {i, n, 1, -1}]]

The matrix representation of this QFT operator is a 2^n × 2^n matrix with element (j, k) given by 2^{−n/2} e^{2πi jk/2^n}, precisely as expected from Eq. (3.42) with N = 2^n. We check this relation for n = 1…6 with


Table[QFT[n] == 2ˆ(-n/2)*Table[Exp[2\[Pi]*I*j*k/2ˆn], {j,0,2ˆn-1}, {k,0,2ˆn-1}],
{n, 6}] //FullSimplify
{True, True, True, True, True, True}
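The same gate-by-gate construction can be reproduced in NumPy as a cross-check (the helpers below mirror the op, CTRL, SWAP, RZ, and H definitions of the Mathematica code; all Python names are ours):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
P1 = np.diag([0., 1.]).astype(complex)                     # |1><1|

def op(n, k, A):
    # single-qubit operator A on qubit k of n qubits (k = 1..n)
    M = np.array([[1. + 0j]])
    for q in range(1, n + 1):
        M = np.kron(M, A if q == k else I2)
    return M

def ctrl(n, j, A):
    # Eq. (3.38) with a single control qubit j
    return np.eye(2**n) + op(n, j, P1) @ (A - np.eye(2**n))

def swap(n, j, k):
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Y = np.array([[0, -1j], [1j, 0]])
    Z = np.diag([1., -1.]).astype(complex)
    return (np.eye(2**n) + op(n, j, X) @ op(n, k, X)
            + op(n, j, Y) @ op(n, k, Y) + op(n, j, Z) @ op(n, k, Z)) / 2

def rz(phi):
    return np.diag([1., np.exp(1j * phi)])

def qft(n):
    U = np.eye(2**n, dtype=complex)
    for i in range(1, n + 1):          # block i: H on qubit i, then phase gates
        block = op(n, i, H)
        for j in range(i + 1, n + 1):  # qubit j controls R_(j+1-i) on qubit i
            block = ctrl(n, j, op(n, i, rz(2 * np.pi / 2**(j + 1 - i)))) @ block
        U = block @ U                  # blocks act in the order i = 1, 2, ..., n
    for i in range(1, n // 2 + 1):     # final SWAPs reverse the qubit order
        U = swap(n, i, n + 1 - i) @ U
    return U

# compare with the DFT matrix F_jk = exp(2*pi*i*j*k/2^n) / 2^(n/2)
n = 3
j, k = np.meshgrid(np.arange(2**n), np.arange(2**n), indexing="ij")
F = np.exp(2j * np.pi * j * k / 2**n) / 2**(n / 2)
assert np.allclose(qft(n), F)
```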

The resources used to construct this quantum circuit are

• n Hadamard gates,
• n(n−1)/2 controlled Z-phase gates, and
• ⌊n/2⌋ swap gates.

In the present classical simulation of quantum circuits, each quantum gate is a sparse 2^n × 2^n matrix, usually containing O(2^n) nonzero matrix elements; applying such a simulated gate to a state therefore takes O(2^n) time, which makes the simulated QFT no faster than the classical FFT, which scales as O(2^n n). However, if we can construct a physical system in which these gates can be applied in a time that scales at most polynomially with n, then the QFT is a massive improvement over the scaling of the classical FFT. The development of such physical qubit/gate systems is the focus of much ongoing scientific research.

3.5.4 Application: Quantum Phase Estimation

The QFT circuit of Sect. 3.5.3 cannot be used by itself in practice, because it requires
the preparation of an arbitrary quantum state containing an exponential number of
parameters x j , as well as a full quantum-state tomography to read out an exponential
number of parameters y j describing the final state (see Sect. 3.5.2). In this section
we study a quantum circuit that uses the QFT as a component, and circumvents these
exponential input/output bottlenecks.
Unitary matrices have eigenvalues that are of unit norm, and can be written as
e^{2πiϕ} with ϕ ∈ ℝ. The question addressed here is: given a unitary operator Ûϕ and an eigenstate |u⟩ such that Ûϕ|u⟩ = e^{2πiϕ}|u⟩, can we estimate ϕ efficiently, that is,
to t binary digits with an effort that scales polynomially with t?
The answer is yes, using the following quantum circuit that makes use of the
Quantum Fourier Transform of Sect. 3.5.3 but (i) starts with an initial product state
that can be prepared with O(t) effort, and (ii) does not require a full quantum state
tomography, but instead finishes with a simple projective measurement that takes
O(t) effort.

[circuit diagram: t qubits are each prepared in |0⟩ and passed through a Hadamard gate H; qubit i then controls an application of Ûϕ^{2^{t−i}} to the register holding |u⟩; an inverse QFT F† acts on the t qubits, which are then measured, qubit 1 giving the most significant bit (weight 2^{t−1}) and qubit t the least significant bit (weight 2⁰ = 1); the register |u⟩ is not measured]

To set up a quantum phase estimation in Mathematica, we begin by defining the unitary operator Ûϕ and its eigenstate |u⟩ with
u = {1};
U[\[Phi]_] = {{Exp[2\[Pi]*I*\[Phi]]}};

and check that they satisfy Ûϕ|u⟩ = e^{2πiϕ}|u⟩ and ⟨u|u⟩ = 1:

{U[\[Phi]].u === Eˆ(2\[Pi]*I*\[Phi])*u, Norm[u] == 1}


{True, True}

Here we use a one-dimensional quantum system: the operator Ûϕ is a 1 × 1 matrix, and the state |u⟩ is a list of length 1. More generally, the Hilbert space of the system under test (SUT) can be arbitrarily large (see Q3.22), and more complex quantum circuits can be substituted for Û in more elaborate experiments.
In order to construct the phase estimation circuit, we will also need the unit
operator acting on the SUT:
U0 = IdentityMatrix[Length[u], SparseArray];

The controlled version of the Ûϕ operator, where the ith qubit out of a set of n qubits controls the application of Ûϕ to the SUT, is Ûϕ^(i) = |0⟩⟨0|^(i) ⊗ 1 + |1⟩⟨1|^(i) ⊗ Ûϕ. We use the tensor-product techniques of Sect. 2.4.2 to couple the qubits to the
SUT:
CTRLU[n_Integer, i_Integer, \[Phi]_] /; 1<=i<=n :=
KroneckerProduct[op[n,i,P0], U0] + KroneckerProduct[op[n,i,P1], U[\[Phi]]]

The initial state of the phase estimation circuit is |ψ₀⟩ = |0⟩^{⊗t} ⊗ |u⟩. We know that the state |0⟩^{⊗t} = |00…00⟩ is the first basis state in the computational basis Bn (eigen-basis of σ̂z), and construct it with SparseArray[1->1, 2^t]. As an
example, we work with t = 4 qubits here:
t = 4;
\[Psi]0 = Flatten[KroneckerProduct[SparseArray[1->1, 2ˆt], u]] //Normal;

Applying a Hadamard gate to each qubit gives the state


\[Psi]1 = KroneckerProduct[Apply[Dot, Table[H[t, i], {i, t}]], U0] . \[Psi]0;

Applying the controlled Ûϕ^m = Û_{mϕ} operations sequentially (with m = 2^{t−i} for the gate controlled by qubit i) then gives the state
\[Psi]2[\[Phi]_] = Apply[Dot, Table[CTRLU[t, i, 2ˆ(t-i)*\[Phi]], {i, t, 1, -1}]] . \[Psi]1;

Finally, an inverse QFT yields the phase-estimation state |εϕ⟩. Remember that the
QFT is a unitary operation, and therefore its inverse is its Hermitian conjugate:
\[Epsilon][\[Phi]_] = KroneckerProduct[ConjugateTranspose[QFT[t]], U0] . \[Psi]2[\[Phi]];

We use the techniques of Sect. 2.4.3, in particular In[257], to drop the component |u⟩ at the end of the quantum circuit and find the reduced density matrix of the qubits.
The diagonal elements of this reduced density matrix are the probabilities of finding
the various basis states of Bn in a projective measurement as shown on the right of
the above circuit:
prob[\[Phi]_?NumericQ] := Re[Diagonal[traceout[\[Epsilon][N[\[Phi]]], -Length[u]]]]

The first element of prob[\[Phi]] gives the probability of measurement outcomes {0, 0, 0, 0}, that is, the probability that the qubits are in the joint state |0000⟩ = |0⟩ ⊗ |0⟩ ⊗ |0⟩ ⊗ |0⟩. The second element of prob[\[Phi]] gives the probability of measurement outcomes {0, 0, 0, 1}, that is, the probability that the qubits are in the joint state |0001⟩ = |0⟩ ⊗ |0⟩ ⊗ |0⟩ ⊗ |1⟩. And so forth: the jth element of prob[\[Phi]] gives the probability of measurement outcomes corresponding to the binary representation of j − 1.
The trick of this phase-estimation quantum circuit is that the information on ϕ is contained in the state |εϕ⟩ in a way that can be extracted from these probabilities
without doing a full quantum-state tomography. We get an idea of what this means
by looking at the probabilities for the different measurement outcomes when ϕ is an
integer multiple of 2^{−t}:
Table[prob[\[Phi]], {\[Phi], 0, 1, 2ˆ(-t)}] //Chop
{{1., 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
{0, 1., 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
{0, 0, 1., 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
{0, 0, 0, 1., 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
{0, 0, 0, 0, 1., 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
{0, 0, 0, 0, 0, 1., 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
{0, 0, 0, 0, 0, 0, 1., 0, 0, 0, 0, 0, 0, 0, 0, 0},
{0, 0, 0, 0, 0, 0, 0, 1., 0, 0, 0, 0, 0, 0, 0, 0},
{0, 0, 0, 0, 0, 0, 0, 0, 1., 0, 0, 0, 0, 0, 0, 0},
{0, 0, 0, 0, 0, 0, 0, 0, 0, 1., 0, 0, 0, 0, 0, 0},
{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1., 0, 0, 0, 0, 0},
{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1., 0, 0, 0, 0},
{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1., 0, 0, 0},
{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1., 0, 0},
{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1., 0},
{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1.},
{1., 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}}

Whenever ϕ is an integer multiple of 2^{−t} = 1/16, we find that only one basis state is
occupied, and therefore the outcomes of the projective measurements on the 4 qubits
always give the same results, with no quantum fluctuation. A single projective mea-
surement of all 4 qubits can be interpreted as a binary number j ∈ {0, 1, 2, . . . , 15}
that is related to the phase estimate as ϕ = j/16; no quantum-state tomography is
required.
What happens when ϕ is not an integer multiple of 2^{−t}? It turns out that the basis state corresponding to the nearest integer multiple of 2^{−t} will be found most
frequently in the projective measurements. For example, for ϕ = 0.2 the probabilities
for projecting |ε₀.₂⟩ into the 16 basis states are
prob[0.2]
{0., 0.01, 0.02, 0.88, 0.06, 0.01, 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.}

(rounded here to two decimals). The fourth basis state, which is |0011⟩ corresponding to ϕ = 3/16, will be found in about 88% of all experiments, and so a plurality vote
most likely yields ϕ ≈ 3/16 = 0.1875 as a fair estimate of the phase, with an upper bound on the error of 2^{−t−1} = 1/32; no quantum-state tomography required. We
extract the expected plurality-vote winner of a large number of experiments with
mostprobable[\[Phi]_?NumericQ] := (Ordering[prob[\[Phi]], -1][[1]] - 1)/2ˆt

where the Ordering function is used to give the position of the largest element:
mostprobable[0.2]
3/16

It can be shown that mostprobable[\[Phi]]==Mod[Round[\[Phi], 2^(-t)], 1].


Increasing the number of qubits t results in more precise estimates, while keeping the circuit complexity at O(t²).
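For the one-dimensional system under test, the register amplitudes can be written in closed form, which gives a quick NumPy cross-check of the probabilities quoted above (the amplitude formula follows from applying F† to the uniformly phase-wound register state, as described in the text; the function name qpe_probs is ours):

```python
import numpy as np

def qpe_probs(phi, t):
    # Register state before the inverse QFT: 2^(-t/2) * sum_k exp(2*pi*i*phi*k)|k>.
    # Applying F^dagger gives amplitudes a_m = 2^(-t) * sum_k exp(2*pi*i*k*(phi - m/2^t)).
    N = 2**t
    k = np.arange(N)
    a = np.array([np.exp(2j * np.pi * k * (phi - m / N)).sum() / N
                  for m in range(N)])
    return np.abs(a)**2

p = qpe_probs(0.2, 4)
assert abs(p.sum() - 1) < 1e-9        # a normalized probability distribution
assert p.argmax() == 3                # 3/16 is the multiple of 1/16 closest to 0.2
assert round(float(p[3]), 2) == 0.88  # matches prob[0.2] quoted above
```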

3.5.5 Exercises

Q3.21 For the output state |ψout⟩ of Out[386], calculate all expectation values necessary to fill in Eq. (3.39).
Q3.22 Multi-dimensional phase estimation: set u = {1, 1}/√2 and Ûϕ = e^{2πiϕ} · {{1, 0}, {0, 1}} (two-dimensional system under test) and show that the phase-estimation algorithm still works.
Q3.23 What happens if |u⟩ is not an eigenstate of Ûϕ? Set u = {1, 1}/√2 and Ûϕ = {{e^{2πiϕ}, 0}, {0, e^{4πiϕ}}} (two-dimensional system with two different evolution frequencies) and re-evaluate the attached Mathematica script. Plot prob[\[Phi]] for a range of frequencies ϕ using ListDensityPlot and interpret the resulting figure.
Chapter 4
Quantum Motion in Real Space

Electronic supplementary material The online version of this chapter (https://doi.org/10.1007/978-981-13-7588-0_4) contains supplementary material, which is available to authorized users.

© Springer Nature Singapore Pte Ltd. 2020
R. Schmied, Using Mathematica for Quantum Mechanics, https://doi.org/10.1007/978-981-13-7588-0_4

So far we have studied the quantum formalism in the abstract (Chap. 2) and in the context of rotational dynamics (Chap. 3). In this chapter we work with the spatial motion of point particles, which represents a kind of mechanics that is much closer to our everyday experience. Here, quantum states are called wavefunctions and depend on the spatial coordinate(s). This apparent difference to the material covered in the
previous chapters disappears when we express all wavefunctions in a basis set. We
develop numerical methods for studying spatial dynamics that stay as close to a
real-space description as quantum mechanics allows.

4.1 One Particle in One Dimension

A single particle moving in one dimension is governed by a Hamiltonian of the form

Ĥ = T̂ + V̂ (4.1)

in terms of the kinetic operator T̂ and the potential operator V̂. These operators are usually expressed in the Dirac position basis set {|x⟩}_{x∈ℝ},¹ which diagonalizes the position operator in the sense that x̂|x⟩ = x|x⟩,² is ortho-normalized ⟨x|y⟩ = δ(x − y), and complete, ∫_{−∞}^{∞} |x⟩⟨x| dx = 1. Using this Dirac basis, the explicit expressions for the operators in the Hamiltonian are

T̂ = −(ℏ²/2m) ∫_{−∞}^{∞} dx |x⟩ (d²/dx²) ⟨x|,    V̂ = ∫_{−∞}^{∞} dx |x⟩ V(x) ⟨x|,    (4.2)

where m is the particle’s mass and V(x) is its potential. Single-particle states |ψ⟩, on the other hand, are written in this basis as

|ψ⟩ = ∫_{−∞}^{∞} dx ψ(x) |x⟩,    (4.3)

where ψ(x) = ⟨x|ψ⟩ is the wavefunction.


In what follows we restrict the freedom of the particle to a domain x ∈ [0, a], where a can be very large in order to approximately describe infinite systems (example: Sect. 4.1.7). This assumes the potential to be

V(x) = { ∞ for x ≤ 0;  W(x) for 0 < x < a;  ∞ for x ≥ a }    (4.4)

1 To be exact, the Dirac position basis set spans a space that is much larger than the Hilbert space
of square-integrable smooth functions used in quantum mechanics. This can be seen by noting that
this basis set has an uncountably infinite number of elements |x⟩, while the dimension of the Hilbert
space in question is only countably infinite [see Eq. (4.5) for a countably infinite basis set]. The
underlying problem of the continuum, which quantum mechanics attempts to resolve, is discussed
with some of its philosophical origins and implications by Erwin Schrödinger in his essay “Science
and Humanism” (Cambridge University Press, 1951, ISBN 978-0521575508).
2 This eigenvalue equation is tricky: remember that x̂ is an operator, |x⟩ is a state, and x is a real number.

This restriction is necessary in order to achieve a finite representation of the system in a computer.
Exercises
Q4.1 Insert Eqs. (4.2) and (4.3) into the time-independent Schrödinger equation Ĥ|ψ⟩ = E|ψ⟩. Use the ortho-normality of the Dirac basis to derive the usual form of the Schrödinger equation for a particle’s wavefunction in 1D: −(ℏ²/2m)ψ″(x) + V(x)ψ(x) = Eψ(x).
Q4.2 Use Eq. (4.3) to show that the scalar product between two states is given by the usual formula ⟨ψ|χ⟩ = ∫_{−∞}^{∞} ψ*(x)χ(x) dx.

4.1.1 Units

In order to proceed with implementing the Hamiltonian (4.1), we first need a consistent set of units (see Sect. 1.12) in which to express length, time, mass, and energy. Of these four units, only three are independent: expressions like the classical kinetic energy E = ½mv² indicate a fixed relationship between these four units.
A popular system of units is the International System of Units (SI),³ in which this consistency is built in:
LengthUnit = Quantity["Meters"]; (* choose freely *)
TimeUnit = Quantity["Seconds"]; (* choose freely *)
MassUnit = Quantity["Kilograms"]; (* choose freely *)
EnergyUnit = MassUnit*LengthUnitˆ2/TimeUnitˆ2 //UnitConvert;

The consistency of this set of definitions is seen in In[408], making the energy
unit depend on the other units, which in turn can be chosen freely. Many other
combinations are possible, as long as this consistency remains.
Another popular choice is to additionally couple the time and energy units through
Planck’s constant , and make both dependent on the length and mass units (thus
reducing the system of units to only two degrees of freedom):
LengthUnit = Quantity["Meters"]; (* choose freely *)
MassUnit = Quantity["Kilograms"]; (* choose freely *)
TimeUnit =
MassUnit*LengthUnitˆ2/Quantity["ReducedPlanckConstant"] //UnitConvert;
EnergyUnit = Quantity["ReducedPlanckConstant"]/TimeUnit //UnitConvert;

This latter set of units is what we will be using in what follows, without restriction
of generality. We express the reduced Planck constant in these units with
\[HBar] = Quantity["ReducedPlanckConstant"]/(EnergyUnit*TimeUnit) //UnitConvert //N
1.

which is equal to unity because of our chosen coupling between energy and time
units; in other unit systems the value will be different. Note the use of //N at the end of In[413] to force the result to be a pure machine-precision number instead of a variable-precision number that tracks the accuracy of the involved physical quantities.

3 See https://en.wikipedia.org/wiki/International_System_of_Units.
To set the physical size a of the computational box, for example to a = 5 µm, we
execute
a = Quantity[5, "Micrometers"]/LengthUnit //UnitConvert //N;

and to set the particle’s mass m, for example to the neutron’s mass,
m = Quantity["NeutronMass"]/MassUnit //UnitConvert //N;

In the calculations that follow, we will not be explicit about the system of units
and the physical quantities. Instead, we will use direct dimensionless definitions such
as
a = 30; (* calculation box size in units of length *)
m = 1; (* particle mass in units of mass *)
\[HBar] = 1; (* value of \[HBar] assuming In[412] *)

These are to be replaced by In[414], In[415], and In[413] in a more concrete physical
situation.
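The ℏ-coupled unit system can also be sketched with plain numerical values (a Python illustration with CODATA constants typed in by hand instead of Mathematica’s Quantity objects):

```python
# CODATA numerical values stand in for Mathematica's Quantity objects here.
hbar = 1.054571817e-34            # J*s, reduced Planck constant
LengthUnit = 1e-6                 # m, chosen freely (1 micrometer)
MassUnit = 1.67492749804e-27      # kg, chosen freely (neutron mass)
TimeUnit = MassUnit * LengthUnit**2 / hbar    # s, fixed by the hbar coupling
EnergyUnit = hbar / TimeUnit                  # J, fixed by the hbar coupling

# consistency check: hbar expressed in these units is exactly 1
assert abs(hbar / (EnergyUnit * TimeUnit) - 1) < 1e-12
```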

4.1.2 Computational Basis Functions

In order to perform quantum-mechanical calculations of a particle moving in one dimension, we need a basis set that is more practical than the Dirac basis used to define the relevant operators and states above. Indeed, Dirac states |x⟩ are difficult
to represent in a computer because they are uncountable, densely spaced, and highly
singular.
The most generally useful basis sets for computations are the momentum basis
and the finite-resolution position basis, which we will look at in turn, and which will
be shown to be related to each other by a type-I discrete sine transform.
Momentum Basis
The simplest one-dimensional quantum-mechanical system of the type of Eq. (4.1) is the infinite square well with W(x) = 0. Its energy eigenstates φₙ(x) for n = 1, 2, 3, … satisfy the Schrödinger equation −(ℏ²/2m)φₙ″(x) = Eₙφₙ(x) (see Q4.1) and the boundary conditions φₙ(0) = φₙ(a) = 0 necessitated by Eq. (4.4). Their explicit normalized forms are

⟨x|n⟩ = φₙ(x) = √(2/a) sin(nπx/a)    (4.5)

with eigen-energies

Eₙ = n²π²ℏ²/(2ma²).    (4.6)

We know from the Sturm–Liouville theorem4 that these functions form a complete set
(see Q2.2); further, we can use Mathematica to show that they are ortho-normalized:
\[Phi][a_, n_, x_] = Sqrt[2/a]*Sin[n*\[Pi]*x/a];
Table[Integrate[\[Phi][a,n1,x]*\[Phi][a,n2,x], {x, 0, a}],
{n1, 10}, {n2, 10}] //MatrixForm

They are eigenstates of the squared momentum operator p̂² = (−iℏ d/dx)² = −ℏ² d²/dx²:

p̂²|n⟩ = (n²π²ℏ²/a²)|n⟩,    (4.7)
which we verify with
-\[HBar]ˆ2*D[\[Phi][a,n,x], {x,2}] == (nˆ2*\[Pi]ˆ2*\[HBar]ˆ2)/aˆ2*\[Phi][a,n,x]
True

This makes the kinetic operator T̂ = p̂²/(2m) diagonal in this basis:

⟨n|T̂|n′⟩ = Eₙ δ_{nn′},    T̂ = Σ_{n=1}^{∞} |n⟩Eₙ⟨n|.    (4.8)

However, in general the potential energy, and most other operators that will appear later, are difficult to express in this momentum basis.
The momentum basis of Eq. (4.5) contains a countably infinite number of basis functions, which is a great advantage over the uncountably infinite cardinality of the Dirac basis set. In practical calculations, we restrict the computational basis to n ∈ {1…n_max}, which means that we only consider physical phenomena with excitation energies below E_{n_max} = π²ℏ²n_max²/(2ma²) (see Sect. 2.1.1). Here is an example of what these basis functions look like for n_max = 10: [figure: plot of φ₁(x), …, φ₁₀(x) over 0 ≤ x ≤ a]

Using the approximate completeness of the momentum basis, Σ_{n=1}^{n_max} |n⟩⟨n| ≈ 1 (see Sect. 2.1.1), the kinetic Hamiltonian thus becomes

T̂ ≈ [Σ_{n=1}^{n_max} |n⟩⟨n|] T̂ [Σ_{n′=1}^{n_max} |n′⟩⟨n′|] = Σ_{n,n′=1}^{n_max} |n⟩⟨n|T̂|n′⟩⟨n′| = Σ_{n=1}^{n_max} |n⟩Eₙ⟨n|.    (4.9)

4 See https://en.wikipedia.org/wiki/Sturm-Liouville_theory.

We set up the kinetic Hamiltonian operator as a sparse diagonal matrix in the momen-
tum basis with
nmax = 100;
TM = SparseArray[Band[{1,1}] -> Range[nmax]ˆ2*\[Pi]ˆ2*\[HBar]ˆ2/(2*m*aˆ2)];

where n max = 100 was chosen as an example.
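A NumPy cross-check of this diagonal kinetic matrix (dense here for simplicity; the Mathematica version is sparse):

```python
import numpy as np

hbar = m = 1.0                     # dimensionless units as chosen above
a, nmax = 30.0, 100
n = np.arange(1, nmax + 1)
En = n**2 * np.pi**2 * hbar**2 / (2 * m * a**2)   # Eq. (4.6)
TM = np.diag(En)                   # dense here; the Mathematica version is sparse

assert np.isclose(TM[0, 0], np.pi**2 / (2 * a**2))   # ground-state energy E_1
assert np.isclose(TM[9, 9], 100 * TM[0, 0])          # E_n scales as n^2
```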


Finite-resolution Position Basis
Given an energy-limited momentum basis set {|n⟩}_{n=1}^{n_max} from above, we define a set of n_max equally-spaced points

x_j = j · Δ    (4.10)

for j ∈ {1…n_max}, with spacing Δ = a/(n_max + 1). These grid points fill the calculation range x ∈ [0, a] uniformly without covering the end points. We then define a new basis set as the closest possible representations of delta-functions at these points: for j ∈ {1…n_max},

|j⟩ = √Δ Σ_{n=1}^{n_max} φₙ(x_j)|n⟩.    (4.11)

The spatial wavefunctions of these basis states are

⟨x|j⟩ = ϑ_j(x) = √Δ Σ_{n=1}^{n_max} φₙ(x_j)φₙ(x).    (4.12)

Here is an example of what these position-basis functions look like for n_max = 10: [figure: plot of ϑ₁(x), …, ϑ₁₀(x) over 0 ≤ x ≤ a]

This new basis set is also ortho-normal, ⟨j|j′⟩ = δ_{jj′}, and it is strongly local in the sense that only the basis function ϑ_j(x) is nonzero at x_j, while all others vanish:

⟨x_{j′}|j⟩ = ϑ_j(x_{j′}) = δ_{jj′}/√Δ.    (4.13)

We define these basis functions in Mathematica with

nmax = 10;
\[CapitalDelta] = a/(nmax+1);
xx[j_] = j*\[CapitalDelta];
\[Theta][j_, x_] = Sqrt[\[CapitalDelta]]*Sum[\[Phi][a,n,xx[j]]*\[Phi][a,n,x], {n, nmax}];

Since the basis function ϑ_j(x) = \[Theta][j,x] is the only one which is nonzero at x_j = xx[j], and it is close to zero everywhere else (exactly zero at the other grid points x_{j′} with j′ ≠ j), we can usually make several approximations:

• If a wavefunction is given as a vector v = {v_j} in the position basis, |ψ⟩ = Σ_{j=1}^{n_max} v_j|j⟩, then by Eq. (4.13) the wavefunction is known at the grid points:

ψ(x_j) = ⟨x_j|ψ⟩ = ⟨x_j| Σ_{j′=1}^{n_max} v_{j′}|j′⟩ = Σ_{j′=1}^{n_max} v_{j′} ⟨x_j|j′⟩ = Σ_{j′=1}^{n_max} v_{j′} δ_{jj′}/√Δ = v_j/√Δ.    (4.14)

The density profile is thus given by the values of ρ(x_j) = |ψ(x_j)|² = |v_j|²/Δ. This allows for very easy plotting of wavefunctions and densities by linearly interpolating between these grid points (i.e., an interpolation of order 1):

ListLinePlot[Transpose[{Table[xx[j], {j, nmax}], Abs[v]ˆ2/\[CapitalDelta]}]]

By the truncation of the basis at n_max, the wavefunction has no frequency components faster than one half-wave per grid-point spacing, and therefore we can be sure that this linear interpolation is a reasonably accurate representation of the wavefunction ψ(x) and the density ρ(x) = |⟨x|ψ⟩|², in particular as n_max → ∞.
• An even simpler interpolation of order zero assumes that the wavefunction is constant over intervals [−Δ/2, +Δ/2] centered at each grid point. This primitive interpolation is used, for example, to calculate the Wigner quasi-probability distribution in Sect. 4.1.8.
• Similarly, if a density operator is given by ρ̂ = Σ_{j,j′=1}^{n_max} R_{j,j′} |j⟩⟨j′|, then the value of the density operator at a grid point (x_j, x_{j′}) is given by

ρ(x_j, x_{j′}) = ⟨x_j|ρ̂|x_{j′}⟩ = ⟨x_j| [Σ_{j″,j‴=1}^{n_max} R_{j″,j‴} |j″⟩⟨j‴|] |x_{j′}⟩ = Σ_{j″,j‴=1}^{n_max} R_{j″,j‴} ⟨x_j|j″⟩⟨j‴|x_{j′}⟩ = Σ_{j″,j‴=1}^{n_max} R_{j″,j‴} (δ_{j,j″}/√Δ)(δ_{j′,j‴}/√Δ) = R_{j,j′}/Δ.    (4.15)

That is, the coefficients R_{j,j′} and the density values ρ(x_j, x_{j′}) are very closely related. The diagonal elements of this expression (j = j′) give the spatial density profile.
• For any function f(x) that varies slowly (smoothly) over length scales of the grid spacing Δ, we can make the approximation

f(x)ϑ_j(x) ≈ f(x_j)ϑ_j(x).    (4.16)

This approximation becomes exact on every grid point according to Eq. (4.13), and the assumed smoothness of f(x) makes it a good estimate for any x.
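The locality relation (4.13) is a finite orthogonality property of the sine basis and can be verified numerically; here is a NumPy sketch using the definitions of Eqs. (4.5), (4.10) and (4.12):

```python
import numpy as np

a, nmax = 30.0, 10
Delta = a / (nmax + 1)
x = Delta * np.arange(1, nmax + 1)              # grid points x_j = j*Delta
n = np.arange(1, nmax + 1)
phi = np.sqrt(2/a) * np.sin(np.pi * np.outer(n, x) / a)   # phi[n-1, j-1] = phi_n(x_j)

# theta[j-1, jp-1] = theta_j(x_jp) from Eq. (4.12), evaluated on the grid
theta = np.sqrt(Delta) * (phi.T @ phi)
assert np.allclose(theta, np.eye(nmax) / np.sqrt(Delta))   # Eq. (4.13)
```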
Conversion Between Basis Sets
Within the approximation of a truncation at maximum energy E_{n_max}, we can express any wavefunction |ψ⟩ in both basis sets of Eqs. (4.5) and (4.12):

|ψ⟩ = Σ_{n=1}^{n_max} u_n|n⟩ = Σ_{j=1}^{n_max} v_j|j⟩    (4.17)

Inserting the definition of Eq. (4.11) into Eq. (4.17) we find

Σ_{n=1}^{n_max} u_n|n⟩ = Σ_{j=1}^{n_max} v_j [√Δ Σ_{n′=1}^{n_max} φ_{n′}(x_j)|n′⟩] = Σ_{n′=1}^{n_max} [√Δ Σ_{j=1}^{n_max} v_j φ_{n′}(x_j)] |n′⟩    (4.18)

and therefore, since the basis set {|n⟩} is ortho-normalized,

u_n = √Δ Σ_{j=1}^{n_max} v_j φ_n(x_j) = Σ_{j=1}^{n_max} X_{nj} v_j    (4.19)

with the basis conversion coefficients

X_{nj} = ⟨n|j⟩ = √Δ φ_n(x_j) = √[a/(n_max+1)] · √(2/a) sin(nπx_j/a) = √[2/(n_max+1)] sin[πnj/(n_max+1)].    (4.20)

The inverse transformation is found from |n⟩ = Σ_{j=1}^{n_max} ⟨j|n⟩|j⟩ inserted into Eq. (4.17), giving

v_j = Σ_{n=1}^{n_max} X_{nj} u_n    (4.21)

in terms of the same coefficients of Eq. (4.20). Thus the transformations relating
the vectors u (with components u n ) and v (with components v j ) are v = X · u and
u = X · v in terms of the same symmetric orthogonal matrix X with coefficients X n j .
We could calculate these coefficients with
X = Table[Sqrt[2/(nmax+1)]*Sin[\[Pi]*n*j/(nmax+1)], {n, nmax}, {j, nmax}] //N;

but this is not very efficient, especially for large n max .


It turns out that Eqs. (4.19) and (4.21) relate the vectors u and v by a type-I discrete
sine transform (DST-I), which Mathematica can evaluate very efficiently via a fast
Fourier transform.5 Since the DST-I is its own inverse, we can use

5 See https://en.wikipedia.org/wiki/Discrete_sine_transform and https://en.wikipedia.org/wiki/

Fast_Fourier_transform. The precise meaning of the DST-I can be seen from its
equivalent definition through a standard discrete Fourier transform of doubled length:
for a complex vector v, we can substitute FourierDST[v,1] by DST1[v_?VectorQ] :=
-I*Fourier[Join[{0},v,{0},Reverse[-v]]][[2;;Length[v]+1]].
In this sense it is the discrete Fourier transform of a list v augmented with (i) zero boundary
conditions and (ii) reflection anti-symmetry at the boundaries. Remember that the Fourier[]
transform assumes periodic boundary conditions, which are incorrect in the present setup, and
need to be modified into a DST-I.

v = FourierDST[u, 1];
u = FourierDST[v, 1];

to effect such conversions. We will see a very useful application of this transformation
when we study the time-dependent behavior of a particle in a potential (“split-step
method”, Sect. 4.1.9).
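The claim that the DST-I is its own inverse can be cross-checked outside Mathematica. A sketch in Python, assuming that SciPy's `scipy.fft.dst` with `type=1, norm='ortho'` implements the same orthonormalized DST-I as `FourierDST[v,1]`:

```python
import numpy as np
from scipy.fft import dst

rng = np.random.default_rng(1)
v = rng.normal(size=12)
u = dst(v, type=1, norm='ortho')    # v -> u, analogous to u = FourierDST[v, 1]
w = dst(u, type=1, norm='ortho')    # u -> v again: the DST-I is its own inverse
print(np.allclose(w, v))
```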
The matrix X is also useful for converting operator representations between the
basis sets: the momentum representation U and the position representation V of the
same operator satisfy V = X · U · X and U = X · V · X. In practice, as above we
can convert operators between the position and momentum representation with a
two-dimensional type-I discrete sine transform:
V = FourierDST[U, 1];
U = FourierDST[V, 1];

This easy conversion is very useful for the construction of the matrix representations
of Hamiltonian operators, since the kinetic energy is diagonal in the momentum
basis, Eq. (4.7), while the potential energy operator is approximately diagonal in the
position basis, Eq. (4.25).
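The equivalence between the matrix conversion V = X·U·X and a two-dimensional DST-I can be verified the same way; a Python sketch (assuming SciPy's `dstn` as the 2D DST-I, and relying on X being symmetric):

```python
import numpy as np
from scipy.fft import dstn

nmax = 6
n = np.arange(1, nmax + 1)
X = np.sqrt(2.0/(nmax + 1)) * np.sin(np.pi * np.outer(n, n)/(nmax + 1))

rng = np.random.default_rng(0)
U = rng.normal(size=(nmax, nmax))    # operator in the momentum basis
V = dstn(U, type=1, norm='ortho')    # 2D DST-I along both axes
print(np.allclose(V, X @ U @ X))     # same as the matrix conversion V = X.U.X
```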

4.1.3 The Position Operator


The position operator $\hat{x} = \int_{-\infty}^{\infty} \mathrm{d}x\,|x\rangle x\langle x|$ is one of the basic operators that is used
frequently to construct Hamiltonians of moving particles. The exact expressions for
the matrix elements of this operator in the momentum basis are
$$\langle n|\hat{x}|n'\rangle = \frac{2}{a}\int_0^a \mathrm{d}x\,\sin\left(\frac{n\pi x}{a}\right)\,x\,\sin\left(\frac{n'\pi x}{a}\right) = \begin{cases} \frac{a}{2} & \text{if } n = n' \\ -\frac{8ann'}{\pi^2(n^2-n'^2)^2} & \text{if } n - n' \text{ is odd} \\ 0 & \text{otherwise} \end{cases} \tag{4.22}$$

This allows us to construct the exact matrix representations of the operator $\hat{x}$ in
both the momentum (xM) and the position (xP) bases:
xM = SparseArray[{
Band[{1,1}] -> a/2,
{n1_,n2_} /; OddQ[n1-n2] -> -8*a*n1*n2/(\[Pi]ˆ2*(n1ˆ2-n2ˆ2)ˆ2)},
{nmax,nmax}];
xP = FourierDST[xM, 1];
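The near-diagonality of xP can be checked independently by rebuilding Eq. (4.22) and transforming with the X matrix of Eq. (4.20); a Python sketch (NumPy only; the bound in the test comment is a loose illustrative tolerance, not a derived error estimate):

```python
import numpy as np

nmax, a = 50, 1.0
n = np.arange(1, nmax + 1)
N1, N2 = np.meshgrid(n, n, indexing='ij')

# off-diagonal elements of Eq. (4.22); the dummy denominator avoids 0/0 on the diagonal
den = ((N1**2 - N2**2).astype(float))**2
den[den == 0] = 1.0
xM = np.where((N1 - N2) % 2 == 1, -8*a*N1*N2/(np.pi**2*den), 0.0) \
     + np.diag(np.full(nmax, a/2))

X = np.sqrt(2.0/(nmax + 1)) * np.sin(np.pi*np.outer(n, n)/(nmax + 1))
xP = X @ xM @ X                     # transform to the position basis
xgrid = n * a/(nmax + 1)
print(np.max(np.abs(np.diag(xP) - xgrid)))  # small compared to the grid spacing
```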

A simple approximation of the position operator, which will be extremely useful in
what follows, is found by observing that xP is almost a diagonal matrix, with
approximately the grid coordinates $x_1, x_2, \ldots, x_{n_{\max}}$ on the diagonal. This
approximate form can be proved by using the locality of the position basis functions, Eq. (4.16):
$$x_{jj'} = \langle j|\hat{x}|j'\rangle = \langle j|\left[\int_{-\infty}^{\infty} \mathrm{d}x\,|x\rangle x\langle x|\right]|j'\rangle = \int_0^a \mathrm{d}x\,\langle j|x\rangle\, x\,\langle x|j'\rangle = \int_0^a \mathrm{d}x\,\vartheta_j^*(x)\,x\,\vartheta_{j'}(x) \approx x_{j'}\int_0^a \mathrm{d}x\,\vartheta_j^*(x)\vartheta_{j'}(x) = \delta_{jj'}\,x_j. \tag{4.23}$$

The resulting approximate diagonal form of the position operator in the position
basis, found from the approximate completeness relation $\sum_{j=1}^{n_{\max}} |j\rangle\langle j| \approx \mathbb{1}$, is
$$\hat{x} \approx \left[\sum_{j=1}^{n_{\max}} |j\rangle\langle j|\right]\hat{x}\left[\sum_{j'=1}^{n_{\max}} |j'\rangle\langle j'|\right] = \sum_{j,j'=1}^{n_{\max}} |j\rangle\langle j|\hat{x}|j'\rangle\langle j'| \approx \sum_{j,j'=1}^{n_{\max}} |j\rangle\,\delta_{jj'} x_j\,\langle j'| = \sum_{j=1}^{n_{\max}} |j\rangle\,x_j\,\langle j|. \tag{4.24}$$

\[CapitalDelta] = a/(nmax+1); (* the grid spacing *)
xgrid = Range[nmax]*\[CapitalDelta]; (* the computational grid *)
xP = SparseArray[Band[{1,1}] -> xgrid]; (* x operator, position basis *)
xM = FourierDST[xP, 1]; (* x operator, momentum basis *)

4.1.4 The Potential-Energy Operator

If a potential energy function $W(x)$ varies smoothly over length scales of the grid
spacing $\Delta$, then the trick of Sect. 4.1.3 allows us to approximate the matrix elements
of this potential energy in the position basis,
$$V_{jj'} = \langle j|\hat{V}|j'\rangle = \langle j|\left[\int_0^a \mathrm{d}x\,|x\rangle W(x)\langle x|\right]|j'\rangle = \int_0^a \mathrm{d}x\,\vartheta_j^*(x)\,W(x)\,\vartheta_{j'}(x) \approx W(x_{j'})\int_0^a \mathrm{d}x\,\vartheta_j^*(x)\vartheta_{j'}(x) = \delta_{jj'}\,W(x_j), \tag{4.25}$$

where we have used the definitions of Eqs. (4.2) and (4.4). This is a massive
simplification compared to the explicit evaluation of potential integrals for each specific
potential energy function. The potential-energy operator thus becomes approximately
$$\hat{V} \approx \left[\sum_{j=1}^{n_{\max}} |j\rangle\langle j|\right]\hat{V}\left[\sum_{j'=1}^{n_{\max}} |j'\rangle\langle j'|\right] = \sum_{j,j'=1}^{n_{\max}} |j\rangle\langle j|\hat{V}|j'\rangle\langle j'| \approx \sum_{j,j'=1}^{n_{\max}} |j\rangle\,\delta_{jj'} W(x_j)\,\langle j'| = \sum_{j=1}^{n_{\max}} |j\rangle\,W(x_j)\,\langle j|. \tag{4.26}$$

Wgrid = Map[W, xgrid]; (* the potential on the computational grid *)
VP = SparseArray[Band[{1,1}]->Wgrid]; (* potential operator, position basis *)
VM = FourierDST[VP, 1]; (* potential operator, momentum basis *)

4.1.5 The Kinetic-Energy Operator

The representation of the kinetic energy operator can be calculated very accurately
with the description given above. We transform the definition of In[423] to the
finite-resolution position basis with
TP = FourierDST[TM, 1]; (* kinetic operator, position basis *)

For large $n_{\max}$ and small excitation energies the exact kinetic-energy operator can be
replaced by the position-basis form
$$\langle j|\hat{T}|j'\rangle \approx \frac{\hbar^2}{2m\Delta^2} \times \begin{cases} 2 & \text{if } j = j', \\ -1 & \text{if } |j - j'| = 1, \\ 0 & \text{if } |j - j'| \geq 2, \end{cases} \tag{4.27}$$
which corresponds to replacing the second derivative in the kinetic operator by
the finite-differences expression $\psi''(x) \approx -\left[2\psi(x) - \psi(x-\Delta) - \psi(x+\Delta)\right]/\Delta^2$.
While Eq. (4.27) looks simple, it is ill suited for the calculations that will follow
because (i) any matrix exponentials involving T̂ will be difficult to calculate, and
(ii) it is not very accurate (higher-order finite-differences expressions6 are not much
better). Thus we will not be using such approximations in what follows, and prefer
the more useful and more accurate definition through In[423] and In[443].
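The accuracy trade-off of the finite-differences form (4.27) can be quantified by comparing its spectrum with the exact kinetic energies; a Python sketch in natural units a = m = ℏ = 1 (an assumption of this illustration):

```python
import numpy as np

nmax, a = 100, 1.0
d = a/(nmax + 1)                                 # grid spacing
exact = (np.arange(1, nmax + 1)*np.pi)**2/2      # n^2 pi^2 hbar^2/(2 m a^2)

# tridiagonal finite-differences kinetic matrix, Eq. (4.27)
fd = (2*np.eye(nmax) - np.eye(nmax, k=1) - np.eye(nmax, k=-1))/(2*d**2)
fd_eigs = np.linalg.eigvalsh(fd)                 # ascending eigenvalues

rel = np.abs(fd_eigs - exact)/exact
print(rel[0], rel[-1])   # accurate at low energies, poor at the top of the spectrum
```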

4.1.6 The Momentum Operator

The discussion has so far been conducted in terms of the kinetic-energy operator $\hat{T} = \hat{p}^2/(2m)$ without explicitly talking about the momentum operator $\hat{p} = -\mathrm{i}\hbar\frac{\mathrm{d}}{\mathrm{d}x}$. This
was done because the matrix representation of the momentum operator is problematic.
A direct calculation of the matrix elements in the momentum basis yields
$$\langle n|\hat{p}|n'\rangle = -\mathrm{i}\hbar\int_0^a \mathrm{d}x\,\phi_n(x)\frac{\mathrm{d}\phi_{n'}(x)}{\mathrm{d}x} = \frac{\hbar}{a} \times \begin{cases} \frac{4\mathrm{i}nn'}{n'^2-n^2} & \text{if } n - n' \text{ is odd,} \\ 0 & \text{if } n - n' \text{ is even.} \end{cases} \tag{4.28}$$

In Mathematica, this is implemented with the definition


pM = SparseArray[{n1_,n2_}/;OddQ[n1-n2]->(4*I*\[HBar]*n1*n2)/(a*(n2ˆ2-n1ˆ2)),
{nmax,nmax}]; (* momentum operator, momentum basis *)
pP = FourierDST[pM, 1]; (* momentum operator, position basis *)

This result is, however, unsatisfactory, since (i) it generates a matrix that is not
sparse, and (ii) for a finite basis size $n \leq n_{\max} < \infty$ it does not exactly generate the

6 See https://en.wikipedia.org/wiki/Finite_difference_coefficient for explicit forms of higher-order


finite-differences expressions that can be used to approximate derivatives.

kinetic-energy operator $\hat{T} = \hat{p}^2/(2m)$ (see Q4.3). We will avoid using the momentum
operator whenever possible, and use the kinetic-energy operator $\hat{T}$ instead (see
above). An example of the direct use of $\hat{p}$ is given in Sect. 5.2.
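The second shortcoming just listed (the truncated $\hat{p}^2/(2m)$ does not reproduce $\hat{T}$, the subject of Q4.3) can be checked directly; a Python sketch of Eq. (4.28) in natural units a = m = ℏ = 1:

```python
import numpy as np

nmax, a, hbar, m = 20, 1.0, 1.0, 1.0
n = np.arange(1, nmax + 1)
N1, N2 = np.meshgrid(n, n, indexing='ij')

den = (N2**2 - N1**2).astype(float)
den[den == 0] = 1.0                  # dummy value; the diagonal is zero anyway
pM = np.where((N1 - N2) % 2 == 1, 4j*hbar*N1*N2/(a*den), 0.0)

TM = np.diag((n*np.pi*hbar)**2/(2*m*a**2))
print(np.allclose(pM, pM.conj().T))        # p is Hermitian
print(np.allclose(pM @ pM/(2*m), TM))      # False: truncation breaks p^2/(2m) = T
```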
For large $n_{\max}$ and small excitation energies the exact momentum operator can be
replaced by the position-basis form
$$\langle j|\hat{p}|j'\rangle \approx \frac{\mathrm{i}\hbar}{2\Delta} \times \begin{cases} -1 & \text{if } j - j' = -1, \\ +1 & \text{if } j - j' = +1, \\ 0 & \text{if } |j - j'| \neq 1, \end{cases} \tag{4.29}$$
which corresponds to replacing the first derivative in the momentum operator by
the finite-differences expression $\psi'(x) \approx \left[\psi(x+\Delta) - \psi(x-\Delta)\right]/(2\Delta)$. While Eq.
(4.29) looks simple, it is ill suited for the calculations that will follow because any
matrix exponentials involving p̂ will still be difficult to calculate; further, the same
finite-differences caveats as in Sect. 4.1.5 apply. Thus we will not be using such
approximations in what follows, and prefer the more accurate definition through
In[444].
Exercises
Q4.3 Using $n_{\max} = 100$, calculate the matrix representations of the kinetic-energy
operator $\hat{T}$ and the momentum operator $\hat{p}$ in the momentum basis. Compare
the spectra of $\hat{T}$ and $\hat{p}^2/(2m)$ and notice the glaring differences, even at low
energies. Hint: use natural units such that a=m=\[HBar]=1 for simplicity.
Q4.4 Using $n_{\max} = 20$, calculate the matrix representations of the position operator
$\hat{x}$ and the momentum operator $\hat{p}$ in the momentum basis. To what extent is the
commutation relation $[\hat{x},\hat{p}] = \mathrm{i}\hbar$ satisfied? Hint: use natural units such that
a=m=\[HBar]=1 for simplicity.

4.1.7 Example: Gravity Well [Supplementary Material 1]

As an example of a single particle moving in one spatial dimension, we study the
gravity well. This problem can be solved analytically, which helps us to determine
the accuracy of our numerical methods.
We assume that a particle of mass m is free to move in the vertical direction x,
where x = 0 is the earth’s surface and x > 0 is up; the particle is forbidden from
travelling below the earth’s surface (i.e., it is restricted to x > 0 at all times). There
is no dissipation or friction. The Hamiltonian of the particle’s motion is

$$\hat{H} = -\frac{\hbar^2}{2m}\frac{\mathrm{d}^2}{\mathrm{d}x^2} + mg\hat{x}. \tag{4.30}$$

The wavefunction ψ(x) of this particle must satisfy the boundary condition ψ(x) = 0
∀x ≤ 0.
In what follows we use the length unit $L = \left(\frac{\hbar^2}{m^2 g}\right)^{1/3}$, which is proportional to
the size of the ground state of Eq. (4.30), as well as the mass unit $M = m$. Following
Sect. 4.1.1 we then define the time unit $T = ML^2/\hbar = \left(\frac{\hbar}{mg^2}\right)^{1/3}$ and the
energy unit $E = \hbar/T = (mg^2\hbar^2)^{1/3}$. These natural units lead to simple expressions
for the parameters: m $= \frac{m}{M} = 1$, \[HBar] $= \frac{\hbar}{ET} = 1$, and g $= \frac{g}{L/T^2} = 1$. As a result, we
can set up the Hamiltonian (4.30) without fixing the particle's mass and gravitational
acceleration explicitly:
m = \[HBar] = g = 1;

Other systems of units can be used in the same way by using the tools of Sect. 4.1.1:
first define a consistent set of units, and then express the physical quantities in terms
of these units. The gravitational acceleration in particular would be set with
g = Quantity["StandardAccelerationOfGravity"]/(LengthUnit/TimeUnitˆ2);

Analytic Quantum Energy Eigenstates


The exact normalized eigenstates and associated energy eigenvalues of Eq. (4.30)
are
$$\psi_k(x) = \begin{cases} \left(\frac{2m^2 g}{\hbar^2}\right)^{1/6} \dfrac{\operatorname{Ai}\!\left(\alpha_k + x\cdot\left(\frac{2m^2 g}{\hbar^2}\right)^{1/3}\right)}{\operatorname{Ai}'(\alpha_k)} & \text{if } x > 0 \\ 0 & \text{if } x \leq 0 \end{cases} \qquad E_k = -\alpha_k\cdot\left(\frac{mg^2\hbar^2}{2}\right)^{1/3} \tag{4.31}$$

for $k = 1, 2, 3, \ldots$, where $\operatorname{Ai}(z)$ = AiryAi[z] is the Airy function, $\operatorname{Ai}'(z)$ its first
derivative, and $\alpha_k$ = AiryAiZero[k] its zeros: $\alpha_1 \approx -2.33811$, $\alpha_2 \approx -4.08795$,
$\alpha_3 \approx -5.52056$, etc.
For comparison to numerical calculations below, we define the exact eigenstates
and eigen-energies with
\[Psi][k_,x_] = (2*mˆ2*g/\[HBar]ˆ2)ˆ(1/6)*AiryAi[AiryAiZero[k]+x*(2*mˆ2*g/\[HBar]ˆ2)ˆ(1/3)]/
AiryAi'[AiryAiZero[k]];
\[Epsilon][k_] = -AiryAiZero[k]*(m*gˆ2*\[HBar]ˆ2/2)ˆ(1/3);

The ground-state energy is, in our chosen energy unit E,


N[\[Epsilon][1]]
1.85576
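The ground-state energy can be cross-checked against SciPy's Airy-function routines; a short Python sketch in the same natural units (m = g = ℏ = 1):

```python
from scipy.special import ai_zeros

alpha = ai_zeros(3)[0]         # first three zeros of Ai(z)
E1 = -alpha[0]*(1/2)**(1/3)    # E_k = -alpha_k (m g^2 hbar^2/2)^(1/3), m = g = hbar = 1
print(round(float(E1), 5))     # 1.85576
```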

The lowest three energy eigenstates look as follows:

Numerical Solution (I): Momentum Basis


Our first numerical attempt to find the ground state of the gravity well relies on the
momentum basis of states $|n\rangle$. For this approach we treat the Hamiltonian of the
problem in the same way as discussed in Chaps. 2 and 3: we express each term of
the Hamiltonian as a matrix in a fixed basis set.
Since our calculation will take place in a finite box x ∈ [0, a], we must choose the
box size a large enough to contain most of the ground-state probability if we want
to calculate it accurately. For the present calculation we choose a = 10L, which is
sufficient (see figure above) since we picked the length unit L similar to the
ground-state size:
a = 10;

We only use a small number of basis functions here, to illustrate the method:
nmax = 12;

The matrix elements of the kinetic energy are set up following In[423]
TM = SparseArray[Band[{1,1}] -> Range[nmax]ˆ2*\[Pi]ˆ2*\[HBar]ˆ2/(2*m*aˆ2)];

The matrix elements of the potential energy of Eq. (4.30) are, from In[434],
xM = SparseArray[{
Band[{1,1}] -> a/2,
{n1_,n2_} /; OddQ[n1-n2] -> -8*a*n1*n2/(\[Pi]ˆ2*(n1ˆ2-n2ˆ2)ˆ2)},
{nmax,nmax}];
VM = m*g*xM;

The full Hamiltonian in the momentum representation is therefore


HM = TM + VM;

and the ground-state energy and wavefunction coefficients in the momentum
representation
gsM = -Eigensystem[-N[HM], 1,
Method -> {"Arnoldi", "Criteria" -> "RealPart", MaxIterations -> 10ˆ6}];

The ground state energy is


gsM[[1, 1]]
1.85608

very close to the exact result of Out[450].
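As an independent check, the same 12×12 momentum-basis Hamiltonian can be diagonalized in Python; the sketch below rebuilds TM and xM from Eqs. (4.7) and (4.22) and should reproduce the value above (NumPy only, the natural units of this section assumed):

```python
import numpy as np

m = g = hbar = 1.0
a, nmax = 10.0, 12
n = np.arange(1, nmax + 1)

TM = np.diag(n**2*np.pi**2*hbar**2/(2*m*a**2))          # kinetic energy, Eq. (4.7)

N1, N2 = np.meshgrid(n, n, indexing='ij')
den = ((N1**2 - N2**2).astype(float))**2
den[den == 0] = 1.0
xM = np.where((N1 - N2) % 2 == 1, -8*a*N1*N2/(np.pi**2*den), 0.0) \
     + np.diag(np.full(nmax, a/2))                      # position operator, Eq. (4.22)

E0 = np.linalg.eigvalsh(TM + m*g*xM)[0]
print(round(E0, 5))   # close to 1.85608, slightly above the exact 1.85576 (variational)
```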


The ground state wavefunction is defined as a sum over basis functions,
\[Phi][n_, x_] = Sqrt[2/a]*Sin[n*\[Pi]*x/a];
\[Psi]0[x_] = gsM[[2,1]] . Table[\[Phi][n, x], {n, nmax}];

We can calculate the overlap of this numerical ground state with the exact one given
in In[448], $|\langle\psi_0|\psi_1\rangle|^2$:
Abs[NIntegrate[\[Psi]0[x]*\[Psi][1,x], {x, 0, a}]]ˆ2
0.999965

Even for $n_{\max} = 12$ this overlap is already very close to unity in magnitude. It quickly
approaches unity as $n_{\max}$ increases, with the mismatch decreasing as $n_{\max}^{-9}$ for this
specific system. The numerically calculated ground-state energy approaches the exact
result from above, with the mismatch decreasing as $n_{\max}^{-7}$ for this specific system.
These convergence properties, discussed in Sect. 2.1.1, are very general and allow
us to extrapolate many quantities to $n_{\max} \to \infty$ by polynomial fits of numerically
calculated quantities as functions of $n_{\max}$.
Numerical Solution (II): Mixed Basis
The numerical method outlined above only works because we have an analytic
expression for the matrix elements of the potential operator $\hat{V} = mg\hat{x}$, given in Eq. (4.22).
For a more general potential, the method of Eq. (4.26) is more useful, albeit less
accurate. Here we re-do the numerical ground-state calculation in the position basis.
The computation is set up in the same way as above,
a = 10;
nmax = 12;
\[CapitalDelta] = a/(nmax+1); (* grid spacing *)
xgrid = Range[nmax]*\[CapitalDelta]; (* the computational grid *)

The matrix elements of the kinetic-energy operator in the position basis are calculated
with a discrete sine transform,
TM = SparseArray[Band[{1,1}] -> Range[nmax]ˆ2*\[Pi]ˆ2*\[HBar]ˆ2/(2*m*aˆ2)];
TP = FourierDST[TM, 1];

The matrix elements of the potential energy of Eq. (4.30) are, from In[440],
W[x_] = m*g*x; (* the potential function *)
Wgrid = Map[W, xgrid];(* the potential on the computational grid *)
VP = SparseArray[Band[{1,1}] -> Wgrid];

The full Hamiltonian in the position representation is therefore


HP = TP + VP;

and the ground-state energy and wavefunction coefficients in the position
representation
gsP = -Eigensystem[-N[HP], 1,
Method -> {"Arnoldi", "Criteria" -> "RealPart", MaxIterations -> 10ˆ6}];

The ground state energy is now less close to the exact value than before, due to the
additional approximation of Eq. (4.26):
gsP[[1, 1]]
1.86372

We therefore need a larger $n_{\max}$ to achieve the same accuracy as in the first
numerical calculation. The great advantage of the present calculation is, however,
that it is easily generalized to arbitrary potential-energy functions in In[468].
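The mixed-basis calculation can be cross-checked the same way; in this Python sketch the DST is carried out with the explicit X matrix of Eq. (4.20), so only NumPy is needed:

```python
import numpy as np

m = g = hbar = 1.0
a, nmax = 10.0, 12
d = a/(nmax + 1)
n = np.arange(1, nmax + 1)
xgrid = n*d

X = np.sqrt(2.0/(nmax + 1))*np.sin(np.pi*np.outer(n, n)/(nmax + 1))
TM = np.diag(n**2*np.pi**2*hbar**2/(2*m*a**2))
TP = X @ TM @ X                      # kinetic energy in the position basis
VP = np.diag(m*g*xgrid)              # approximate potential, Eq. (4.26)

E0 = np.linalg.eigvalsh(TP + VP)[0]
print(round(E0, 5))   # close to 1.86372, less accurate than the momentum-basis result
```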
As shown in In[428], the wavefunction can be plotted approximately with
\[Gamma] = Join[{{0,0}}, Transpose[{xgrid, gsP[[2,1]]/Sqrt[\[CapitalDelta]]}], {{a,0}}];
ListLinePlot[\[Gamma]]

where we have “manually” added the known boundary values γ(0) = γ(a) = 0 to
the list of numerically calculated wave-function values.

You can see that even with $n_{\max} = 12$ grid points this ground-state wavefunction
(blue lines interpolating between blue calculated points) looks remarkably close to
the exact one (orange line, see plot on page 103).
If we need to go beyond linear interpolation, the precise wavefunction is calculated
by converting to the momentum representation as in In[431] and multiplying with
the basis functions as in In[460]:
\[Phi][n_, x_] = Sqrt[2/a]*Sin[n*\[Pi]*x/a];
\[Psi]0[x_] = FourierDST[gsP[[2,1]], 1] . Table[\[Phi][n, x], {n, nmax}];

Exercises
Q4.5 What is the probability to find the particle below x = 1 (i.e., below x = L)
when it is in the ground state of the gravity well, Eq. (4.30)? Calculate
analytically, with numerical method I, and with numerical method II.
Q4.6 Calculate the mean height $\langle\hat{x}\rangle$ in the ground state of the gravity well. How large
is this quantity for a neutron in earth's gravitational field? Hint: see Quantum
states of neutrons in the Earth's gravitational field by Valery V. Nesvizhevsky
et al., Nature 415, pages 297–299 (2002).
Q4.7 Calculate the energy levels and energy eigenstates of a particle in a harmonic
potential, described by the Hamiltonian

$$\hat{H} = -\frac{\hbar^2}{2m}\frac{\mathrm{d}^2}{\mathrm{d}x^2} + \frac{1}{2}m\omega^2\hat{x}^2. \tag{4.32}$$
Do the calculated energy levels match the analytically known values? Hint: use
the system of units given in In[409]ff with a length unit of $L = \sqrt{\hbar/(m\omega)}$, a
mass unit $M = m$, and an energy unit $E = \hbar\omega$ (i.e., the natural units). Choose
the calculation box with size a = 10L and shift the minimum of the harmonic
potential to the center of the calculation box.

4.1.8 The Wigner Quasi-probability Distribution [Supplementary Material 2]

The Wigner quasi-probability distribution7 of a wavefunction $\psi(x)$ is a real-valued
distribution in phase space defined as
$$W(x, k) = \frac{1}{\pi}\int_{-\infty}^{\infty} \mathrm{d}y\,\psi(x-y)\,\psi^*(x+y)\,e^{2\mathrm{i}ky}, \tag{4.33}$$

where $k = p/\hbar$ is the wavenumber, closely related to the momentum but in units of
inverse length. $W$ often makes it easier to interpret wavefunctions than simply plotting
ψ(x), especially when ψ(x) is complex-valued. Time-dependent wavefunctions are
often plotted as Wigner distribution movies, which makes it easier to track a particle
as it moves through phase space. In the classical limit, the time-dependent Wigner
distribution becomes the classical phase-space density that satisfies the Liouville
equation.
For a quick and easy evaluation of the Wigner distribution, we approximate
the wavefunction as piecewise constant, using Eq. (4.13): $\psi(x) \approx \psi(x_{[x/\Delta]}) = v_{[x/\Delta]}/\sqrt{\Delta}$,
where we have used the calculation grid spacing $\Delta = a/(n_{\max}+1)$
and the nearest-integer rounding function $[z] = \operatorname{round}(z)$. This approximation will
be valid as long as $|k| \ll \pi/\Delta$. Inserting it into Eq. (4.33), and assuming that
$x = x_j = j\Delta$ is a grid point (i.e., we will only sample the Wigner function on the
spatial grid of the calculation), we can split the integral over $y$ into integrals over
segments of length $\Delta$,
$$W(x_j, k) = \frac{1}{\pi}\sum_{m=-\infty}^{\infty} \int_{-\Delta/2}^{\Delta/2} \mathrm{d}y\,\psi[x_j - (m\Delta + y)]\,\psi^*[x_j + (m\Delta + y)]\,e^{2\mathrm{i}k(m\Delta+y)}$$
$$\approx \frac{1}{\pi}\sum_{m=-\infty}^{\infty} \underbrace{\psi(x_{j-m})}_{v_{j-m}/\sqrt{\Delta}}\,\underbrace{\psi^*(x_{j+m})}_{v_{j+m}^*/\sqrt{\Delta}}\,e^{2\mathrm{i}km\Delta}\underbrace{\int_{-\Delta/2}^{\Delta/2} \mathrm{d}y\,e^{2\mathrm{i}ky}}_{\sin(k\Delta)/k\,=\,\Delta\operatorname{sinc}(k\Delta)}$$
$$= \frac{\operatorname{sinc}(k\Delta)}{\pi}\sum_{m=-\min(j-1,\,n_{\max}-j)}^{\min(j-1,\,n_{\max}-j)} v_{j-m}\,v_{j+m}^*\,e^{2\mathrm{i}km\Delta}, \tag{4.34}$$

where $\operatorname{sinc}(z) = \sin(z)/z$. The following Mathematica code converts a coefficient
vector $v$ of length $n_{\max}$ into a function of the dimensionless momentum $\kappa = a\cdot k$
that calculates $W(x_j, k)$ for every grid point $j = 0, 1, \ldots, n_{\max}+1$ (including the
boundary grid points that are usually left out of our calculations):
WignerDistribution[v_?VectorQ] := With[{nmax = Length[v]},
Function[\[Kappa], Evaluate[Sinc[\[Kappa]/(nmax+1)]/\[Pi]*Table[
Sum[v[[j-m]]*Conjugate[v[[j+m]]]*Exp[2*I*\[Kappa]*m/(nmax+1)],
{m,-Min[j-1,nmax-j],Min[j-1,nmax-j]}]//Re//ComplexExpand, {j,0,nmax+1}]]]]
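A direct transcription of Eq. (4.34) into Python makes a useful sanity check of this code; in the sketch below (function name hypothetical), a wavefunction concentrated on a single grid point yields W = 1/π at that point and zero elsewhere at k = 0:

```python
import numpy as np

def wigner_row(v, kappa):
    # W(x_j, k) for j = 0..nmax+1 at dimensionless momentum kappa = a*k, Eq. (4.34)
    nmax = len(v)
    vp = np.concatenate(([0.0], np.asarray(v, complex), [0.0]))  # psi(0)=psi(a)=0
    W = np.empty(nmax + 2)
    for j in range(nmax + 2):
        m = np.arange(-min(j, nmax + 1 - j), min(j, nmax + 1 - j) + 1)
        W[j] = np.sum(vp[j - m]*np.conj(vp[j + m])
                      * np.exp(2j*kappa*m/(nmax + 1))).real
    return np.sinc(kappa/(np.pi*(nmax + 1)))/np.pi * W   # np.sinc(x)=sin(pi x)/(pi x)

v = np.zeros(9); v[4] = 1.0          # wavefunction localized on grid point j = 5
W0 = wigner_row(v, 0.0)
print(np.isclose(W0[5], 1/np.pi), np.allclose(np.delete(W0, 5), 0.0))
```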

7 See https://en.wikipedia.org/wiki/Wigner_quasiprobability_distribution.

Notice that this function WignerDistribution returns an anonymous function
(see Sect. 1.5.3) of one parameter, which in turn returns a list of values. As an
example of its use, we make a 2D plot of the Wigner distribution on the interval
$x \in [x_{\min}, x_{\max}]$8:
WignerDistributionPlot[Y_,
{xmin_?NumericQ, xmax_?NumericQ} /; xmax > xmin] :=
Module[{nmax, qmax, w, W},
(* number of grid points *)
nmax = Length[Y];
(* calculate the Wigner distribution *)
w = WignerDistribution[Y];
(* evaluate it on the natural dimensionless momentum grid *)
qmax = Floor[nmax/2];
W = Table[w[q*\[Pi]], {q, -qmax, qmax}];
(* make a plot *)
ArrayPlot[W, FrameTicks->Automatic, AspectRatio -> 1/GoldenRatio,
DataRange->{{xmin,xmax},qmax*\[Pi]/(xmax-xmin)*{-1,1}},
ColorFunctionScaling -> False,
ColorFunction -> (Blend[{Blue, White, Red}, (\[Pi]*#+1)/2]&)]]

Notice that we evaluate the Wigner distribution only up to momenta $\pm n_{\max}\pi/(2a)$,
which is the Nyquist limit in this finite-resolution system.9 The color scheme is
chosen such that the range of Wigner-distribution values $[-\frac{1}{\pi}, +\frac{1}{\pi}]$ is mapped onto
colors blended from blue, white, and red: negative Wigner values are shown
in shades of blue while positive values are shown in shades of red.
As an example, we plot the Wigner distribution of the numerical ground-state
wavefunction shown on page 106: on the left, the exact distribution from Eq. (4.33); on
the right, the grid evaluation of In[478] and In[479] (calculated with n max = 40)
with
WignerDistributionPlot[gsP[[2, 1]], {0, a}]

Extension to Density Operators


If the state of the system is not pure, but given as a density matrix $\rho(x, x') = \langle x|\hat{\rho}|x'\rangle$
instead of as a wavefunction $\psi(x) = \langle x|\psi\rangle$, then we do not have the option of plotting
the wavefunction and we can only resort to the Wigner distribution for a graphical
representation.
Noticing that Eq. (4.33) contains the term

8 This procedure works for situations other than the usual xmin = 0 and xmax = a.
9 See https://en.wikipedia.org/wiki/Nyquist_frequency.

$$\psi(x-y)\psi^*(x+y) = \langle x-y|\psi\rangle\langle\psi|x+y\rangle = \langle x-y|\hat{\rho}|x+y\rangle = \rho(x-y, x+y) \tag{4.35}$$
in terms of the pure-state density operator $\hat{\rho} = |\psi\rangle\langle\psi|$, the definition of the Wigner
distribution is generalized to
$$W(x, k) = \frac{1}{\pi}\int_{-\infty}^{\infty} \mathrm{d}y\,\rho(x-y, x+y)\,e^{2\mathrm{i}ky}. \tag{4.36}$$

We can make the same approximations as in Eq. (4.34) to calculate the Wigner
function on a spatial grid point x = x j [see Eq. (4.15)]:

$$W(x_j, k) = \frac{1}{\pi}\sum_{m=-\infty}^{\infty} \int_{-\Delta/2}^{\Delta/2} \mathrm{d}y\,\rho[x_j - (m\Delta + y),\, x_j + (m\Delta + y)]\,e^{2\mathrm{i}k(m\Delta+y)}$$
$$\approx \frac{1}{\pi}\sum_{m=-\infty}^{\infty} \rho(x_{j-m}, x_{j+m})\int_{-\Delta/2}^{\Delta/2} \mathrm{d}y\,e^{2\mathrm{i}k(m\Delta+y)}$$
$$= \frac{\operatorname{sinc}(k\Delta)}{\pi}\sum_{m=-\min(j-1,\,n_{\max}-j)}^{\min(j-1,\,n_{\max}-j)} R_{j-m,\,j+m}\,e^{2\mathrm{i}km\Delta}. \tag{4.37}$$

In analogy to In[478] we define


WignerDistribution[R_ /; MatrixQ[R, NumericQ] &&
Length[R] == Length[Transpose[R]]] :=
With[{n = Length[R]},
Function[k, Evaluate[Sinc[k/(n+1)]/\[Pi]*Table[
Sum[R[[j-m,j+m]]*Exp[2*I*k*m/(n+1)],
{m,-Min[j-1,n-j],Min[j-1,n-j]}]//Re//ComplexExpand, {j,0,n+1}]]]]

For a pure state, the density matrix has the coefficients $R_{j,j'} = v_j v_{j'}^*$, and the
definitions of In[478] and In[481] thus give exactly the same result if we use

R = KroneckerProduct[v, Conjugate[v]]

In addition, the 2D plotting function of In[479] also works when called with a
density matrix as first parameter.
Exercises
Q4.8 Plot the Wigner distribution of the first excited state of the gravity well. What
do you notice?

4.1.9 1D Dynamics in the Square Well [Supplementary Material 3]

Assume again a single particle of mass m moving in a one-dimensional potential, with


the time-independent Hamiltonian given in Eq. (4.1). The motion is again restricted

to $x \in [0, a]$. We want to study the time-dependent wavefunction $\psi(x, t) = \langle x|\psi(t)\rangle$
given in Eq. (2.34),
$$|\psi(t)\rangle = \exp\left[-\frac{\mathrm{i}(t - t_0)}{\hbar}\hat{H}\right]|\psi(t_0)\rangle. \tag{4.38}$$

The simplest way of computing this propagation is to express the wavefunction and
the Hamiltonian in a particular basis and use a matrix exponentiation to find the time
dependence of the expansion coefficients of the wavefunction. For example, if we use
the finite-resolution position basis, we have seen on page 105 how to find the matrix
representation of the Hamiltonian, HP. For a given initial wavefunction represented
by a position-basis coefficient vector v0 we can then define
v[\[CapitalDelta]t_?NumericQ] := MatrixExp[-I*HP*\[CapitalDelta]t/\[HBar]].v0

as the propagation over a time interval $\Delta t = t - t_0$. If you try this, you will see that
calculating $|\psi(t)\rangle$ in this way is not very efficient, because the matrix exponentiation
is a numerically difficult operation.
A much more efficient method can be found by using the Trotter expansion
$$e^{\lambda(X+Y)} = e^{\frac{\lambda}{2}X} e^{\lambda Y} e^{\frac{\lambda}{2}X} \times e^{\frac{\lambda^3}{24}[X,[X,Y]] + \frac{\lambda^3}{12}[Y,[X,Y]]} \times e^{-\frac{\lambda^4}{48}[X,[X,[X,Y]]] - \frac{\lambda^4}{16}[X,[Y,[X,Y]]] - \frac{\lambda^4}{24}[Y,[Y,[X,Y]]]} \cdots \approx e^{\frac{\lambda}{2}X} e^{\lambda Y} e^{\frac{\lambda}{2}X}, \tag{4.39}$$
where the approximation is valid for small $\lambda$ since the neglected terms are of third
and higher orders in $\lambda$ (notice that there is no second-order term in $\lambda$!). Setting
$\lambda = -\frac{\mathrm{i}(t-t_0)}{M\hbar}$ for some large integer $M$, as well as $X = \hat{V}$ and $Y = \hat{T}$, we find
$$|\psi(t)\rangle = e^{M\lambda\hat{H}}|\psi(t_0)\rangle = \left[e^{\lambda\hat{H}}\right]^M|\psi(t_0)\rangle = \left[e^{\lambda(\hat{T}+\hat{V})}\right]^M|\psi(t_0)\rangle \stackrel{\text{Eq. (4.39)}}{=} \lim_{M\to\infty}\left[e^{\frac{\lambda}{2}\hat{V}}\,e^{\lambda\hat{T}}\,e^{\frac{\lambda}{2}\hat{V}}\right]^M|\psi(t_0)\rangle. \tag{4.40}$$
This can be evaluated very efficiently. We express the potential Hamiltonian in the
finite-resolution position basis, Eq. (4.26), the kinetic Hamiltonian in the momentum
basis, Eq. (4.9), and the time-dependent wavefunction in both bases of Eq. (4.17):
$$|\psi(t)\rangle = \sum_{n=1}^{n_{\max}} u_n(t)|n\rangle = \sum_{j=1}^{n_{\max}} v_j(t)|j\rangle \tag{4.41a}$$
$$\hat{V} \approx \sum_{j=1}^{n_{\max}} W(x_j)\,|j\rangle\langle j| \tag{4.41b}$$
$$\hat{T} \approx \sum_{n=1}^{n_{\max}} \frac{n^2\pi^2\hbar^2}{2ma^2}\,|n\rangle\langle n| \tag{4.41c}$$

The expansion coefficients of the wavefunction are related by a type-I discrete sine
transform, see Eq. (4.19), Eq. (4.21), In[430], and In[431].
The great advantage of the diagonal matrices of Eq. (4.41b) and Eq. (4.41c) is
that algebra with diagonal matrices is as simple as algebra with scalars, but applied
to the diagonal elements one-by-one. In particular, for any diagonal matrix
$D = \sum_j d_j |j\rangle\langle j|$ the integer matrix powers are $D^k = \sum_j d_j^k\,|j\rangle\langle j|$, and matrix
exponentials are calculated by exponentiating each diagonal element separately:
$\exp(D) = \sum_{k=0}^{\infty} D^k/k! = \sum_{k=0}^{\infty}\bigl(\sum_j d_j |j\rangle\langle j|\bigr)^k/k! = \sum_j\bigl(\sum_{k=0}^{\infty} d_j^k/k!\bigr)|j\rangle\langle j| = \sum_j \exp(d_j)\,|j\rangle\langle j|$. As a result,
$$e^{\frac{\lambda}{2}\hat{V}} = \sum_{j=1}^{n_{\max}} e^{\frac{\lambda}{2}W(x_j)}\,|j\rangle\langle j|, \tag{4.42}$$

and the action of the potential Hamiltonian thus becomes straightforward:
$$e^{\frac{\lambda}{2}\hat{V}}|\psi(t)\rangle = \left[\sum_{j=1}^{n_{\max}} e^{\frac{\lambda}{2}W(x_j)}|j\rangle\langle j|\right]\left[\sum_{j'=1}^{n_{\max}} v_{j'}(t)|j'\rangle\right] = \sum_{j,j'=1}^{n_{\max}} e^{\frac{\lambda}{2}W(x_j)}\,v_{j'}(t)\,|j\rangle\underbrace{\langle j|j'\rangle}_{\delta_{jj'}} = \sum_{j=1}^{n_{\max}} \underbrace{e^{\frac{\lambda}{2}W(x_j)}\,v_j(t)}_{v_j'}\,|j\rangle, \tag{4.43}$$
which is an element-by-element multiplication of the coefficients of the wavefunction
with the exponentials of the potential; no matrix operations are required. The
expansion coefficients (position basis) after propagation with the potential Hamiltonian for
a “time” step $\lambda/2$ are therefore
$$v_j' = e^{\frac{\lambda}{2}W(x_j)}\,v_j. \tag{4.44}$$
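The element-by-element exponentiation rule used here can be confirmed against a general-purpose matrix exponential; a two-line Python check using SciPy's `expm`:

```python
import numpy as np
from scipy.linalg import expm

d = np.array([0.3, -1.2, 2.5])
# the matrix exponential of a diagonal matrix is the diagonal matrix of exponentials
print(np.allclose(expm(np.diag(d)), np.diag(np.exp(d))))  # True
```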

The action of the kinetic Hamiltonian in the momentum representation is found in
exactly the same way:
$$e^{\lambda\hat{T}}|\psi(t)\rangle = \left[\sum_{n=1}^{n_{\max}} e^{\lambda\frac{n^2\pi^2\hbar^2}{2ma^2}}|n\rangle\langle n|\right]\left[\sum_{n'=1}^{n_{\max}} u_{n'}(t)|n'\rangle\right] = \sum_{n=1}^{n_{\max}} \underbrace{e^{\lambda\frac{n^2\pi^2\hbar^2}{2ma^2}}\,u_n(t)}_{u_n'}\,|n\rangle. \tag{4.45}$$
The expansion coefficients (momentum basis) after propagation with the kinetic
Hamiltonian for a “time” step $\lambda$ are therefore
$$u_n' = e^{\lambda\frac{n^2\pi^2\hbar^2}{2ma^2}}\,u_n. \tag{4.46}$$

We know that a type-I discrete sine transform brings the wavefunction from the
finite-resolution position basis to the momentum basis and vice-versa. The propagation
under the kinetic Hamiltonian thus consists of
1. a type-I discrete sine transform to calculate the coefficients $v_j' \to u_n$,
2. an element-by-element multiplication, Eq. (4.46), to find the coefficients $u_n \to u_n'$,
3. and a second type-I discrete sine transform to calculate the coefficients $u_n' \to v_j''$.

Here we assemble all these pieces into a program that propagates a state $|\psi(t_0)\rangle$,
which is given as a coefficient vector v in the finite-resolution position basis, forward
in time to $t = t_0 + \Delta t$. First, for reference, a procedure for the exact propagation by
matrix exponentiation and matrix–vector multiplication, as in In[483]:
VP = SparseArray[Band[{1,1}]->Wgrid];
TM = SparseArray[Band[{1,1}]->Range[nmax]ˆ2*\[Pi]ˆ2*\[HBar]ˆ2/(2*m*aˆ2)];
TP = FourierDST[TM, 1];
HP = TP + VP;
propExact[\[CapitalDelta]t_?NumericQ, v0_ /; VectorQ[v0, NumericQ]] :=
MatrixExp[-I*HP*N[\[CapitalDelta]t/\[HBar]]].v0

Next, an iterative procedure that propagates by M small steps via the Trotter
approximation, Eq. (4.39):
propApprox[\[CapitalDelta]t_?NumericQ, M_Integer /; M >= 1,
v0_ /; VectorQ[v0, NumericQ]] :=
Module[{\[Lambda], Ke, Pe2, propKin, propPot2, prop},
(* compute the \[Lambda] constant *)
\[Lambda] = -I*N[\[CapitalDelta]t/(M*\[HBar])];
(* compute the diagonal elements of exp[\[Lambda]*T] *)
Ke = Exp[\[Lambda]*Range[nmax]ˆ2*\[Pi]ˆ2*\[HBar]ˆ2/(2*m*aˆ2)];
(* propagate by a full time-step with T *)
propKin[v_] := FourierDST[Ke*FourierDST[v, 1], 1];
(* compute the diagonal elements of exp[\[Lambda]*V/2] *)
Pe2 = Exp[\[Lambda]/2*Wgrid];
(* propagate by a half time-step with V *)
propPot2[v_] := Pe2*v;
(* propagate by a full time-step by H=T+V *)
(* using the Trotter approximation *)
prop[v_] := propPot2[propKin[propPot2[v]]];
(* step-by-step propagation *)
Nest[prop, v0, M]]

Notice that there are no basis functions, integrals, etc. involved in this calculation;
everything is done in terms of the values of the wavefunction on the grid x1 . . . xn max .
This efficient method is called split-step propagation.
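The split-step scheme can be cross-validated against exact matrix exponentiation; in the Python sketch below (assumptions: natural units m = g = ℏ = 1, the gravity-well potential, SciPy's `dst` standing in for `FourierDST` and `expm` for `MatrixExp`), the two propagators agree to within the small Trotter error:

```python
import numpy as np
from scipy.fft import dst
from scipy.linalg import expm

m = g = hbar = 1.0
a, nmax = 10.0, 32
n = np.arange(1, nmax + 1)
xgrid = n*a/(nmax + 1)
Wgrid = m*g*xgrid                                # gravity-well potential on the grid
Tdiag = n**2*np.pi**2*hbar**2/(2*m*a**2)         # kinetic energies, Eq. (4.41c)

def dst1(v):                                     # orthonormal DST-I of a complex vector
    return dst(v.real, type=1, norm='ortho') + 1j*dst(v.imag, type=1, norm='ortho')

def prop_split(v, dt, M):                        # split-step propagation, Eq. (4.40)
    lam = -1j*dt/(M*hbar)
    Ke, Pe2 = np.exp(lam*Tdiag), np.exp(lam/2*Wgrid)
    for _ in range(M):
        v = Pe2*dst1(Ke*dst1(Pe2*v))
    return v

X = np.sqrt(2.0/(nmax + 1))*np.sin(np.pi*np.outer(n, n)/(nmax + 1))
HP = X @ np.diag(Tdiag) @ X + np.diag(Wgrid)     # Hamiltonian, position basis

v0 = np.exp(-((xgrid - 5.0)/2.0)**2).astype(complex)
v0 /= np.linalg.norm(v0)
exact = expm(-1j*HP/hbar) @ v0                   # exact propagation over dt = 1
approx = prop_split(v0, 1.0, 1000)
print(np.linalg.norm(approx - exact))            # small Trotter error
```

The split-step propagator is also exactly norm-preserving, since both element-wise factors have unit modulus and the orthonormal DST-I is unitary.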
The Nest command “nests” a function call: for example, Nest[f,x,3]
calculates $f(f(f(x)))$. We use this on line 18 of In[489] to repeatedly propagate
by small time steps via the Trotter approximation. Since this algorithm internally
calculates the wavefunction at all the intermediate times $t = t_0 + \frac{m}{M}(t - t_0)$ for $m =
1, 2, 3, \ldots, M$, we can modify our program in order to follow this time evolution. To
achieve this we simply replace the Nest command with NestList, which is
similar to Nest but returns all intermediate results: for example, NestList[f,x,3]
returns the list $\{x, f(x), f(f(x)), f(f(f(x)))\}$. We replace the last line of the code
above with
above with
Transpose[{Range[0, M]/M*\[CapitalDelta]t, NestList[prop, v0, M]}]]

which now returns a list of pairs containing (i) the time and (ii) the wavefunction at
the corresponding time.

Example: Bouncing in the Gravity Well


As an example of particle dynamics, we return to the gravity well of Sect. 4.1.7.
Classically, if we drop a particle from height $x_0$ at $t = 0$ under the influence of
gravity, then its trajectory is $x(t) = x_0 - \frac{1}{2}gt^2$, until it reaches the earth's surface
($x = 0$) at time $t_1 = \sqrt{2x_0/g}$. We plot this classical bouncing trajectory for a scaled
starting height $x_0 = 15$ with
With[{x0 = 15, \[CapitalDelta]t = 50},
 t1 = Sqrt[2*x0/g];
 Plot[x0 - Mod[t, 2*t1, -t1]ˆ2/2, {t, 0, \[CapitalDelta]t}]]

In order to simulate a quantum particle bouncing along this trajectory, we start at
the same height $x_0 = 15$ but assume that the particle initially has a wavefunction of
root-mean-square width $\sigma = 1$: the initial state in the position basis is
x0 = 15; (* starting height *)
\[Sigma] = 1; (* starting width *)
t1 = Sqrt[2*x0/g]; (* classical bounce time *)
vv = Normalize[N[Exp[-((xgrid-x0)/(2*\[Sigma]))ˆ2]]]; (* starting state *)
ListLinePlot[Join[{{0,0}},Transpose[{xgrid,vv/Sqrt[\[CapitalDelta]]}],{{a,0}}],
PlotRange->All]

We propagate this particle in time for $\Delta t = 50$ time units, using $M = 1000$ time
steps, and plot the time-dependent density $\rho(x, t) = |\psi(x, t)|^2 = |\langle x|\psi(t)\rangle|^2$ using
the trick of Eq. (4.13):
With[{\[CapitalDelta]t = 50, M = 1000},
 \[Rho] = ArrayPad[Abs[propApprox[\[CapitalDelta]t, M, vv][[All,2]]]ˆ2/\[CapitalDelta], {{0, 0}, {1, 1}}];
 ArrayPlot[Reverse[Transpose[\[Rho]]], DataRange -> {{0, \[CapitalDelta]t}, {0, a}}]]

The orange overlaid curve shows the classical particle trajectory from In[490],
which the quantum particle follows approximately while self-interfering during the
reflections.
To study the correspondence between classical and quantum-mechanical motion
more quantitatively, we can calculate and plot time-dependent quantities such as the
time-dependent mean position: using Eq. (4.24),
$$\langle\hat{x}\rangle(t) = \langle\psi(t)|\hat{x}|\psi(t)\rangle \approx \langle\psi(t)|\left[\sum_{j=1}^{n_{\max}} |j\rangle x_j\langle j|\right]|\psi(t)\rangle = \sum_{j=1}^{n_{\max}} x_j\,|v_j(t)|^2. \tag{4.47}$$

With[{\[CapitalDelta]t = 50, M = 1000},
 ListLinePlot[{#[[1]], Abs[#[[2]]]ˆ2.xgrid} & /@ propApprox[\[CapitalDelta]t, M, vv]]]

Here the quantum deviations from the classical trajectory (orange) become
apparent.

4.1.10 1D Dynamics in a Time-Dependent Potential

While the direct propagation of Eq. 2.34 only works for time-independent Hamilto-
nians, the split-step method of In[489] can be extended to time-dependent Hamil-
tonians, in particular to time-dependent potentials W (x, t). For this, we assume that
the potential varies slowly enough in time that it is almost constant during a Trotter
step Δt/M; this assumption usually becomes exact as M → ∞.
propApprox[Wt_,
\[CapitalDelta]t_?NumericQ, M_Integer /; M >= 1,
v0_ /; VectorQ[v0, NumericQ]] :=
Module[{\[Lambda], Ke, propKin, propPot2, prop},
(* compute the \[Lambda] constant *)
\[Lambda] = -I*N[\[CapitalDelta]t/(M*\[HBar])];
(* compute the diagonal elements of exp[\[Lambda]*T] *)
Ke = Exp[\[Lambda]*Range[nmax]ˆ2*\[Pi]ˆ2/2];
(* propagate by a full time-step with T *)
propKin[v_] := FourierDST[Ke*FourierDST[v, 1], 1];
(* propagate by a half time-step with V *)
(* evaluating the potential at time t *)
propPot2[t_, v_] := Exp[\[Lambda]/2*(Wt[#,t]&/@xgrid)]*v;
(* propagate by a full time-step by H=T+V *)
(* using the Trotter approximation *)
(* starting at time t *)
prop[v_, t_] := propPot2[t+3\[CapitalDelta]t/(4M), propKin[propPot2[t+\[CapitalDelta]t/(4M), v]]];
(* step-by-step propagation *)
Transpose[{Range[0, M]/M*\[CapitalDelta]t, FoldList[prop, v0, Range[0,M-1]/M*\[CapitalDelta]t]}]]

• The definition of propApprox now needs a time-dependent potential Wt[x,t]
that it can evaluate as the propagation proceeds. This potential must be specified
as a pure function with two arguments, as in the example below.
• The exponentials for the potential propagation, calculated once-and-for-all on
line 11 of In[489], are now re-calculated in each call of the propPot2
function.
• In the Trotter propagation step of Eq. 4.40 we evaluate the potential twice in each
propagation interval [t, t + Δt/M]: once at t + Δt/(4M) for the first half-step
with the potential operator V̂, and once at t + 3Δt/(4M) for the second half-step.
• On line 19 of In[498] we have replaced NestList by FoldList, which
is more flexible: for example, FoldList[f,x,{a,b,c}] calculates the list
{x, f (x, a), f ( f (x, a), b), f ( f ( f (x, a), b), c)}. By giving the list of propaga-
tion interval starting times as the last argument of FoldList, the prop func-
tion is called repeatedly, with the current interval starting time as the second
argument.
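As an illustration in a different language, the same Trotter scheme can be sketched in Python with SciPy's orthonormal type-I DST (which is its own inverse); units ℏ = m = a = 1 and all function names are our own, not part of the book's code:

```python
import numpy as np
from scipy.fft import dst

# Split-step propagation with a time-dependent potential W(x, t):
# half potential step at t + dt/(4M), full kinetic step, half potential
# step at t + 3dt/(4M), repeated M times (hbar = m = a = 1 assumed).
nmax = 128
Delta = 1.0/(nmax + 1)
xgrid = np.arange(1, nmax + 1)*Delta
T = np.arange(1, nmax + 1)**2 * np.pi**2/2      # kinetic energies of sine modes

def prop_approx(Wt, dt, M, v0):
    lam = -1j*dt/M
    Ke = np.exp(lam*T)
    v = v0.astype(complex)
    for k in range(M):
        t = k*dt/M
        v = np.exp(lam/2*Wt(xgrid, t + dt/(4*M)))*v       # half-step with V
        v = dst(Ke*dst(v, type=1, norm='ortho'),
                type=1, norm='ortho')                      # full step with T
        v = np.exp(lam/2*Wt(xgrid, t + 3*dt/(4*M)))*v     # half-step with V
    return v

Wt = lambda x, t: 30.0*x*(1 + 0.1*np.sin(2.0*t))          # illustrative potential
v0 = np.exp(-((xgrid - 0.5)/(2*0.05))**2)
v0 = v0/np.linalg.norm(v0)
v = prop_approx(Wt, 1.0, 200, v0)
```

Every factor is unitary (pure phases and an orthonormal transform), so the norm of v is conserved to machine precision.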

As an example, we calculate the time-dependent density profile under the same
conditions as above, except that the gravitational acceleration is modulated
periodically: W(x, t) = W(x)·[1 + A·sin(ωt)]. The oscillation frequency ω = π/t1 is
chosen to drive the bouncing particle resonantly and enhance its amplitude. This
time-dependent potential is passed as the first argument to propApprox:

With[{A = 0.1, \[Omega] = \[Pi]/t1, \[CapitalDelta]t = 50, M = 1000},
Wt[x_, t_] = W[x]*(1 + A*Sin[\[Omega]*t]);
\[Rho] = ArrayPad[Abs[propApprox[Wt,\[CapitalDelta]t,M,vv][[All,2]]]ˆ2/\[CapitalDelta], {{0, 0}, {1, 1}}];
ArrayPlot[Reverse[Transpose[\[Rho]]], DataRange -> {{0, \[CapitalDelta]t}, {0, a}}]]

The increase in bouncing amplitude can be seen clearly in this density plot.
Exercises
Q4.9 Convince yourself that the Trotter expansion of Eq. 4.39 is really necessary,
i.e., that e^(X+Y) ≠ e^X e^Y if X and Y do not commute. Hint: use two concrete
non-commuting objects X and Y, for example two random 2×2 matrices as
generated with RandomReal[{0,1},{2,2}].
Q4.10 Given a particle moving in the range x ∈ [0, a] with the Hamiltonian

Ĥ = −(ℏ²/2m) d²/dx² + W0 sin(10πx/a),    (4.48)
compute its time-dependent wavefunction starting from a “moving Gaussian”
ψ(t = 0) ∝ e^(−(x−a/2)²/(4σ²)) e^(ikx) with σ = 0.05a and k = 100/a. Study ⟨x̂⟩(t)
using first W0 = 0 and then W0 = 5000 ℏ²/(ma²). Hint: use natural units such that
a=m=\[HBar]=1 for simplicity.

4.2 Many Particles in One Dimension: Dynamics with the Non-linear Schrödinger Equation

The advantage of the split-step evolution of Eq. 4.40 becomes particularly clear when
the system’s energy depends on the wavefunction in a more complicated way than
in the usual time-independent Schrödinger equation. A widely used example is the
nonlinear energy functional¹⁰


E[ψ] = −(ℏ²/2m) ∫_{−∞}^{∞} dx ψ*(x)ψ″(x) + ∫_{−∞}^{∞} dx V(x)|ψ(x)|² + (κ/2) ∫_{−∞}^{∞} dx |ψ(x)|⁴
     = Ekin[ψ] + Epot[ψ] + Eint[ψ],    (4.49)
in which the last term describes the mean-field interactions between N particles
that are all in wavefunction ψ(x) (normalized to ∫_{−∞}^{∞} dx |ψ(x)|² = 1), and which
are therefore in a joint product wavefunction ψ(x)^⊗N (see Eq. 2.39). Each particle
sees a potential Vint(x) = (κ/2)|ψ(x)|² generated by the average density (N − 1)|ψ(x)|²
of other particles with the same wavefunction, usually through collisional interactions.
In three dimensions, the coefficient κ = (N − 1) × 4πℏ²as/m approximates
the mean-field s-wave scattering between a particle and the (N − 1) other particles,
with s-wave scattering length as (see Sect. 4.4); in the present one-dimensional
example, no such identification is made.
In order to find the ground state (energy minimum) of Eq. 4.49 under the constraint
of wavefunction normalization ∫_{−∞}^{∞} dx |ψ(x)|² = 1, we use the Lagrange
multiplier¹¹ method: using the Lagrange multiplier μ called the chemical potential,
we conditionally minimize the energy with respect to the wavefunction by setting its
functional derivative¹²
δ/δψ*(x) [E[ψ] − μ ∫_{−∞}^{∞} dx |ψ(x)|²] = −(ℏ²/m)ψ″(x) + 2V(x)ψ(x) + 2κ|ψ(x)|²ψ(x) − 2μψ(x) = 0    (4.50)
to zero. This yields the non-linear Schrödinger equation
 
[−(ℏ²/2m) d²/dx² + Veff(x)] ψ(x) = μψ(x)  with  Veff(x) = V(x) + κ|ψ(x)|²,    (4.51)

also called the Gross–Pitaevskii equation for the description of dilute Bose–Einstein
condensates. By analogy to the linear Schrödinger equation, it also has a time-
dependent form for the description of Bose–Einstein condensate dynamics,
 
iℏ ∂ψ(x,t)/∂t = [−(ℏ²/2m) ∂²/∂x² + Veff(x,t)] ψ(x,t)  with  Veff(x,t) = V(x,t) + κ|ψ(x,t)|².    (4.52)

For any κ ≠ 0 there is no solution of the form of Eq. 4.38. But the split-step
method of Eq. 4.40 can still be used to simulate Eq. 4.52 because the

10 A functional is an operation that calculates a number from a given function. For example, E[ψ] :
L 2 → R converts a wavefunction ψ ∈ L 2 into an energy E ∈ R. See https://en.wikipedia.org/wiki/
Functional_(mathematics).
11 See https://en.wikipedia.org/wiki/Lagrange_multiplier.
12 See https://en.wikipedia.org/wiki/Functional_derivative.
wavefunction-dependent effective potential Veff(x, t) is still diagonal in the position
representation. We extend the Mathematica code of In[498] by modifying
the propPot2 method to include a non-linear term with prefactor \[Kappa] (added as
an additional argument to the propApprox function), and do not forget that the
wavefunction at grid point x_j is ψ(x_j) = v_j/√Δ:
propApprox[Wt_, \[Kappa]_?NumericQ, \[CapitalDelta]t_?NumericQ, M_Integer /; M >= 1,
v0_ /; VectorQ[v0, NumericQ]] :=

and
propPot2[t_, v_] := Exp[\[Lambda]/2*((Wt[#,t]&/@xgrid) + \[Kappa]*Abs[v]ˆ2/\[CapitalDelta])]*v;
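In Python the same modification is a one-line change to the potential half-step; a hedged sketch (assuming ℏ = m = a = 1 and our own function names):

```python
import numpy as np

# Nonlinear potential half-step: the effective potential W(x,t) + kappa*|psi|^2
# is diagonal in the position basis, with psi_j = v_j/sqrt(Delta).
# Units hbar = m = a = 1 are assumed; names are illustrative.
nmax = 64
Delta = 1.0/(nmax + 1)
xgrid = np.arange(1, nmax + 1)*Delta

def prop_pot2(Wt, kappa, lam, t, v):
    return np.exp(lam/2*(Wt(xgrid, t) + kappa*np.abs(v)**2/Delta))*v

Wt = lambda x, t: 30.0*x                       # gravity-like well, illustrative
v = np.exp(-((xgrid - 0.5)/(2*0.05))**2).astype(complex)
v = v/np.linalg.norm(v)
w = prop_pot2(Wt, -3.0, -1j*1e-3, 0.0, v)      # one nonlinear half-step
```

Because the nonlinear term is real, this half-step is still a pure phase: the density |v_j|² is unchanged and the norm is preserved, so real-time propagation remains unitary even with the nonlinearity.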

As an example, we plot the time-dependent density for the time-independent
gravitational well W(x, t) = mgx and κ = −3·(gℏ⁴/m)^(1/3) (attractive interaction), κ = 0
(no interaction), κ = +3·(gℏ⁴/m)^(1/3) (repulsive interaction):
With[{\[Kappa] = -3 * (g*\[HBar]ˆ4/m)ˆ(1/3), \[CapitalDelta]t = 50, M = 10ˆ3},
\[Rho] = ArrayPad[Abs[propApprox[W[#1]&,\[Kappa],\[CapitalDelta]t,M,vv][[All,2]]]ˆ2/\[CapitalDelta],{{0,0},{1,1}}];
ArrayPlot[Reverse[Transpose[\[Rho]]], DataRange -> {{0, \[CapitalDelta]t}, {0, a}}]]

Observations:
• The non-interacting case (κ = 0) shows a slow broadening and decoherence of
the wavepacket.
• Attractive interactions (κ < 0) make the wavepacket collapse to a tight spot and
bounce almost like a classical particle.
• Repulsive interactions (κ > 0) make the wavepacket broader, which slows down
its decoherence.
Exercises
Q4.11 Dimensionless problem (a=m=\[HBar]=1): Given a particle moving in the range
x ∈ [0, 1] with the non-linear Hamiltonian
Ĥ = −(1/2) d²/dx² + W(x) + κ|ψ(x)|²  with  W(x) = Ω[((x − 1/2)/δ)² − 1]²,    (4.53)

do the following calculations:



1. Plot the potential W(x) for Ω = 1 and δ = 1/4 (use κ = 0). What are the main
characteristics of this potential? Hint: compute W(1/2), W″(1/2), W(1/2 ± δ),
W″(1/2 ± δ).
2. Calculate and plot the time-dependent density |ψ(x, t)|² for Ω = 250, δ = 1/4,
and κ = 0, starting from ψ0(x) ∝ exp[−((x − x0)/(2σ))²] with x0 = 0.2694 and
σ = 0.0554. Calculate the probabilities for finding the particle in the left
half (x < 1/2) and in the right half (x > 1/2) up to t = 20. What do you observe?
3. What do you observe for κ = 0.5? Why?

4.2.1 Imaginary-Time Propagation for Finding the Ground State of the Non-linear Schrödinger Equation [Supplementary Material 4]

In the previous section we have looked at the dynamical evolution of a Bose–Einstein
condensate with the time-dependent nonlinear Schrödinger Eq. 4.52, which could
be performed with minimal modifications to previous Mathematica code. The time-
independent nonlinear Schrödinger Eq. 4.51, on the other hand, seems at first sight
inaccessible to our established methods: it is an operator eigenvalue equation, with
the operator acting non-linearly on the wavefunction and thus invalidating the matrix
diagonalization method of Sect. 2.2. How can we determine the ground state of Eq.
4.51?
You may remember from statistical mechanics that at temperature T , the density
operator of a system governed by a Hamiltonian Ĥ is

ρ̂(β) = e^(−βĤ)/Z(β)    (4.54)

with β = 1/(kB T) the reciprocal temperature in terms of the Boltzmann constant
kB = 1.3806488(13) × 10⁻²³ J/K. The partition function Z(β) = Tr e^(−βĤ) ensures
that the density operator has the correct norm, Tr ρ̂(β) = 1.
We know that at zero temperature the system will be in its ground state |γ⟩,¹³

lim_{β→∞} ρ̂(β) = |γ⟩⟨γ|.    (4.55)

If we multiply this equation by an arbitrary state |ψ⟩ from the right, we find

lim_{β→∞} ρ̂(β)|ψ⟩ = |γ⟩⟨γ|ψ⟩.    (4.56)

13 For simplicity we assume here that the ground state is non-degenerate.



Assuming that ⟨γ|ψ⟩ ≠ 0 (which is true for almost all states |ψ⟩), the ground state
is therefore

|γ⟩ = lim_{β→∞} ρ̂(β)|ψ⟩ / ⟨γ|ψ⟩ = (1/⟨γ|ψ⟩) × lim_{β→∞} [1/Z(β)] e^(−βĤ)|ψ⟩.    (4.57)

This means that if we take almost any state |ψ⟩ and calculate lim_{β→∞} [1/Z(β)] e^(−βĤ)|ψ⟩,
we find a state that is proportional to the ground state (the factors 1/⟨γ|ψ⟩ and 1/Z(β)
are merely scalar prefactors). But we already know how to do this: the wavefunction
e^(−βĤ)|ψ⟩ is calculated from |ψ⟩ by imaginary-time propagation. In fact the split-step
algorithm of Sect. 4.1.9 remains valid if we replace i(t − t0)/ℏ → β. The advantage
of Eq. 4.57 over the matrix method of Sect. 2.2 is that the former can be implemented
even if the Hamiltonian depends on the wavefunction, as in Eq. 4.51. The only
caveat is that, while regular time propagation (Sect. 4.1.9) is unitary, imaginary-time
propagation is not. The wavefunction must therefore be re-normalized after each
imaginary-time evolution step (with the Normalize function).
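The limit of Eq. 4.57 can be checked on a small matrix; a minimal NumPy sketch (the 3×3 Hamiltonian is an arbitrary example of ours; for numerical stability the spectral decomposition with the spectrum shifted by the ground-state energy replaces e^(−βĤ), which only changes the normalization):

```python
import numpy as np

# Check of Eq. 4.57 on a small example: normalizing e^{-beta*H}|psi> for large
# beta yields the ground state (up to sign). H is an arbitrary symmetric matrix.
H = np.array([[0.0, 0.3, 0.1],
              [0.3, 1.0, 0.2],
              [0.1, 0.2, 2.0]])
evals, evecs = np.linalg.eigh(H)
gs = evecs[:, 0]                               # exact ground state

psi = np.array([1.0, 1.0, 1.0])                # (almost) arbitrary starting state
beta = 50.0
# e^{-beta*(H - E0)} |psi> via the spectral decomposition (E0 shift = rescaling)
phi = evecs @ (np.exp(-beta*(evals - evals[0])) * (evecs.T @ psi))
phi = phi/np.linalg.norm(phi)

overlap = abs(gs @ phi)                        # approaches 1 as beta grows
```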
To implement this method of calculating the ground state by imaginary-time
propagation, we set β = M · δβ and modify Eq. 4.57 to

|γ⟩ ∝ lim_{M·δβ→∞} e^(−M·δβ·Ĥ)|ψ⟩ = lim_{M·δβ→∞} [e^(−δβ·Ĥ)]^M |ψ⟩ ≈ lim_{δβ→0} lim_{M·δβ→∞} [e^(−(δβ/2)V̂) e^(−δβ·T̂) e^(−(δβ/2)V̂)]^M |ψ⟩,    (4.58)

where the approximation is the Trotter expansion of Eq. 4.39. In practice we choose
a small but finite “imaginary-time” step δβ, and keep multiplying the wavefunction
by e^(−(δβ/2)V̂) e^(−δβ·T̂) e^(−(δβ/2)V̂) until the normalized wavefunction no
longer changes and the infinite-β limit (M·δβ → ∞) has effectively been reached.
groundstate[g_?NumericQ, \[Delta]\[Beta]_?NumericQ, tolerance_: 10ˆ-10] :=
Module[{Ke, propKin, propPot2, prop, v0, \[Gamma]},
(* compute the diagonal elements of exp[-\[Delta]\[Beta]*T] *)
Ke = Exp[-\[Delta]\[Beta]*Range[nmax]ˆ2*\[Pi]ˆ2/2] //N;
(* propagate by a full imaginary-time-step with T *)
propKin[v_] := Normalize[FourierDST[Ke*FourierDST[v,1],1]];
(* propagate by a half imaginary-time-step with V *)
propPot2[v_] := Normalize[Exp[-\[Delta]\[Beta]/2*(Wgrid + g*(nmax+1)*Abs[v]ˆ2)]*v];
(* propagate by a full imaginary-time-step by *)
(* H=T+V using the Trotter approximation *)
prop[v_] := propPot2[propKin[propPot2[v]]];
(* random starting point *)
v0 = Normalize@RandomComplex[{-1-I,1+I}, nmax];
(* propagation to the ground state *)
\[Gamma] = FixedPoint[prop,v0,SameTest->Function[{v1,v2},Norm[v1-v2]<tolerance]];
(* return the ground-state coefficients *)
\[Gamma]]

The last argument, tolerance, is optional and is given the default value 10^-10 if not
specified (see Sect. 1.6.5). The FixedPoint function is used to apply the imaginary-
time propagation until the result no longer changes (two consecutive results are
considered equal if the function given as SameTest returns true when applied to
these two results).
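The same imaginary-time loop can be sketched in Python for the simplest case, the infinite square well with no interaction, where the exact ground-state energy π²ℏ²/(2ma²) = π²/2 (in units a = m = ℏ = 1) must be recovered; SciPy's orthonormal type-I DST stands in for FourierDST, and the names are our own:

```python
import numpy as np
from scipy.fft import dst

# Imaginary-time propagation to the ground state of the infinite square well
# (a = m = hbar = 1, no interaction), with re-normalization after every step.
nmax = 40
dbeta = 1e-3
T = np.arange(1, nmax + 1)**2 * np.pi**2/2     # kinetic energies of sine modes
Ke = np.exp(-dbeta*T)
Wgrid = np.zeros(nmax)                         # square well: no extra potential

normalize = lambda u: u/np.linalg.norm(u)

def prop(v):
    v = normalize(np.exp(-dbeta/2*Wgrid)*v)                    # half-step with V
    v = normalize(dst(Ke*dst(v, type=1, norm='ortho'),
                      type=1, norm='ortho'))                   # full step with T
    return normalize(np.exp(-dbeta/2*Wgrid)*v)                 # half-step with V

rng = np.random.default_rng(0)
v = normalize(rng.normal(size=nmax))           # random starting point
for _ in range(100000):
    w = prop(v)
    converged = np.linalg.norm(w - v) < 1e-13
    v = w
    if converged:
        break

E = T @ np.abs(dst(v, type=1, norm='ortho'))**2   # ground-state energy
```

E converges to π²/2 ≈ 4.9348; with an extra potential or a nonlinearity one adds the corresponding terms inside the half-steps, exactly as in the Mathematica code.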
Multiplying Eq. 4.51 by ψ ∗ (x) and integrating over x gives
μ = −(ℏ²/2m) ∫_{−∞}^{∞} dx ψ*(x)ψ″(x) + ∫_{−∞}^{∞} dx V(x)|ψ(x)|² + g ∫_{−∞}^{∞} dx |ψ(x)|⁴
  = Ekin[ψ] + Epot[ψ] + 2Eint[ψ],    (4.59)

which is very similar to Eq. 4.49 apart from a factor of two for E int . We use this
to calculate the total energy and the chemical potential in In[502] by replacing
lines 16ff with
(* energy components *)
Ekin = \[Pi]ˆ2/2*Range[nmax]ˆ2.Abs[FourierDST[\[Gamma],1]]ˆ2;
Epot = Wgrid.Abs[\[Gamma]]ˆ2;
Eint = (g/2)(nmax+1)*Total[Abs[\[Gamma]]ˆ4];
(* total energy *)
Etot = Ekin + Epot + Eint;
(* chemical potential *)
\[Mu] = Ekin + Epot + 2*Eint;
(* return energy, chemical potential, coefficients *)
{Etot, \[Mu], \[Gamma]}]

and adding the local variables Ekin, Epot, Eint, Etot, and \[Mu] on line 2.
As an example we calculate the ground-state density for the gravity well of Sect.
4.1.7 with three different values of the interaction strength κ [in units of (gℏ⁴/m)^(1/3)]:
With[{\[Kappa] = 3 * (g*\[HBar]ˆ4/m)ˆ(1/3), \[Delta]\[Beta] = 10ˆ-4},
{Etot, \[Mu], \[Gamma]} = groundstate[\[Kappa], \[Delta]\[Beta]];
ListLinePlot[Join[{{0, 0}}, Transpose[{xgrid,Abs[\[Gamma]]ˆ2/\[CapitalDelta]}], {{a, 0}}],
PlotRange -> All, PlotLabel -> {Etot, \[Mu]}]]

Note that for κ = 0 the Gross–Pitaevskii equation is the Schrödinger equation,
and the chemical potential is equal to the total energy, matching the exact result of
Out[450].
Exercises
Q4.12 Dimensionless problem (a=m=\[HBar]=1): Given a particle moving in the range
x ∈ [0, 1] with the non-linear Hamiltonian
 
Ĥ = −(1/2) d²/dx² + 2500 (x − 1/2)² + κ|ψ(x)|²,    (4.60)

do the following calculations:



1. For κ = 0 calculate the exact ground state |ζ⟩ (assuming that the particle can
move in the whole domain x ∈ ℝ) and its energy eigenvalue. Hint: assume
ζ(x) = exp[−((x − 1/2)/(2σ))²]/√(σ√(2π)) and find the value of σ that minimizes
⟨ζ|Ĥ|ζ⟩.
2. Calculate the ground state lim_{β→∞} e^(−βĤ)|ζ⟩ and its chemical potential by
imaginary-time propagation (with normalization of the wavefunction after
each propagation step), using the code given above.
3. Plot the ground-state density for different values of κ.
4. Plot the total energy and the chemical potential as functions of κ.

4.3 Several Particles in One Dimension: Interactions

In Sect. 4.2 we have studied a simple mean-field description of many-particle systems,
with the advantage of simplicity and the disadvantage of not describing inter-particle
correlations. Here we use a different approach that captures the full quantum
mechanics of many-particle systems (including correlations), with the disadvantage of much
ics of many-particle systems (including correlations), with the disadvantage of much
increased calculation size.
We have seen in Sect. 2.4.2 how to describe quantum-mechanical systems with
more than one degree of freedom. This method can be used for describing several
particles moving in one dimension. In the following we look at two examples of
interacting particles.
When more than one particle is present in a system, we must distinguish
between bosons and fermions. Whenever the Hamiltonian is symmetric under parti-
cle exchange (which is the case in this section), each one of its eigenstates can be
associated with an irreducible representation of the particle permutation group. For
two particles, the only available choices are the symmetric and the antisymmetric irre-
ducible representations, and therefore every numerically calculated eigenstate can
be labeled as either bosonic (symmetric) or fermionic (antisymmetric). For more par-
ticles, however, other irreducible representations exist,14 meaning that some numer-
ically calculated eigenstates of the Hamiltonian may not be physical at all because
they are neither bosonic (fully symmetric) nor fermionic (fully antisymmetric).

4.3.1 Two Identical Particles in One Dimension with Contact Interaction [Supplementary Material 5]

We first look at two identical particles moving in a one-dimensional square well
of width a and interacting through a contact potential Vint(x1, x2) = κ × δ(x1 − x2).

14 See https://en.wikipedia.org/wiki/Irreducible_representation.

Such potentials are a good approximation of the s-wave scattering interactions taking
place in cold dilute gases. The Hamiltonian of this system is¹⁵

Ĥ = −(ℏ²/2m)(∂²/∂x1² + ∂²/∂x2²) + V(x1) + V(x2) + κ δ(x1 − x2) = T̂ + V̂ + Ĥint,    (4.61)

where V (x) is the single-particle potential (as in Sect. 4.1) and κ is the interaction
strength, often related to the s-wave scattering length as . For the time being we do
not need to specify whether the particles are bosons or fermions.
We describe this system with the tensor-product basis constructed from two finite-resolution
position basis sets:

|j1, j2⟩ = |j1⟩ ⊗ |j2⟩  for j1, j2 ∈ {1, 2, 3, …, nmax}.    (4.62)

Most of the matrix representations of the terms in Eq. 4.61 are constructed as tensor
products of the matrix representations of the corresponding single-particle repre-
sentations since T̂ = T̂1 ⊗ 1 + 1 ⊗ T̂2 and V̂ = V̂1 ⊗ 1 + 1 ⊗ V̂2 . The only new
element is the interaction Hamiltonian Ĥint . Remembering that its formal operator
definition is
Ĥint = κ ∫_{−∞}^{∞} dx1 dx2 (|x1⟩ ⊗ |x2⟩) δ(x1 − x2) (⟨x1| ⊗ ⟨x2|)    (4.63)

(while Eq. 4.61 is merely a shorthand notation), we calculate its matrix elements in
the finite-precision position basis with

⟨j1, j2|Ĥint|j1′, j2′⟩ = κ ∫_{−∞}^{∞} dx1 dx2 ⟨j1|x1⟩⟨j2|x2⟩ δ(x1 − x2) ⟨x1|j1′⟩⟨x2|j2′⟩ = κ ∫₀^a dx ϑ_{j1}(x) ϑ_{j2}(x) ϑ_{j1′}(x) ϑ_{j2′}(x).    (4.64)

These quartic overlap integrals can be calculated by a four-dimensional type-I discrete
sine transform (see Eq. 4.20 and Q4.13),

∫₀^a dx ϑ_{j1}(x) ϑ_{j2}(x) ϑ_{j3}(x) ϑ_{j4}(x) = (1/(2a)) Σ_{n1,n2,n3,n4=1}^{nmax} X_{n1 j1} X_{n2 j2} X_{n3 j3} X_{n4 j4}
  × [δ_{n1+n2,n3+n4} + δ_{n1+n3,n2+n4} + δ_{n1+n4,n2+n3} − δ_{n1,n2+n3+n4} − δ_{n2,n1+n3+n4} − δ_{n3,n1+n2+n4} − δ_{n4,n1+n2+n3}],    (4.65)

15 Notice that we write this Hamiltonian in an abbreviated form. The full operator form, with terms
similar to Eq. 4.2 but containing double integrals over space, is cumbersome to write (see Eq. 4.63).

which we evaluate in Mathematica very efficiently and all at once with


overlap4 = FourierDST[Table[KroneckerDelta[n1+n2,n3+n4]
+KroneckerDelta[n1+n3,n2+n4]+KroneckerDelta[n1+n4,n2+n3]
-KroneckerDelta[n1,n2+n3+n4]-KroneckerDelta[n2,n1+n3+n4]
-KroneckerDelta[n3,n1+n2+n4]-KroneckerDelta[n4,n1+n2+n3],
{n1,nmax},{n2,nmax},{n3,nmax},{n4,nmax}],1]/(2*a);

Mathematica code As before, we assume that the quantities a, m, and ℏ are
expressed in a suitable set of units (see Sect. 4.1.1). First we define the grid size and
the unit operator id acting on a single particle:
m = \[HBar] = 1; (* for example *)
a = 1; (* for example *)
nmax = 50; (* for example *)
\[CapitalDelta] = a/(nmax+1);
xgrid = Range[nmax]*\[CapitalDelta];
id = IdentityMatrix[nmax, SparseArray];

The total kinetic Hamiltonian is assembled via a Kronecker product (tensor product)
of the two single-particle kinetic Hamiltonians:
T1M = SparseArray[Band[{1,1}]->Range[nmax]ˆ2*\[Pi]ˆ2*\[HBar]ˆ2/(2*m*aˆ2)];
T1P = FourierDST[T1M, 1];
TP = KroneckerProduct[T1P, id] + KroneckerProduct[id, T1P];

The same for the potential Hamiltonian (here we assume no potential, that is, a square
well; but you may modify this):
W[x_] = 0;
Wgrid = W /@ xgrid;
V1P = SparseArray[Band[{1,1}]->Wgrid];
VP = KroneckerProduct[V1P, id] + KroneckerProduct[id, V1P];

The interaction Hamiltonian is constructed from In[504] with the
ArrayFlatten command, which flattens the combination basis set |j1⟩ ⊗ |j2⟩
into a single basis set |j1, j2⟩, or in other words, which converts the
nmax × nmax × nmax × nmax-matrix overlap4 into a nmax² × nmax²-matrix:
HintP = ArrayFlatten[overlap4];

The full Hamiltonian, in which the amplitude of the potential can be adjusted with
the prefactor Ω and the interaction strength with κ, is
HP[\[CapitalOmega]_, \[Kappa]_] = TP + \[CapitalOmega]*VP + \[Kappa]*HintP;
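The Kronecker-product construction is the same in any language; a small NumPy sketch of the kinetic part, whose spectrum must consist of all sums π²(n1² + n2²)/2 in our units (ℏ = m = a = 1 assumed):

```python
import numpy as np

# Two-particle kinetic Hamiltonian TP = T (x) 1 + 1 (x) T in the momentum basis;
# its eigenvalues are all sums pi^2 (n1^2 + n2^2)/2 (hbar = m = a = 1 assumed).
nmax = 4
T1 = np.diag(np.arange(1, nmax + 1)**2 * np.pi**2/2)
I = np.eye(nmax)
TP = np.kron(T1, I) + np.kron(I, T1)

expected = np.sort([np.pi**2*(n1**2 + n2**2)/2
                    for n1 in range(1, nmax + 1) for n2 in range(1, nmax + 1)])
spec = np.sort(np.linalg.eigvalsh(TP))
```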

We calculate eigenstates (the ground state, for example) with the methods already
described previously. The resulting wavefunctions are in the tensor-product basis of
Eq. 4.62, and they can be plotted with

plot2Dwf[\[Psi]_] := Module[{\[Psi]1,\[Psi]2},
(* make a square array of wavefunction values *)
\[Psi]1 = ArrayReshape[\[Psi], {nmax,nmax}];
(* add a frame of zeros at the edges *)
(* representing the boundary conditions *)
\[Psi]2 = ArrayPad[\[Psi]1, 1];
(* plot *)
ArrayPlot[Reverse[Transpose[\[Psi]2]]]]

Assuming that a given wavefunction \[Psi] is purely real-valued,16 we can plot it with
plot2Dwf[v/\[CapitalDelta]]

Here we plot the four lowest-energy wavefunctions for Ω = 0 (no potential, the
particles move in a simple infinite square well) and κ = +25 (repulsive interaction),
using nmax = 50 grid points, with the title of each panel showing the energy and
the symmetry (see below). White corresponds to zero wavefunction, red is positive
ψ(x1, x2) > 0, and blue is negative ψ(x1, x2) < 0.

We can see that in the ground state for κ > 0 the particles avoid each other, i.e.,
the ground-state wavefunction ψ(x1, x2) is reduced whenever x1 = x2.
And here are the lowest four energy eigenstate wavefunctions for κ = −10:

We can see that in the ground state for κ < 0 the particles attract each other, i.e.,
the ground-state wavefunction ψ(x1 , x2 ) is increased whenever x1 = x2 . We also
notice that the second-lowest state for κ = +25 is exactly equal to the fourth-lowest

16 The eigenvectors of Hermitian operators can always be chosen to have real coefficients. Proof:
Suppose that H·ψ = Eψ for a vector ψ with complex entries. Complex-conjugate the eigenvalue
equation, H*·ψ* = E*ψ*; but H† = H and E* = E, and hence ψ* is also an eigenvector of
H with eigenvalue E. Thus we can introduce two real-valued vectors ψr = ψ + ψ* and
ψi = i(ψ − ψ*), representing the real and imaginary parts of ψ, respectively, which are both
eigenvectors of H with eigenvalue E. Mathematica (as well as most other matrix diagonalization
algorithms) automatically detects Hermitian matrices and returns eigenvectors with real coefficients.

state for κ = −10: its wavefunction vanishes whenever x1 = x2 and thus the contact
interaction has no influence on this state.
In the above plots we have noted the symmetry of each eigenstate (symmetric
or antisymmetric with respect to particle exchange), which is calculated with the
integral
S[ψ] = ∫₀^a dx1 dx2 ψ*(x1, x2) ψ(x2, x1) = { +1 for symmetric states, ψ(x2, x1) = ψ(x1, x2),
                                           { −1 for antisymmetric states, ψ(x2, x1) = −ψ(x1, x2).    (4.66)

In Mathematica, the mirrored wavefunction ψ(x2, x1) is calculated with the
particle interchange operator Ξ̂, defined as
\[CapitalXi] = ArrayFlatten[SparseArray[{i_,j_,j_,i_} -> 1, {nmax, nmax, nmax, nmax}]];

such that ψ(x2, x1) = ⟨x2, x1|ψ⟩ = ⟨x1, x2|Ξ̂|ψ⟩. The symmetry of a state, defined
in Eq. 4.66, is therefore the expectation value of the Ξ̂ operator:

symmetry[v_] := Re[Conjugate[v].(\[CapitalXi].v)]

Here we show the numerical energy eigenvalues of the contact interaction Hamil-
tonian, colored according to their symmetry: red dots indicate symmetric states
(S = +1), whereas blue dots indicate antisymmetric states (S = −1).

In this representation it becomes even clearer that antisymmetric states are inde-
pendent of the contact interaction because their wavefunction vanishes whenever
x1 = x2 (see Q4.16).
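The particle-interchange operator and the symmetry expectation value of Eq. 4.66 are easy to mimic on a small grid; a NumPy sketch (the grid size n = 6 is an arbitrary choice of ours):

```python
import numpy as np

# Particle-interchange operator Xi on the product basis |j1, j2>: Xi|i,j> = |j,i>.
# Its eigenvalue +1 has multiplicity n(n+1)/2 (symmetric/bosonic subspace) and
# -1 has multiplicity n(n-1)/2 (antisymmetric/fermionic subspace).
n = 6
Xi = np.zeros((n*n, n*n))
for i in range(n):
    for j in range(n):
        Xi[i*n + j, j*n + i] = 1.0

vals = np.linalg.eigvalsh(Xi)
n_plus = int(np.sum(np.isclose(vals, 1.0)))
n_minus = int(np.sum(np.isclose(vals, -1.0)))

def symmetry(v):
    return np.real(np.conj(v) @ (Xi @ v))      # expectation value of Xi, Eq. 4.66

e = np.eye(n)
sym = np.kron(e[0], e[1]) + np.kron(e[1], e[0])     # symmetric test state
anti = np.kron(e[0], e[1]) - np.kron(e[1], e[0])    # antisymmetric test state
s_sym = symmetry(sym/np.linalg.norm(sym))
s_anti = symmetry(anti/np.linalg.norm(anti))
```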
Bosons and Fermions
The reason why every state in the above calculation is either symmetric or antisymmetric
with respect to particle interchange is that the Hamiltonian In[519]
commutes with the particle interchange operator In[522] (see Q4.15). As a result,
Ĥ and Ξ̂ can be diagonalized simultaneously. We notice that Ξ̂ has only eigenvalues ±1:

\[CapitalXi] //Eigenvalues //Counts


<| -1 -> 1225, 1 -> 1275 |>

The nmax(nmax + 1)/2 eigenvalues +1 correspond to eigenvectors that are symmetric
under particle interchange and form a basis of the symmetric subspace of the full
Hilbert space (bosonic states); the nmax(nmax − 1)/2 eigenvalues −1 correspond to
eigenvectors that are antisymmetric under particle interchange and form a basis of the
antisymmetric subspace of the full Hilbert space (fermionic states). By constructing
a matrix whose rows are the symmetric eigenvectors, we construct an operator Π̂s
that projects from the full Hilbert space onto the space of symmetric states,
\[Epsilon] = Transpose[Eigensystem[Normal[\[CapitalXi]]]];
\[CapitalPi]s = Select[\[Epsilon], #[[1]] == 1 &][[All, 2]] //Orthogonalize //SparseArray;

Similarly we construct a projector Π̂a onto the space of antisymmetric states,
\[CapitalPi]a = Select[\[Epsilon], #[[1]] == -1 &][[All, 2]] //Orthogonalize //SparseArray;

With the help of these projectors, we define the Hamiltonians of the system restricted
to the symmetric or antisymmetric subspace, respectively:
HPs[\[CapitalOmega]_, \[Kappa]_] = \[CapitalPi]s.HP[\[CapitalOmega],\[Kappa]].Transpose[\[CapitalPi]s];
HPa[\[CapitalOmega]_, \[Kappa]_] = \[CapitalPi]a.HP[\[CapitalOmega],\[Kappa]].Transpose[\[CapitalPi]a];

If the two particles in the present problem are indistinguishable bosons, then they can
only populate the symmetric states (red dots in the above eigenvalue plot). We calcu-
late the m lowest energy eigenstates of this symmetric subspace with the restricted
Hamiltonian HPs:
Clear[sgs];
sgs[\[CapitalOmega]_?NumericQ, \[Kappa]_?NumericQ, m_Integer /; m >= 1] := sgs[\[CapitalOmega], \[Kappa], m] =
{-#[[1]], #[[2]].\[CapitalPi]s} &[Eigensystem[-HPs[N[\[CapitalOmega]], N[\[Kappa]]], m,
Method -> {"Arnoldi", "Criteria" -> "RealPart", MaxIterations -> 10ˆ6}]]

Notice that we convert the calculated eigenstates back into the full Hilbert space by
multiplying the results with \[CapitalPi]s from the right.
In the same way, if the two particles in the present problem are indistinguishable
fermions, then they can only populate the antisymmetric states (blue dots in the above
eigenvalue plot). We calculate the m lowest energy eigenstates of this antisymmetric
subspace with the restricted Hamiltonian HPa:
Clear[ags];
ags[\[CapitalOmega]_?NumericQ, \[Kappa]_?NumericQ, m_Integer /; m >= 1] := ags[\[CapitalOmega], \[Kappa], m] =
{-#[[1]], #[[2]].\[CapitalPi]a} &[Eigensystem[-HPa[N[\[CapitalOmega]], N[\[Kappa]]], m,
Method -> {"Arnoldi", "Criteria" -> "RealPart", MaxIterations -> 10ˆ6}]]

As an example, here we calculate the six lowest energy eigenvalues of the full
Hamiltonian for Ω = 0 and κ = 5:

gs[0, 5, 6][[1]] //Sort


{15.2691, 24.674, 32.3863, 45.4849, 49.348, 58.1333}

The six lowest symmetric energy eigenvalues are


sgs[0, 5, 6][[1]] //Sort
{15.2691, 32.3863, 45.4849, 58.1333, 72.1818, 93.1942}

The six lowest antisymmetric energy eigenvalues are


ags[0, 5, 6][[1]] //Sort
{24.674, 49.348, 64.1524, 83.8916, 98.696, 123.37}

From Out[535] and Out[536] we can see which levels of Out[534] are
symmetric or antisymmetric.
Exercises
Q4.13 Show that Eq. 4.65 is plausible by setting nmax=3, evaluating In[504],
and then comparing its values to explicit integrals from Eq. 4.65 for several
tuples ( j1 , j2 , j3 , j4 ). Hint: use a=1 for simplicity.
Q4.14 In the problem of Sect. 4.3.1, calculate the expectation value of the inter-particle
distance ⟨x1 − x2⟩, and its variance ⟨(x1 − x2)²⟩ − ⟨x1 − x2⟩², in the
ground state as a function of κ (still keeping Ω = 0). Hint: Using Eq. 4.24,
the position operators x̂1 and x̂2 are approximately
x = SparseArray[Band[{1,1}]->xgrid];
x1 = KroneckerProduct[x, id];
x2 = KroneckerProduct[id, x];

Q4.15 Show in Mathematica (by explicit calculation) that the Hamiltonian In[519]
commutes with the particle interchange operator In[522]. Hint: use the
Norm function to calculate the matrix norm of the commutator.
Q4.16 Show in Mathematica (by explicit calculation) that the antisymmetric Hamil-
tonian In[529] does not depend on κ.
Q4.17 The contact-interaction problem of this section can be solved analytically if
W(x) = 0, which allows us to check the accuracy of the presented numerical
calculations. We will study the dimensionless (a=m=\[HBar]=1) Hamiltonian
Ĥ = −(1/2)(∂²/∂x1² + ∂²/∂x2²) + κ δ(x1 − x2).

1. The ground-state wavefunction will be of the form

ψ(x1, x2) = A × { cos[α(x1 + x2 − 1)] cos[β(x1 − x2 + 1)] − cos[α(x1 − x2 + 1)] cos[β(x1 + x2 − 1)]  if 0 ≤ x1 ≤ x2 ≤ 1,
                { cos[α(x2 + x1 − 1)] cos[β(x2 − x1 + 1)] − cos[α(x2 − x1 + 1)] cos[β(x2 + x1 − 1)]  if 0 ≤ x2 ≤ x1 ≤ 1.    (4.67)

Check that this wavefunction satisfies the boundary conditions ψ(x1, 0) =
ψ(x1, 1) = ψ(0, x2) = ψ(1, x2) = 0, that it is continuous across the boundary
x1 = x2 (i.e., that the two pieces of the wavefunction match up), and that
it satisfies the symmetries of the calculation box: ψ(x1, x2) = ψ(x2, x1) =
ψ(1 − x1, 1 − x2).
2. Insert this wavefunction into the time-independent Schrödinger equation.
Find the energy eigenvalue by assuming x1 ≠ x2. You should find
E = α² + β².
3. Express the Hamiltonian and the wavefunction in terms of the new coordinates
R = (x1 + x2)/√2 and r = (x1 − x2)/√2. Hints: ∂²/∂x1² + ∂²/∂x2² = ∂²/∂R² + ∂²/∂r²,
and δ(ux) = u⁻¹δ(x).
4. Integrate the Schrödinger equation, expressed in the (R, r) coordinates, over
r ∈ [−ε, ε] and take the limit ε → 0⁺. Verify that the resulting expression is
satisfied if α tan(α) = β tan(β) = κ/2. Hint: do the integration analytically
and use ∫_a^b dr ∂²f(r)/∂r² = f′(b) − f′(a).

5. The ground state is found by numerically solving α tan(α) = β tan(β) = κ/2.
Out of the many solutions of these equations, we choose the correct
ones for the ground state by specifying the starting point of the numerical
root solver:
Clear[a,b];
a[-\[Infinity]] = \[Pi]/2;
a[0] = \[Pi];
a[\[Infinity]] = 3\[Pi]/2;
a[\[Kappa]_?NumericQ] := a[\[Kappa]] = u /.
FindRoot[u*Tan[u]==\[Kappa]/2, {u,\[Pi]+ArcTan[\[Kappa]/(2\[Pi])]}]
b[-\[Infinity]] = I*\[Infinity];
b[0] = 0;
b[\[Infinity]] = \[Pi]/2;
b[\[Kappa]_ /; \[Kappa] >= 0] := b[\[Kappa]] = u /. FindRoot[u*Tan[u] == \[Kappa]/2,
{u, If[\[Kappa]<\[Pi], 1, \[Pi]/2 - \[Pi]/\[Kappa] + 2\[Pi]/\[Kappa]ˆ2]}]
b[\[Kappa]_ /; \[Kappa] < 0] := b[\[Kappa]] = I*u /. FindRoot[u*Tanh[u] == -\[Kappa]/2, {u,-\[Kappa]/2}]

Compare the resulting κ-dependent ground state energy to the numerically


calculated ground-state energies from In[519].
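The transcendental equations can equally be solved with a bracketing root finder; a Python sketch for the ground-state branches with κ ≥ 0 (the bracketing intervals are our choice; the attractive case κ < 0 needs the imaginary branch, as in the Mathematica code above). For κ = 0 it gives α = π and β = 0, so E = α² + β² = π² reproduces the non-interacting two-particle ground-state energy of the box:

```python
import numpy as np
from scipy.optimize import brentq

# Ground-state branches of u*tan(u) = kappa/2 (a = m = hbar = 1, kappa >= 0).
eps = 1e-9

def alpha(kappa):
    # root in (pi/2, 3*pi/2), where u*tan(u) runs from -inf to +inf
    return brentq(lambda u: u*np.tan(u) - kappa/2, np.pi/2 + eps, 3*np.pi/2 - eps)

def beta(kappa):
    # root in [0, pi/2); kappa < 0 would need u*tanh(u) = -kappa/2 instead
    if kappa == 0:
        return 0.0
    return brentq(lambda u: u*np.tan(u) - kappa/2, eps, np.pi/2 - eps)

E0 = alpha(0)**2 + beta(0)**2     # = pi^2 for the non-interacting case
```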

4.3.2 Two Particles in One Dimension with Arbitrary Interaction

Two particles in one dimension interacting via an arbitrary potential have a Hamilto-
nian very similar to Eq. 4.61, except that the interaction is now

Ĥint = Vint (x1 , x2 ), (4.68)

or, more explicitly as an operator in the Dirac position basis,

Ĥint = ∫₀^a dx1 dx2 (|x1⟩ ⊗ |x2⟩) Vint(x1, x2) (⟨x1| ⊗ ⟨x2|).    (4.69)
130 4 Quantum Motion in Real Space
As an example, for the Coulomb interaction we have Vint(x₁, x₂) = Q₁Q₂/(4πε₀|x₁ − x₂|), with Q₁ and Q₂ the electric charges of the two particles. For many realistic potentials, Vint only depends on |x₁ − x₂|.
In the finite-resolution position basis, the matrix elements of this interaction Hamiltonian can be approximated with a method similar to what we have already seen, for example in Sect. 4.1.4:

⟨j₁, j₂|Ĥint|j₁′, j₂′⟩ = ∫₀ᵃ∫₀ᵃ ϑ_j₁(x₁) ϑ_j₂(x₂) Vint(x₁, x₂) ϑ_j₁′(x₁) ϑ_j₂′(x₂) dx₁ dx₂
  ≈ Vint(x_j₁, x_j₂) ∫₀ᵃ∫₀ᵃ ϑ_j₁(x₁) ϑ_j₂(x₂) ϑ_j₁′(x₁) ϑ_j₂′(x₂) dx₁ dx₂ = δ_j₁,j₁′ δ_j₂,j₂′ Vint(x_j₁, x_j₂).   (4.70)

This approximation is easy to evaluate without the need for integration over basis
functions. But realistic interaction potentials are usually singular for x1 = x2 (con-
sider, for example, the Coulomb potential), and therefore the approximate Eq. 4.70
fails for the evaluation of the diagonal matrix elements ⟨j, j|Ĥint|j, j⟩. This problem cannot
be solved in all generality, and we can either resort to more accurate integration (as
in Sect. 4.3.1) or we can replace the true interaction potential with a less singular
version: for the Coulomb potential, we could for example use a truncated singularity
for |x| < δ for some small distance δ:
Vint(x) = Q₁Q₂/(4πε₀) × { 1/|x|  if |x| ≥ δ;   1/δ  if |x| < δ }   (4.71)
As long as the particles move at energies much smaller than Vint(±δ) = Q₁Q₂/(4πε₀δ), they cannot distinguish the true Coulomb potential from this truncated form.
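Tabulating such a truncated potential on the position grid is straightforward in any array language. The following sketch (in Python, as a cross-check outside the book's Mathematica code; the prefactor Q₁Q₂/(4πε₀) is set to 1 and the array names are ours) fills the diagonal interaction values Vint(x_j₁ − x_j₂) that enter the approximation of Eq. 4.70:

```python
import numpy as np

def v_trunc(x, delta, pref=1.0):
    """Truncated Coulomb potential of Eq. (4.71); pref plays the role of Q1*Q2/(4*pi*eps0)."""
    return pref/np.maximum(np.abs(x), delta)

nmax, a = 16, 1.0
delta = a/(nmax + 1)                             # the hint of Q4.18: delta = Delta
xgrid = a*np.arange(1, nmax + 1)/(nmax + 1)      # grid points x_j = a*j/(nmax+1)
# interaction values on the grid, the diagonal approximation of Eq. (4.70)
Vgrid = v_trunc(xgrid[:, None] - xgrid[None, :], delta)
```

The diagonal entries (x₁ = x₂) are capped at 1/δ instead of diverging, which is exactly what makes the diagonal matrix elements of Eq. 4.70 usable.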
Exercises
Q4.18 Consider two indistinguishable bosons in an infinite square well, interacting via the truncated Coulomb potential of Eq. 4.71. Calculate the expectation value of the inter-particle distance, ⟨x₁ − x₂⟩, and its variance, ⟨(x₁ − x₂)²⟩ − ⟨x₁ − x₂⟩², in the ground state as a function of the Coulomb interaction strength (attractive and repulsive). Hint: set δ = Δ = a/(nmax + 1) in Eq. 4.71.
Q4.19 Answer Q4.18 for two indistinguishable fermions.
4.4 One Particle in Several Dimensions [Supplementary Material 6]

An important application of the imaginary-time propagation method of Sect. 4.2.1 is the calculation of the shape of a three-dimensional Bose–Einstein condensate. In this section we use such a calculation as an example of how to extend single-particle lattice quantum mechanics to more spatial dimensions.
The non-linear Hamiltonian describing a three-dimensional Bose–Einstein condensate in a harmonic trap (to use a very common case) is

Ĥ = −(ħ²/2m)(∂²/∂x² + ∂²/∂y² + ∂²/∂z²) + (m/2)(ωₓ²x² + ω_y²y² + ω_z²z²) + (N − 1)(4πħ²aₛ/m)|ψ(x, y, z)|²,   (4.72)

where we have assumed that the single-particle wavefunction ψ(x, y, z) is normalized: ∫|ψ(x, y, z)|² dx dy dz = 1. As before, the contact interaction is described by the s-wave scattering length aₛ. We will call κ = 4πħ²aₛ/m the interaction constant, as in previous sections.
We perform this calculation in a square box, where |x| ≤ a/2, |y| ≤ a/2, and |z| ≤ a/2; we will need to choose a large enough so that the BEC fits into this box, but small enough so that we do not need an unreasonably large nmax for the description of its wavefunction. Notice that this box is shifted by a/2 compared to the [0 . . . a] boxes used so far; this does not influence the calculations in any way.
The ground state of the non-linear Hamiltonian of Eq. 4.72 can be found by three-
dimensional imaginary-time propagation, starting from (almost) any arbitrary state.
Here we assemble a Mathematica function groundstate that, given an imaginary
time step δβ, propagates a random initial state until the state is converged to the
ground state.
The units of the problem are dealt with as in Sect. 4.1.1, differing from In[409]ff
in that here we choose the length and time units freely:
LengthUnit = Quantity["Micrometers"]; (* choose freely *)
TimeUnit = Quantity["Seconds"]; (* choose freely *)
MassUnit = Quantity["ReducedPlanckConstant"]*TimeUnit/LengthUnit^2;
EnergyUnit = Quantity["ReducedPlanckConstant"]/TimeUnit;
\[HBar] = Quantity["ReducedPlanckConstant"]/(EnergyUnit*TimeUnit);

We will be considering N = 1000 ⁸⁷Rb atoms in a magnetic trap with trap frequencies ωₓ = 2π × 115 Hz and ω_y = ω_z = 2π × 540 Hz. The ⁸⁷Rb atoms are assumed to be in the |F = 1, M_F = −1⟩ hyperfine ground state, where their s-wave scattering length is aₛ = 100.4a₀ (with a₀ = 52.9177 pm the Bohr radius).
m = Quantity[86.909187, "AtomicMassUnit"]/MassUnit;
a = Quantity[10, "Micrometers"]/LengthUnit;
\[Omega]x = 2*\[Pi]*Quantity[115, "Hertz"]*TimeUnit;
\[Omega]y = 2*\[Pi]*Quantity[540, "Hertz"]*TimeUnit;
\[Omega]z = 2*\[Pi]*Quantity[540, "Hertz"]*TimeUnit;
as = Quantity[100.4, "BohrRadius"]/LengthUnit;
\[Kappa] = 4*\[Pi]*\[HBar]^2*as/m;

Next we define the grid on which the calculations will be done. In each Cartesian direc-
tion there are n max grid points x j = xgrid[[j]] on the interval [−a/2, +a/2]:
nmax = 50;
\[CapitalDelta] = a/(nmax+1);
xgrid = a*(Range[nmax]/(nmax+1) - 1/2);

We define the dimensionless harmonic-trap potential: the potential has its minimum
at the center of the calculation box, i.e., at x = y = z = 0.
W[x_,y_,z_] = m/2 * (\[Omega]xˆ2*xˆ2 + \[Omega]yˆ2*yˆ2 + \[Omega]zˆ2*zˆ2);

We only need the values of this potential on the grid points. To evaluate this, we build
a three-dimensional array whose element Wgrid[[jx,jy,jz]] is given by the
grid-point value W[xgrid[[jx]],xgrid[[jy]],xgrid[[jz]]]:
Wgrid=Table[W[xgrid[[jx]],xgrid[[jy]],xgrid[[jz]]],{jx,nmax},{jy,nmax},{jz,nmax}];

We could also define this more efficiently through functional programming:
Wgrid = Outer[W, xgrid, xgrid, xgrid];

The structure of the three-dimensional Wgrid array of potential values mirrors the structure of the wavefunction that we will be using: any wavefunction v will be an nmax × nmax × nmax array of coefficients in our finite-resolution position basis:
ψ(x, y, z) = Σ_{jx, jy, jz = 1}^{nmax} v[[jx,jy,jz]] ϑ_jx(x) ϑ_jy(y) ϑ_jz(z).   (4.73)

From Eq. 4.13 we find that on the three-dimensional grid points the wavefunction
takes the values

ψ(x_jx, x_jy, x_jz) = v[[jx,jy,jz]]/Δ^(3/2).   (4.74)

The norm of a wavefunction is

∫|ψ(x, y, z)|² dx dy dz = Σ_{jx, jy, jz = 1}^{nmax} |v[[jx,jy,jz]]|² = Norm[Flatten[v]]²,   (4.75)
from which we define a wavefunction normalization function
nn[v_] := v/Norm[Flatten[v]]
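The relation between the coefficient norm and the discretized norm integral of Eqs. (4.74)–(4.75) is easy to verify numerically; here is a quick sketch in Python (an independent cross-check, with variable names of our choosing):

```python
import numpy as np

nmax, a = 20, 1.0
Delta = a/(nmax + 1)
rng = np.random.default_rng(1)
v = rng.normal(size=(nmax, nmax, nmax))
v /= np.linalg.norm(v.ravel())              # the analogue of nn[v_] := v/Norm[Flatten[v]]
psi = v/Delta**1.5                          # grid values of the wavefunction, Eq. (4.74)
norm2 = np.sum(np.abs(psi)**2)*Delta**3     # discretized norm integral, Eq. (4.75)
```

The factor Δ³ of the volume element exactly cancels the Δ^(3/2) factors of Eq. (4.74), so normalizing the coefficient array is the same as normalizing the wavefunction.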

The ground state calculation then proceeds by imaginary-time propagation, with step
size δβ corresponding to an evolution e−δβ Ĥ per step. The calculation is done for N =
n particles. Remember that the FourierDST function can do multi-dimensional
discrete sine transforms, and therefore the kinetic-energy propagator can still be
evaluated very efficiently. The last argument, tolerance, is optional and is given
the value 10⁻⁶ if not specified (see Sect. 1.6.5).

groundstate[n_?NumericQ, \[Delta]\[Beta]_?NumericQ, tolerance_:10^(-6)] :=
Module[{Kn, Ke, propKin, propPot2, prop, v0, \[Gamma], Ekin, Epot, Eint, Etot, \[Mu]},
(* compute the diagonal elements of exp[-\[Delta]\[Beta]*T] *)
Kn = \[Pi]^2*\[HBar]^2/(2*m*a^2)*Table[nx^2+ny^2+nz^2,
{nx,nmax}, {ny,nmax}, {nz,nmax}];
Ke = Exp[-\[Delta]\[Beta]*Kn] //N;
(* propagate by a full imaginary-time-step with T *)
propKin[v_] := nn[FourierDST[Ke*FourierDST[v, 1], 1]];
(* propagate by a half imaginary-time-step with V *)
propPot2[v_] := nn[Exp[-(\[Delta]\[Beta]/2)*(Wgrid+\[Kappa]*(n-1)*Abs[v]^2/\[CapitalDelta]^3)]*v];
(* propagate by a full imaginary-time-step by *)
(* H=T+V using the Trotter approximation *)
prop[v_] := propPot2[propKin[propPot2[v]]];
(* random starting point *)
v0 = nn @ RandomVariate[NormalDistribution[], {nmax, nmax, nmax}];
(* propagation to the ground state *)
\[Gamma] = FixedPoint[prop, v0,
SameTest -> Function[{v1,v2}, Norm[Flatten[v1-v2]]<tolerance]];
(* energy components *)
Ekin = Flatten[Kn].Flatten[Abs[FourierDST[\[Gamma], 1]]^2];
Epot = Flatten[Wgrid].Flatten[Abs[\[Gamma]]^2];
Eint = (\[Kappa]/2)*(n-1)*Total[Flatten[Abs[\[Gamma]]^4]]/\[CapitalDelta]^3;
(* total energy *)
Etot = Ekin + Epot + Eint;
(* chemical potential *)
\[Mu] = Ekin + Epot + 2*Eint;
(* return energy, chemical potential, coefficients *)
{Etot, \[Mu], \[Gamma]}]
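The same Trotter split-step scheme is easy to prototype in one dimension, where the result can be checked against a known answer. The Python sketch below (our own toy example, not part of the book's supplementary material) propagates a random state of a harmonic oscillator in imaginary time using a type-I discrete sine transform, and converges to the ground-state energy ħω/2:

```python
import numpy as np
from scipy.fft import dst

hbar = m = a = 1.0
omega = 100.0                      # trap frequency; the ground-state width ~0.07 fits the box
nmax = 127
x = a*(np.arange(1, nmax + 1)/(nmax + 1) - 0.5)
W = 0.5*m*omega**2*x**2            # potential on the grid
Kn = np.pi**2*hbar**2/(2*m*a**2)*np.arange(1, nmax + 1)**2   # kinetic eigenvalues
db = 1e-4                          # imaginary-time step

def dst1(v):
    # orthonormal DST-I (the analogue of FourierDST[v, 1]); it is its own inverse
    return dst(v, type=1)/np.sqrt(2*(nmax + 1))

rng = np.random.default_rng(0)
v = rng.normal(size=nmax)
v /= np.linalg.norm(v)
for _ in range(20000):             # total imaginary time beta = 2
    v = np.exp(-db/2*W)*v                      # half-step with V
    v = dst1(np.exp(-db*Kn)*dst1(v))           # full step with T
    v = np.exp(-db/2*W)*v                      # half-step with V
    v /= np.linalg.norm(v)
E = Kn @ np.abs(dst1(v))**2 + W @ np.abs(v)**2
# E should be close to hbar*omega/2 = 50
```

The only differences from the 3D Mathematica code are the missing interaction term and the 1D transform; the normalize-propagate loop and the energy evaluation are structurally identical.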

As an example, we calculate the ground state for N = 1000 atoms and a time step of δβ = 10⁻⁴ time units, using the default convergence tolerance:
{Etot, \[Mu], \[Gamma]} = groundstate[1000, 10^(-4)];
{Etot, \[Mu]} * UnitConvert[EnergyUnit, "Joules"]
{6.88125*10^-31 J, 8.68181*10^-31 J}

A more common energy unit is the Hertz, arrived at via Planck’s constant:
{Etot, \[Mu]} * UnitConvert[EnergyUnit/Quantity["PlanckConstant"], "Hertz"]
{1038.51 Hz, 1310.25 Hz}

One way of plotting the ground-state density in 3D is as an iso-density surface. We plot the surface at half the peak density with
\[Rho] = Abs[\[Gamma]]^2/\[CapitalDelta]^3;
ListContourPlot3D[\[Rho],
DataRange -> a*(1/(nmax+1)-1/2)*{{-1,1},{-1,1},{-1,1}},
Contours -> {Max[\[Rho]]/2}, BoxRatios -> Automatic]

Here we show several such iso-density surfaces:

For more quantitative results we can, for example, calculate the expectation values X = ⟨x⟩, Y = ⟨y⟩, Z = ⟨z⟩, XX = ⟨x²⟩, YY = ⟨y²⟩, ZZ = ⟨z²⟩. We could define coordinate arrays as
xc = Table[xgrid[[jx]], {jx,nmax}, {jy,nmax}, {jz,nmax}];
yc = Table[xgrid[[jy]], {jx,nmax}, {jy,nmax}, {jz,nmax}];
zc = Table[xgrid[[jz]], {jx,nmax}, {jy,nmax}, {jz,nmax}];

but we define them more efficiently as follows:
ones = ConstantArray[1, nmax];
xc = Outer[Times, xgrid, ones, ones];
yc = Outer[Times, ones, xgrid, ones];
zc = Outer[Times, ones, ones, xgrid];

The desired expectation values are then computed with


X = Total[Flatten[xc * \[Rho]]];
Y = Total[Flatten[yc * \[Rho]]];
Z = Total[Flatten[zc * \[Rho]]];
XX = Total[Flatten[xc^2 * \[Rho]]];
YY = Total[Flatten[yc^2 * \[Rho]]];
ZZ = Total[Flatten[zc^2 * \[Rho]]];

The root-mean-square size of the BEC is calculated from these as the standard devi-
ations of the position operators in the three Cartesian directions:
{Sqrt[XX-X^2], Sqrt[YY-Y^2], Sqrt[ZZ-Z^2]} * LengthUnit
{1.58829 \[Mu]m, 0.417615 \[Mu]m, 0.417615 \[Mu]m}

4.4.1 Exercises

Q4.20 Take the BEC Hamiltonian of Eq. 4.72 in the absence of interactions (aₛ = 0) and calculate analytically the expectation values ⟨x²⟩, ⟨y²⟩, ⟨z²⟩ in the ground state.
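For the specific trap used in this section, the well-known harmonic-oscillator ground-state variance ⟨x²⟩ = ħ/(2mωₓ) (and cyclically for y and z) can be evaluated with a few lines; here sketched in Python with hand-typed SI constants, as an independent cross-check:

```python
import numpy as np

hbar = 1.054571817e-34                 # J*s
amu = 1.66053906660e-27                # kg, atomic mass unit
m = 86.909187*amu                      # mass of 87Rb
w = 2*np.pi*np.array([115.0, 540.0, 540.0])   # trap frequencies in rad/s

# non-interacting ground-state RMS sizes sqrt(<x^2>) = sqrt(hbar/(2*m*w))
sigma_um = np.sqrt(hbar/(2*m*w))*1e6   # in micrometers
# roughly [0.71, 0.33, 0.33] micrometers
```

These non-interacting sizes are noticeably smaller than the interacting RMS sizes {1.59, 0.42, 0.42} µm computed above, as expected for a repulsively interacting condensate.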

Q4.21 Take the BEC Hamiltonian of Eq. 4.72 in the limit of strong interactions (Thomas–Fermi limit), where the kinetic energy can be neglected. The Gross–Pitaevskii equation is then

[(m/2)(ωₓ²x² + ω_y²y² + ω_z²z²) + (N − 1)(4πħ²aₛ/m)|ψ(x, y, z)|²] ψ(x, y, z) = μψ(x, y, z),   (4.76)

which has two solutions:

|ψ(x, y, z)|² = 0   or   |ψ(x, y, z)|² = [μ − (m/2)(ωₓ²x² + ω_y²y² + ω_z²z²)] / [(N − 1)(4πħ²aₛ/m)].   (4.77)

Together with the conditions that |ψ(x, y, z)|² ≥ 0, that ψ(x, y, z) should be continuous, and that ∫|ψ(x, y, z)|² dx dy dz = 1, this gives us the Thomas–Fermi "inverted parabola" density

|ψ(x, y, z)|² = ρ₀ (1 − x²/Rₓ² − y²/R_y² − z²/R_z²)   if x²/Rₓ² + y²/R_y² + z²/R_z² ≤ 1,   and 0 if not,   (4.78)
which is nonzero only inside an ellipsoid with Thomas–Fermi radii

Rₓ = [15ħ²aₛ(N − 1)ω_y ω_z / (m²ωₓ⁴)]^(1/5) = [15κ(N − 1)ω_y ω_z / (4πmωₓ⁴)]^(1/5),   (4.79a)
R_y = [15ħ²aₛ(N − 1)ω_z ωₓ / (m²ω_y⁴)]^(1/5) = [15κ(N − 1)ω_z ωₓ / (4πmω_y⁴)]^(1/5),   (4.79b)
R_z = [15ħ²aₛ(N − 1)ωₓ ω_y / (m²ω_z⁴)]^(1/5) = [15κ(N − 1)ωₓ ω_y / (4πmω_z⁴)]^(1/5).   (4.79c)

The density at the origin of the ellipsoid is

ρ₀ = (1/8π) [225 m⁶ ωₓ² ω_y² ω_z² / (ħ⁶ aₛ³ (N − 1)³)]^(1/5) = [225 m³ ωₓ² ω_y² ω_z² / (512 π² κ³ (N − 1)³)]^(1/5)   (4.80)

and the chemical potential is

μ = (1/2)[225 m ħ⁴ aₛ² (N − 1)² ωₓ² ω_y² ω_z²]^(1/5) = [(225/(512π²)) m³ κ² (N − 1)² ωₓ² ω_y² ω_z²]^(1/5).   (4.81)
Using this Thomas–Fermi density profile, calculate the expectation values ⟨x²⟩, ⟨y²⟩, ⟨z²⟩ in the ground state of the Thomas–Fermi approximation.
Hints: Calculate ⟨x²⟩ using Eq. 4.78 without substituting Eqs. 4.79 and Eq. 4.80; do these substitutions only after having found the result. You can find ⟨y²⟩ and ⟨z²⟩ by analogy, without repeating the calculation.
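The scaled integral that appears in this calculation can be checked by brute force: in the scaled coordinates u = x/Rₓ, v = y/R_y, w = z/R_z, Eq. 4.78 becomes the weight (1 − u² − v² − w²) on the unit ball, and ⟨u²⟩ = ⟨x²⟩/Rₓ² can be estimated by Monte Carlo. A short Python sketch (our own numerical cross-check, not the exercise's intended analytic route):

```python
import numpy as np

rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(400000, 3))        # points in the enclosing cube
w = np.clip(1.0 - (pts**2).sum(axis=1), 0.0, None)    # scaled TF density, zero outside the ball
u2 = (pts[:, 0]**2*w).sum()/w.sum()                   # <u^2> = <x^2>/Rx^2
# u2 is close to 1/7
```

Because the anisotropy is absorbed into the rescaling, the same ratio holds for ⟨y²⟩/R_y² and ⟨z²⟩/R_z².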
Q4.22 Compare the numerical expectation values ⟨x²⟩, ⟨y²⟩, ⟨z²⟩ of our Mathematica code to the analytic results of Q4.20 and Q4.21. What is the maximum ⁸⁷Rb atom number N which allows a reasonably good description (in this specific trap) with the non-interacting solution? What is the minimum atom number which allows a reasonably good description with the Thomas–Fermi solution?
Chapter 5
Combining Spatial Motion and Spin

In this chapter we put together all the techniques studied so far: internal-spin degrees
of freedom (Chap. 3) and spatial (motional) degrees of freedom (Chap. 4) are com-
bined with the tensor-product formalism (Chap. 2). We arrive at a complete numerical

Electronic supplementary material The online version of this chapter (https://doi.org/10.1007/978-981-13-7588-0_5) contains supplementary material, which is available to authorized users.

© Springer Nature Singapore Pte Ltd. 2020
R. Schmied, Using Mathematica for Quantum Mechanics,
https://doi.org/10.1007/978-981-13-7588-0_5

description of interacting spin-ful particles moving through space. To showcase these powerful tools, we study Rashba coupling as well as the Jaynes–Cummings model.

5.1 One Particle in 1D with Spin

5.1.1 Separable Hamiltonian

The simplest problem combining a spatial and a spin degree of freedom in a meaningful way consists of a single spin-1/2 particle moving in one dimension in a state-selective potential:

Ĥ = −(ħ²/2m) d²/dx² + V₀(x) + Vz(x) Ŝz,   (5.1)

where Ŝz = ½σ̂z is given by the Pauli matrix. As was said before, Eq. (5.1) is a short-hand notation of the full Hamiltonian
Ĥ = −(ħ²/2m) ∫₋∞^∞ dx |x⟩(d²/dx²)⟨x| ⊗ 𝟙 + ∫₋∞^∞ dx |x⟩V₀(x)⟨x| ⊗ 𝟙 + ∫₋∞^∞ dx |x⟩Vz(x)⟨x| ⊗ Ŝz,   (5.2)
where it is more evident that the first two terms act only on the spatial part of the
wavefunction, while the third term couples the two degrees of freedom.
The Hilbert space of this particle consists of a one-dimensional degree of freedom x, which we had described in Chap. 4 with a basis built from square-well eigenstates, and a spin-1/2 degree of freedom Ŝ = ½σ̂ described in the Dicke basis (Chap. 3).
This tensor-product structure of the Hilbert space allows us to simplify the matrix
elements of the Hamiltonian by factoring out the spin degree of freedom,
⟨φ,↑|Ĥ|ψ,↑⟩ = −(ħ²/2m)∫₋∞^∞ φ*(x)ψ″(x)dx ⟨↑|↑⟩ + ∫₋∞^∞ φ*(x)V₀(x)ψ(x)dx ⟨↑|↑⟩ + ½∫₋∞^∞ φ*(x)Vz(x)ψ(x)dx ⟨↑|σ̂z|↑⟩
  = −(ħ²/2m)∫₋∞^∞ φ*(x)ψ″(x)dx + ∫₋∞^∞ φ*(x)V₀(x)ψ(x)dx + ½∫₋∞^∞ φ*(x)Vz(x)ψ(x)dx

⟨φ,↑|Ĥ|ψ,↓⟩ = −(ħ²/2m)∫₋∞^∞ φ*(x)ψ″(x)dx ⟨↑|↓⟩ + ∫₋∞^∞ φ*(x)V₀(x)ψ(x)dx ⟨↑|↓⟩ + ½∫₋∞^∞ φ*(x)Vz(x)ψ(x)dx ⟨↑|σ̂z|↓⟩ = 0

⟨φ,↓|Ĥ|ψ,↑⟩ = −(ħ²/2m)∫₋∞^∞ φ*(x)ψ″(x)dx ⟨↓|↑⟩ + ∫₋∞^∞ φ*(x)V₀(x)ψ(x)dx ⟨↓|↑⟩ + ½∫₋∞^∞ φ*(x)Vz(x)ψ(x)dx ⟨↓|σ̂z|↑⟩ = 0

⟨φ,↓|Ĥ|ψ,↓⟩ = −(ħ²/2m)∫₋∞^∞ φ*(x)ψ″(x)dx ⟨↓|↓⟩ + ∫₋∞^∞ φ*(x)V₀(x)ψ(x)dx ⟨↓|↓⟩ + ½∫₋∞^∞ φ*(x)Vz(x)ψ(x)dx ⟨↓|σ̂z|↓⟩
  = −(ħ²/2m)∫₋∞^∞ φ*(x)ψ″(x)dx + ∫₋∞^∞ φ*(x)V₀(x)ψ(x)dx − ½∫₋∞^∞ φ*(x)Vz(x)ψ(x)dx.   (5.3)

We see that this Hamiltonian does not mix states with different spin states (since
all matrix elements where the spin state differs between the left and right side are
equal to zero). We can therefore solve the two disconnected problems of finding the
particle’s behavior with spin up or with spin down, with effective Hamiltonians
Ĥ↑ = −(ħ²/2m) d²/dx² + V₀(x) + ½Vz(x),   Ĥ↓ = −(ħ²/2m) d²/dx² + V₀(x) − ½Vz(x).   (5.4)

These Hamiltonians now only describe the spatial degree of freedom, and the methods
of Chap. 4 can be used without further modifications.

5.1.2 Non-separable Hamiltonian

A more interesting situation arises when the Hamiltonian is not separable as in Sect.
5.1.1. Take, for example, the Hamiltonian of Eq. (5.1) in the presence of a uniform
transverse magnetic field Bx ,

Ĥ = −(ħ²/2m) d²/dx² + V₀(x) + Vz(x) Ŝz + Bx Ŝx.   (5.5)
The interaction Hamiltonian with the magnetic field is not separable:

⟨φ,↑|Bx Ŝx|ψ,↑⟩ = ½Bx ∫₋∞^∞ φ*(x)ψ(x)dx ⟨↑|σ̂x|↑⟩ = 0
⟨φ,↑|Bx Ŝx|ψ,↓⟩ = ½Bx ∫₋∞^∞ φ*(x)ψ(x)dx ⟨↑|σ̂x|↓⟩ = ½Bx ∫₋∞^∞ φ*(x)ψ(x)dx
⟨φ,↓|Bx Ŝx|ψ,↑⟩ = ½Bx ∫₋∞^∞ φ*(x)ψ(x)dx ⟨↓|σ̂x|↑⟩ = ½Bx ∫₋∞^∞ φ*(x)ψ(x)dx
⟨φ,↓|Bx Ŝx|ψ,↓⟩ = ½Bx ∫₋∞^∞ φ*(x)ψ(x)dx ⟨↓|σ̂x|↓⟩ = 0.   (5.6)

Therefore we can no longer study separate Hamiltonians as in Eq. (5.4), and we must
instead study the joint system of spatial motion and spin. In what follows we study
a simple example of such a Hamiltonian, both analytically and numerically. We take
the trapping potential to be harmonic,

V₀(x) = ½mω²x²   (5.7)

and the state-selective potential as a homogeneous force,

Vz(x) = −Fx.   (5.8)

Ground state for Bx = 0

For Bx = 0 we know that the ground states of the two spin sectors are the ground states of the effective Hamiltonians of Eq. (5.4), which are Gaussians:

⟨x|γ↑⟩ = e^(−((x−μ)/(2σ))²)/√(σ√(2π)) ⊗ |↑⟩,   ⟨x|γ↓⟩ = e^(−((x+μ)/(2σ))²)/√(σ√(2π)) ⊗ |↓⟩   (5.9)

with μ = F/(2mω²) and σ = √(ħ/(2mω)). These two ground states are degenerate, with energy E = ½ħω − F²/(8mω²). In both of these ground states the spatial and spin degrees of freedom are entangled: the particle is more likely to be detected in the |↑⟩ state on the right side (x > 0), and more likely to be detected in the |↓⟩ state on the left side (x < 0) of the trap. This results in a positive expectation value of the operator x̂ ⊗ Ŝz:

⟨γ↑|x̂ ⊗ Ŝz|γ↑⟩ = ⟨γ↓|x̂ ⊗ Ŝz|γ↓⟩ = μ/2 = F/(4mω²).   (5.10)
Perturbative ground state for Bx > 0

For small |Bx| the ground state can be described by a linear combination of the states in Eq. (5.9). If we set

|γp⟩ = α|γ↑⟩ + β|γ↓⟩   (5.11)

with |α|² + |β|² = 1, we find that the expectation value of the energy is

⟨γp|Ĥ|γp⟩ = |α|²⟨γ↑|Ĥ|γ↑⟩ + α*β⟨γ↑|Ĥ|γ↓⟩ + β*α⟨γ↓|Ĥ|γ↑⟩ + |β|²⟨γ↓|Ĥ|γ↓⟩
  = ½ħω − F²/(8mω²) + ½Bx(α*β + β*α) e^(−F²/(4mħω³)).   (5.12)

For Bx > 0 this energy is minimized for α = 1/√2 and β = −1/√2, and the perturbative ground state is therefore the anti-symmetric combination of the states in Eq. (5.9)

⟨x|γp⟩ = e^(−((x−μ)/(2σ))²)/√(2σ√(2π)) ⊗ |↑⟩ − e^(−((x+μ)/(2σ))²)/√(2σ√(2π)) ⊗ |↓⟩   (5.13)

with energy
⟨γp|Ĥ|γp⟩ = ½ħω − F²/(8mω²) − ½Bx e^(−F²/(4mħω³)).   (5.14)
The energy splitting between this ground state and the first excited state,

⟨x|γp′⟩ = e^(−((x−μ)/(2σ))²)/√(2σ√(2π)) ⊗ |↑⟩ + e^(−((x+μ)/(2σ))²)/√(2σ√(2π)) ⊗ |↓⟩,   (5.15)

is ΔE = ⟨γp′|Ĥ|γp′⟩ − ⟨γp|Ĥ|γp⟩ = Bx e^(−F²/(4mħω³)), which can be very small for large exponents F²/(4mħω³).

Numerical Calculation of the Ground State [Supplementary Material 1]

For a numerical description of this particle we use dimensionless units such that a = m = ħ = 1; other units can be used in the same way as presented in Sect. 4.1.1. We describe the spatial degree of freedom with the finite-resolution position basis of Sect. 4.1.2, centered at x = 0 as in Sect. 4.4:
a = m = \[HBar] = 1;
nmax = 100;
\[CapitalDelta] = a/(nmax+1);
xgrid = a*(Range[nmax]/(nmax+1)-1/2);

The operator x̂ is approximately diagonal in this representation (see Eq. (4.24)):
xop = SparseArray[Band[{1,1}] -> xgrid];

The identity operator on the spatial degree of freedom is
idx = IdentityMatrix[nmax, SparseArray];

The identity and Pauli operators for the spin degree of freedom are
ids = IdentityMatrix[2, SparseArray];
{sx,sy,sz}=Table[SparseArray[PauliMatrix[i]/2], {i,3}];

The kinetic energy operator is constructed via a discrete sine transform, as before:
TM = SparseArray[Band[{1,1}]->Range[nmax]^2*\[Pi]^2*\[HBar]^2/(2*m*a^2)];
TP = FourierDST[TM, 1];

From these we assemble the Hamiltonian, assuming that F and Bx are expressed in
matching units:
H[\[Omega]_, F_, Bx_] =
KroneckerProduct[TP, ids]
+ m*\[Omega]^2/2 * KroneckerProduct[xop.xop, ids]
- F * KroneckerProduct[xop, sz]
+ Bx * KroneckerProduct[idx, sx];

We compute the ground state of this Hamiltonian with
Clear[gs];
gs[\[Omega]_?NumericQ, F_?NumericQ, Bx_?NumericQ] :=
gs[\[Omega], F, Bx] = -Eigensystem[-H[N[\[Omega]],N[F],N[Bx]], 1,
Method -> {"Arnoldi", "Criteria" -> "RealPart", MaxIterations -> 10^6}]
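The qualitative physics of Sects. 5.1.1 and 5.1.2, a ground-state doublet that is exactly degenerate for Bx = 0 and split for Bx ≠ 0, can be reproduced with a quick dense-matrix sketch in Python (an independent cross-check using a second-order finite-difference kinetic energy instead of the sine-basis form, so the numbers are only approximate):

```python
import numpy as np

hbar = m = a = 1.0
nmax = 60
x = a*(np.arange(1, nmax + 1)/(nmax + 1) - 0.5)
dx = a/(nmax + 1)

# kinetic energy: second-order finite differences with box (Dirichlet) boundaries
T = hbar**2/(2*m*dx**2)*(2*np.eye(nmax) - np.eye(nmax, k=1) - np.eye(nmax, k=-1))
sx = np.array([[0.0, 0.5], [0.5, 0.0]])   # spin-1/2 operators S_x, S_z
sz = np.array([[0.5, 0.0], [0.0, -0.5]])

def H(omega, F, Bx):
    # Eq. (5.5) with V0 = m*omega^2*x^2/2 and Vz = -F*x
    Hspace = T + np.diag(0.5*m*omega**2*x**2)
    return (np.kron(Hspace, np.eye(2))
            - F*np.kron(np.diag(x), sz)
            + Bx*np.kron(np.eye(nmax), sx))

E0 = np.linalg.eigvalsh(H(100.0, 5000.0, 0.0))
E1 = np.linalg.eigvalsh(H(100.0, 5000.0, 500.0))
gap0, gap1 = E0[1] - E0[0], E1[1] - E1[0]
# gap0 vanishes to machine precision; gap1 is finite, as predicted by Eq. (5.15)
```

The exact degeneracy at Bx = 0 follows because the two spin blocks are related by the reflection x → −x; any nonzero Bx couples the blocks and lifts it.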

Once a ground state |γ has been calculated, for example with
\[Gamma] = gs[100, 5000, 500][[2, 1]];

the usual problem arises of how to display and interpret the wavefunction. Instead
of studying the coefficients of \[Gamma] directly, we calculate several specific properties of
the ground state in what follows.

Operator expectation values: The mean spin direction (magnetization) ⟨Ŝ⟩ = {⟨Ŝx⟩, ⟨Ŝy⟩, ⟨Ŝz⟩} is calculated directly from the ground-state coefficients list with
mx = Re[Conjugate[\[Gamma]].(KroneckerProduct[idx,sx].\[Gamma])];
my = Re[Conjugate[\[Gamma]].(KroneckerProduct[idx,sy].\[Gamma])];
mz = Re[Conjugate[\[Gamma]].(KroneckerProduct[idx,sz].\[Gamma])];
{mx,my,mz}
{-0.233037, 0., -2.08318*10^-12}

The mean position x̂ and its standard deviation are calculated with
X = Re[Conjugate[\[Gamma]].(KroneckerProduct[xop,ids].\[Gamma])];
XX = Re[Conjugate[\[Gamma]].(KroneckerProduct[xop.xop,ids].\[Gamma])];
{X, Sqrt[XX-X^2]}
{1.2178*10^-11, 0.226209}

Even though we found ⟨x̂⟩ = 0 and ⟨Ŝz⟩ = 0 above, these two degrees of freedom are correlated: calculating ⟨x̂ ⊗ Ŝz⟩,
Xz = Re[Conjugate[\[Gamma]].(KroneckerProduct[xop,sz].\[Gamma])]
0.0954168

Reduced density matrix of the spatial degree of freedom: Using In[257] we trace out the spin degree of freedom (the last two dimensions) to find the density matrix in the spatial coordinate:
\[Rho]x = traceout[\[Gamma], -2];
ArrayPlot[Reverse[Transpose[ArrayPad[\[Rho]x/\[CapitalDelta], 1]]]]

Reduced density matrix of the spin degree of freedom: We can do the same for the reduced density matrix of the spin degree of freedom, using In[256], and find a 2 × 2 spin density matrix:
\[Rho]s = traceout[\[Gamma], nmax]
{{0.5, -0.233037}, {-0.233037, 0.5}}

Spin-specific spatial densities: The reduced density matrix of particles in the spin-up state is found by projecting the ground state |γ⟩ onto the spin-up sector with the projector Π̂↑ = |↑⟩⟨↑| = ½𝟙 + Ŝz.¹ Thus, |γ↑⟩ = Π̂↑|γ⟩ only describes the particles that are in the spin-up state:
\[Gamma]up = KroneckerProduct[idx, ids/2+sz].\[Gamma];
\[Rho]xup = traceout[\[Gamma]up, -2];

In the same way the reduced density matrix of particles in the spin-down state |γ↓⟩ = Π̂↓|γ⟩ is calculated with the down-projector Π̂↓ = |↓⟩⟨↓| = ½𝟙 − Ŝz:
\[Gamma]dn = KroneckerProduct[idx, ids/2-sz].\[Gamma];
\[Rho]xdn = traceout[\[Gamma]dn, -2];

¹ Remember that 𝟙 = |↑⟩⟨↑| + |↓⟩⟨↓| and Ŝz = ½|↑⟩⟨↑| − ½|↓⟩⟨↓|.

The positive correlation between the spin and the mean position, ⟨x̂ ⊗ Ŝz⟩ > 0, is clearly visible in these plots.

Since Π̂↑ + Π̂↓ = 𝟙, these two spin-specific spatial density matrices add up to the total density shown previously. This also means that the spin-specific density matrices do not have unit trace:
{Tr[\[Rho]xup], Tr[\[Rho]xdn]}
{0.5, 0.5}

Hence we have a 50% chance of finding the particle in the up or down spin states.
Space-dependent spin expectation value: Similarly, we can calculate the reduced density matrix of the spin degree of freedom at a specific point in space by using projection operators Π̂ⱼ = |j⟩⟨j| onto single position-basis states |j⟩:
\[Gamma]x[j_Integer /; 1 <= j <= nmax] :=
KroneckerProduct[SparseArray[{j, j} -> 1, {nmax, nmax}], ids].\[Gamma]
\[Rho]sx[j_Integer /; 1 <= j <= nmax] := traceout[\[Gamma]x[j], nmax]

We notice that, as before, these spatially-local reduced density matrices do not have unit trace, but their traces sum up to 1:
Sum[Tr[\[Rho]sx[j]], {j, nmax}]
1.

In fact, the traces of these local reduced density matrices give the probability
of finding the particle at the given position. We can use this interpretation to
calculate the mean spin expectation value of a particle measured at a given grid
point:
meansx[j_Integer /; 1 <= j <= nmax] := Tr[\[Rho]sx[j].sz]/Tr[\[Rho]sx[j]]
ListLinePlot[Transpose[{xgrid, Table[meansx[j], {j, nmax}]}]]

This graph confirms the observation that particles detected on the left side are
more likely to be in the |↓ state, while particles detected on the right side are
more likely to be in the |↑ state.

5.1.3 Exercises

Q5.1 In the problem described by the Hamiltonian of Eq. (5.5), calculate the following expectation values (numerically) for several parameter sets {ω, F, Bx}:
1. ⟨x⟩ for particles detected in the |↑⟩ state
2. ⟨x⟩ for particles detected in the |↓⟩ state
3. ⟨x⟩ for particles detected in any spin state
4. the mean and variance of x̂ ⊗ Ŝz

5.2 One Particle in 2D with Spin: Rashba Coupling [Supplementary Material 2]

A particularly interesting kind of interaction is the Rashba coupling between a particle's momentum and its spin.² In general, this interaction is proportional to a component of the vector product κ̂ = p̂ × Ŝ. For a particle moving in two dimensions (x, y), the coupling involves the z-component κ̂z = p̂x ⊗ Ŝy − p̂y ⊗ Ŝx.
In this section we study the 2D Rashba Hamiltonian

Ĥ = (p̂x² + p̂y²)/(2m) + V(x, y) + δŜz + α(p̂x ⊗ Ŝy − p̂y ⊗ Ŝx)   (5.16)

in a square box where −a/2 ≤ x, y ≤ a/2 as before. With a Hilbert space composed as the tensor product of the x, y, and spin coordinates, in this order, the full Hamiltonian thus becomes

Ĥ = −(ħ²/2m) ∫_{−a/2}^{a/2} dx |x⟩(∂²/∂x²)⟨x| ⊗ 𝟙 ⊗ 𝟙 + 𝟙 ⊗ [−(ħ²/2m) ∫_{−a/2}^{a/2} dy |y⟩(∂²/∂y²)⟨y|] ⊗ 𝟙
  + ∫_{−a/2}^{a/2} dx dy |x⟩|y⟩ V(x, y) ⟨x|⟨y| ⊗ 𝟙 + δ(𝟙 ⊗ 𝟙 ⊗ Ŝz) + α(p̂x ⊗ 𝟙 ⊗ Ŝy − 𝟙 ⊗ p̂y ⊗ Ŝx).   (5.17)

² See https://en.wikipedia.org/wiki/Rashba_effect.

For simplicity we will set V(x, y) = 0; but any nonzero potential can be used with the techniques introduced previously. Further, we use a = m = ħ = 1 to simplify the units; but as usual, any system of units may be used (see Sect. 4.1.1).
Since both the kinetic and the interaction operator are most easily expressed in the
momentum representation, we use the momentum representation (see Sect. 4.1.2) to
express the spatial degrees of freedom of the Hamiltonian. The identity operator is
nmax = 50;
\[CapitalDelta] = a/(nmax+1);
idM = IdentityMatrix[nmax, SparseArray];

We use the exact form of the kinetic operator from In[423] and the exact form of the momentum operator from In[444]. As discussed previously, these two forms do not exactly satisfy T̂ = p̂²/(2m). They are, however, the best available low-energy forms.
For the spin degree of freedom, we assume S = 1/2, giving us the usual spin
operators and the identity operator,
{sx,sy,sz} = Table[SparseArray[PauliMatrix[i]/2], {i, 3}];
idS = IdentityMatrix[2, SparseArray];

With these definitions, we assemble the Rashba Hamiltonian of Eq. (5.17) in the
momentum representation with
HM[\[Delta]_, \[Alpha]_] = KroneckerProduct[TM, idM, idS]
+ KroneckerProduct[idM, TM, idS]
+ \[Delta]*KroneckerProduct[idM, idM, sz]
+ \[Alpha]*(KroneckerProduct[pM, idM, sy] - KroneckerProduct[idM, pM, sx]);

Given a state \[Gamma], for example the ground state of In[629] for specific values of δ = 1 and α = 20, we calculate the mean value ⟨x̂²⟩ = ⟨x̂² ⊗ 𝟙 ⊗ 𝟙⟩ with the position operator xM expressed in the momentum basis:
xgrid = a*(Range[nmax]/(nmax + 1) - 1/2);
xP = SparseArray[Band[{1, 1}] -> xgrid];
xM = FourierDST[xP, 1];
Conjugate[\[Gamma]].(KroneckerProduct[xM.xM, idM, idS].\[Gamma]) //Re
0.0358875

In the same way, we calculate the mean value ⟨ŷ²⟩ = ⟨𝟙 ⊗ ŷ² ⊗ 𝟙⟩:
Conjugate[\[Gamma]].(KroneckerProduct[idM, xM.xM, idS].\[Gamma]) //Re
0.0358875

In order to study the spatial variation of the spin (the expectation value of the spin degree of freedom if the particle is detected at a specific spatial location), we calculate the reduced density matrix of the spin degree of freedom at a specific grid point (x_i, y_j) of the position grid.³ For this, we first project the ground-state wavefunction \[Gamma] onto the spatial grid point at x = x_i and y = y_j using the projector |i⟩⟨i| ⊗ |j⟩⟨j| in the momentum representation:
\[CapitalPi]P[j_] := SparseArray[{j, j} -> 1, {nmax, nmax}]
\[CapitalPi]M[j_] := FourierDST[\[CapitalPi]P[j], 1]
gP[i_,j_] := KroneckerProduct[\[CapitalPi]M[i], \[CapitalPi]M[j], idS].\[Gamma]

Tracing out the spatial degrees of freedom with the procedure of Sect. 2.4.3 gives
the 2 × 2 spin density matrix at the desired grid point,
RsP[i_, j_] := traceout[gP[i,j], nmax^2]

The trace Tr[RsP[i,j]] of such a reduced density matrix gives the probability of finding the particle at grid point (x_i, y_j).
We can extract more information from these reduced spin density matrices: the mag-
netization (mean spin direction) at a grid point has the Cartesian components
mxP[i_, j_] := Re[Tr[RsP[i,j].sx]/Tr[RsP[i,j]]]
myP[i_, j_] := Re[Tr[RsP[i,j].sy]/Tr[RsP[i,j]]]
mzP[i_, j_] := Re[Tr[RsP[i,j].sz]/Tr[RsP[i,j]]]

3 Naturally,the following calculations would be simpler if we had represented the ground state in
the position basis; however, we use this opportunity to show how to calculate in the momentum
basis.

Plotting these components over the entire grid shows interesting patterns of the mean spin orientation (magnetization) in the ground state.

5.2.1 Exercises

Q5.2 While the Hamiltonian of Eqs. (5.16) and (5.17) contains the distinct operators p̂x and p̂y, the Mathematica form of this Hamiltonian assembled in In[629] contains the same matrix pM representing both p̂x and p̂y. Why is this so? What distinguishes the Mathematica representations of these two operators?

5.3 Phase-Space Dynamics in the Jaynes–Cummings Model [Supplementary Material 3]

As a final example, we study the interaction of an atom with the light field in an optical cavity. The atom is assumed to have only two internal states: the ground state |g⟩ and some excited state |e⟩. The atomic state is described as a (pseudo-)spin-1/2 system with the operators (see Q5.3)

Ŝx = (|e⟩⟨g| + |g⟩⟨e|)/2,   Ŝy = (|e⟩⟨g| − |g⟩⟨e|)/(2i),   Ŝz = (|e⟩⟨e| − |g⟩⟨g|)/2,   (5.18)

as well as Ŝ ± = Ŝx ± i Ŝ y (see Sect. 3.2). The cavity field is assumed to consist of only
one mode, described with creation and annihilation operators â † and â, respectively;
all other cavity modes are assumed to be so far off-resonant that they are not coupled
to the atom.
The Jaynes–Cummings Hamiltonian⁴ describing the combined system, as well as the coupling between the atom and the cavity field, is

ĤJC = ħωa Ŝz + ħωc(â†â + ½) + ħg(Ŝ⁺â + â†Ŝ⁻),   (5.19)
       [atom]   [cavity field]    [coupling]

⁴ See https://en.wikipedia.org/wiki/Jaynes-Cummings_model.

• The atomic Hamiltonian describes the energy difference ħωa between the two internal states of the atom.
• The cavity-field Hamiltonian describes the energy of n̂ = â†â photons in the cavity mode, each photon carrying an energy ħωc.
• The coupling term describes the de-excitation of the field â together with the excitation of the atom Ŝ⁺, as well as the reverse process of the excitation of the field ↠together with the de-excitation of the atom Ŝ⁻ (see Q5.4).
The cavity mode of the Jaynes–Cummings model is usually studied in the Fock basis of definite photon number, using harmonic-oscillator eigenfunctions as basis states. Here we take an alternative approach and look at the X–P phase space spanned by the dimensionless quadrature operators X̂ and P̂,⁵ which are related to the creation and annihilation operators ↠and â via

X̂ = (â + â†)/√2,   P̂ = −i ∂/∂X̂ = (â − â†)/(i√2),
â = (X̂ + iP̂)/√2 = (X̂ + ∂/∂X̂)/√2,   ↠= (X̂ − iP̂)/√2 = (X̂ − ∂/∂X̂)/√2,   (5.20)

with the commutators [â, â † ] = 1 and [ X̂ , P̂] = i (see Q5.5). We note that the quadra-
ture X̂ is the amplitude of the electromagnetic field of the cavity mode, and P̂ its
conjugate momentum; there is no motion in real space in this problem, only in
amplitude space. Using these quadrature operators, we write the Jaynes–Cummings
Hamiltonian as (see Q5.6)

ĤJC = ℏωa Ŝz + (1/2) ℏωc (P̂² + X̂²) + √2 ℏg (X̂ Ŝx − P̂ Ŝy)
    = ℏωa 1 ⊗ Ŝz + (1/2) ℏωc (P̂² + X̂²) ⊗ 1 + √2 ℏg (X̂ ⊗ Ŝx − P̂ ⊗ Ŝy),    (5.21)
where we have made its tensor-product structure explicit in the second line. To
assemble this Hamiltonian in Mathematica, we define the Hilbert space to be the
tensor product of the X − P phase space and the spin-1/2 space, in this order.
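As a cross-check of this tensor-product structure, the same Hamiltonian can also be assembled in a truncated Fock basis instead of the position grid used below. Here is a hedged NumPy sketch (the truncation dimension nmax = 20 and the parameter values are illustrative choices, not taken from the text; near the truncation edge the quadrature algebra is only approximate):

```python
import numpy as np

nmax = 20                              # Fock-space truncation (illustrative)
hbar, wa, wc, g = 1.0, 1.0, 1.0, 0.1   # natural units, weak coupling

n = np.arange(nmax)
a = np.diag(np.sqrt(n[1:]), k=1)       # annihilation operator: a|n> = sqrt(n)|n-1>
X = (a + a.conj().T) / np.sqrt(2)      # quadratures of Eq. (5.20)
P = (a - a.conj().T) / (1j * np.sqrt(2))

Sx = np.array([[0, 1], [1, 0]]) / 2    # Pauli matrices / 2
Sy = np.array([[0, -1j], [1j, 0]]) / 2
Sz = np.array([[1, 0], [0, -1]]) / 2
idS, idX = np.eye(2), np.eye(nmax)

# Eq. (5.21): phase space (x) spin, in this order
HJC = (hbar * wa * np.kron(idX, Sz)
       + 0.5 * hbar * wc * np.kron(P @ P + X @ X, idS)
       + np.sqrt(2) * hbar * g * (np.kron(X, Sx) - np.kron(P, Sy)))

assert np.allclose(HJC, HJC.conj().T)  # the assembled Hamiltonian is Hermitian
```

The Kronecker-product ordering here mirrors the Mathematica KroneckerProduct calls below.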
The phase space is defined as before (see Chap. 4) in a calculation box X ∈
[−a/2, a/2] divided into a grid of nmax + 1 intervals. We choose a such that the state
fits well into the box (considering that the ground state of the cavity field has a

5 In a harmonic oscillator of mass m and angular frequency ω, we usually introduce the position
operator x̂ = √(ℏ/(mω)) X̂ and the momentum operator p̂ = √(mωℏ) P̂. Here we restrict our
attention to the dimensionless quadratures X̂ and P̂.
150 5 Combining Spatial Motion and Spin

size ⟨X̂²⟩^(1/2) = ⟨P̂²⟩^(1/2) = 1/√2), and we choose nmax such that the Wigner quasi-
probability distribution plots have equal ranges in X and P. Naturally, any other
values of a and nmax can be chosen.
\[HBar] = 1; (* natural units *)
a = 10;
nmax = Round[aˆ2/\[Pi]]
32
\[CapitalDelta] = a/(nmax+1);

We represent the phase space in the position basis. The X̂ quadrature operator is
defined as in Eq. (4.24),
xgrid = a*(Range[nmax]/(nmax+1) - 1/2);
X = SparseArray[Band[{1, 1}] -> xgrid];

The definition of the P̂ quadrature operator follows In[444], with P̂² defined
directly through In[423] for better accuracy at finite nmax:
P = FourierDST[SparseArray[{n1_, n2_} /; OddQ[n1-n2] ->
    (4*I*n1*n2)/(a*(n2ˆ2-n1ˆ2)), {nmax, nmax}], 1];
P2 = FourierDST[SparseArray[Band[{1, 1}] -> Range[nmax]ˆ2*\[Pi]ˆ2/aˆ2], 1];

Finally, the phase-space identity operator is


idX = IdentityMatrix[nmax, SparseArray];

The operators on the pseudo-spin degree of freedom are defined directly from the
Pauli matrices instead of using the general definitions of Eqs. (3.1) and (3.2):
{Sx, Sy, Sz} = Table[SparseArray[PauliMatrix[i]/2], {i, 3}];
idS = IdentityMatrix[2, SparseArray];

The Hamiltonian of Eq. (5.21) is assembled from three parts:


Ha = KroneckerProduct[idX, Sz];
Hc = KroneckerProduct[X.X/2 + P2/2, idS];
Hint = Sqrt[2]*(KroneckerProduct[X,Sx]-Re[KroneckerProduct[P,Sy]]);
HP[\[Omega]a_, \[Omega]c_, g_] = \[HBar]*\[Omega]a*Ha + \[HBar]*\[Omega]c*Hc + \[HBar]*g*Hint;

Remember that we use P2 instead of P.P for the operator P̂ 2 for better accuracy.
We use the Re operator in In[655] to eliminate the imaginary parts, which are
zero by construction but render the expression Complex-valued nonetheless.
In the Mathematica notebook attached to this section, the dynamics induced by this
time-independent Hamiltonian is studied in the weak and strong coupling regimes,
using the technique of Sect. 2.3.4 to propagate the initial wavefunction.
Given a calculated space⊗spin wavefunction \[Psi] (a vector of 2nmax complex num-
bers), we calculate the nmax × nmax reduced density matrix of the phase-space degree
of freedom (cavity field) with In[257], tracing out the spin degree of freedom (the
last 2 dimensions):

\[Rho]X = traceout[\[Psi], -2];

Similarly, we calculate the 2 × 2 reduced density matrix of the spin degree of freedom
(atomic state) with In[257], tracing out the phase-space degree of freedom (the
first nmax dimensions):
\[Rho]S = traceout[\[Psi], nmax];

Expectation values in the field or spin degrees of freedom are then easily calculated
from these reduced density matrices.
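The essence of the traceout operation can be sketched in a few lines of Python/NumPy (an illustration of the underlying reshaping trick, not the book's actual traceout code), using the same ordering convention, first subsystem ⊗ second subsystem:

```python
import numpy as np

def reduced_density_matrices(psi, dA, dB):
    """Reduced density matrices of a pure state psi on H_A (dim dA) x H_B (dim dB)."""
    M = np.asarray(psi).reshape(dA, dB)  # psi[i*dB + j] = (<i| (x) <j|) |psi>
    rhoA = M @ M.conj().T                # trace over the second subsystem
    rhoB = M.T @ M.conj()                # trace over the first subsystem
    return rhoA, rhoB

# example: for a product state, both reduced matrices are pure
psiA = np.array([0.8, -0.6])
psiB = np.array([0.6j, 0.8])
psi = np.kron(psiA, psiB)
rhoA, rhoB = reduced_density_matrices(psi, 2, 2)
assert np.allclose(rhoA, np.outer(psiA, psiA.conj()))
assert np.allclose(rhoB, np.outer(psiB, psiB.conj()))
```

The example state is the same product state used in Q2.10/Q2.11 of the solutions.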
To illustrate these techniques, we calculate the time-dependent wavefunction in the
resonant weak-coupling regime (ωa = ωc = 1, g = 0.1; initial state: coherent field
state at ⟨X̂⟩ = √2 and ⟨P̂⟩ = 0, spin down). First we show the time-dependence of
the atomic spin expectation values, calculated from a reduced spin density matrix
with
{Tr[\[Rho]S.Sx], Tr[\[Rho]S.Sy], Tr[\[Rho]S.Sz]}

Observations:
• At t = 0 we recognize the initial spin-down state: ⟨Ŝx⟩ = ⟨Ŝy⟩ = 0 and ⟨Ŝz⟩ = −1/2.
• The Ŝx and Ŝy spin components rotate rapidly due to the pseudo-spin excitation
energy ℏωa (phase factor e^(−iωa t)). They are 90° out of phase.
• The Ŝz spin component has a complicated time dependence. Since the atomic
energy is ℏωa⟨Ŝz⟩, this curve shows the energy flowing between the atom and
the cavity light field.
The phase-space Wigner quasi-probability distribution of the cavity field, calculated
with In[481] from the reduced phase-space density matrix of In[657] under the
same weak-coupling conditions as above, is plotted here at two evolution times:
WignerDistributionPlot[\[Rho]X, {-a/2, a/2}]

Observations:
• At t = 0 we recognize the initial state: a coherent state (circular Gaussian of min-
imal area ⟨X̂²⟩ = ⟨P̂²⟩ = 1/2) displaced by δ = √2 in the X-direction, implying
δ²/2 = 1 photon present initially.
• At t = 100 the structure of the Wigner distribution has taken on a qualitatively
different shape, including a significant negative-valued region. Such negative
regions are forbidden in classical phase-space distributions and hence indicate
an essentially quantum-mechanical state.
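This negativity can be made quantitative at the phase-space origin, where the Wigner function reduces to the scaled parity expectation value, W(0,0) = (1/π) Σn (−1)ⁿ ρnn. A minimal Python sketch (independent of the book's Wigner code) shows that a state dominated by odd photon numbers, such as the single-photon Fock state, is necessarily negative there:

```python
import math

def wigner_origin(diag_rho):
    """Parity formula: W(0,0) = (1/pi) * sum_n (-1)^n rho_nn (Fock-basis diagonal)."""
    return sum((-1) ** n * p for n, p in enumerate(diag_rho)) / math.pi

vacuum = [1.0, 0.0, 0.0]   # diagonal of |0><0| in the Fock basis
fock1  = [0.0, 1.0, 0.0]   # diagonal of |1><1|
assert wigner_origin(vacuum) > 0   # vacuum:  +1/pi, classically allowed
assert wigner_origin(fock1) < 0    # |1>:     -1/pi, nonclassical
```

With the dimensionless quadratures used here, the vacuum value 1/π matches the minimal-area Gaussian quoted above.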

5.3.1 Exercises

Q5.3 Show that the operators of Eq. (5.18) represent a pseudo-spin-1/2, like in Q3.2.
Q5.4 Express Ŝ± in terms of |g⟩, |e⟩, ⟨g|, and ⟨e|.
Q5.5 Show that [ X̂ , P̂] = i using Eq. (5.20) and assuming that [â, â † ] = 1.
Q5.6 Show that Eq. (5.21) follows from Eq. (5.19) using Eq. (5.20).
List of Notebooks in Supplementary Material

1. MathematicaExample.nb—a simple example .............................. vii


2. FactorialDefinitions.nb—many ways to define the factorial function..... 19
3. ReducedDensityMatrix.nb—reduced density matrices .................... 46
4. SpinOperators.nb—spin and angular momentum operators ............... 52
5. Electron.nb—spin-1/2 electron in a dc magnetic field ...................... 54
6. Rubidium87.nb—87Rb hyperfine structure................................... 56
7. IsingModel.nb—Ising model in a transverse field ........................... 68
8. QuantumCircuits.nb—quantum gates and quantum circuits .............. 80
9. GravityWell.nb—gravity well ................................................. 102
10. WignerDistribution.nb—the Wigner quasi-probability distribution ...... 107
11. ParticleDynamics.nb—single-particle dynamics in 1D .................... 109
12. GroundState.nb—non-linear Schrödinger equation
and imaginary-time propagation in 1D ........................................ 119
13. ContactInteraction.nb—two particles in 1D with contact interaction.... 122
14. 3DBEC.nb—Bose-Einstein condensate in 3D ............................... 130
15. ParticleMotionWithSpin.nb—one particle in 1D with spin ............... 141
16. RashbaCoupling.nb—one particle in 2D with Rashba coupling .......... 145
17. JaynesCummingsModel.nb—phase-space dynamics
in the Jaynes-Cummings model ................................................ 148

© Springer Nature Singapore Pte Ltd. 2020 153


R. Schmied, Using Mathematica for Quantum Mechanics,
https://doi.org/10.1007/978-981-13-7588-0
Solutions to Exercises

Chapter 1 Wolfram Language Overview

Q1.1
N[Zeta[3]]
1.20206

Q1.2
%ˆ2
1.44494

Q1.3
Integrate[Sin[x]*Exp[-x], {x, 0, Infinity}]
1/2

Q1.4
N[\[Pi], 1000]
3.141592653589793238462643383279502884197169399375105820974944592307816406286
20899862803482534211706798214808651328230664709384460955058223172535940812848
11174502841027019385211055596446229489549303819644288109756659334461284756482
33786783165271201909145648566923460348610454326648213393607260249141273724587
00660631558817488152092096282925409171536436789259036001133053054882046652138
41469519415116094330572703657595919530921861173819326117931051185480744623799
62749567351885752724891227938183011949129833673362440656643086021394946395224
73719070217986094370277053921717629317675238467481846766940513200056812714526
35608277857713427577896091736371787214684409012249534301465495853710507922796
89258923542019956112129021960864034418159813629774771309960518707211349999998
37297804995105973173281609631859502445945534690830264252230825334468503526193
11881710100031378387528865875332083814206171776691473035982534904287554687311
59562863882353787593751957781857780532171226806613001927876611195909216420199

Q1.5
ClebschGordan[{100, 10}, {200, -12}, {110, -2}]
8261297798499109361013742279092521767681*
Sqrt[769248995636473/297224869222895274740285232180446271746289127347456291479
57669733897130076853320942746928207329]/14
% //N
0.0949317


Q1.6
Limit[Sin[x]/x, x -> 0]
1

Q1.7
Plot[Sin[x]/x, {x, -20, 20}, PlotRange -> All]

Q1.8
F[c_, imax_] := Abs[NestWhile[#ˆ2+c&, 0., Abs[#] <= 2 &, 1, imax]] <= 2
With[{n = 100, imax = 1000},
Graphics[Raster[Table[Boole[!F[x+I*y,imax]],{y,-2,2,1/n},{x,-2,2,1/n}]]]]

Q1.9
MandelbrotSetPlot[]

Q1.10 In general, the definition of x depends on the values of u and v at the time of
the definition of x, whereas y depends on the values at the time of using the symbol
y. The second case below, however, needs special attention since the values of u and
v are not defined at the time when x is defined.
• When u and v are already defined before x and y are defined, then x and y
return the same value:
Clear[x, y, u, v];
u = 3; v = 7;
x = u+v; y := u+v;
{x, y}
{10, 10}
?x
x=10
?y
y:=u+v

• When u and v are defined after x and y are defined, then x and y also return
the same value. Notice, however, that the definition of x is not static and thus
depends on the values of u and v at the time of usage:
Clear[x, y, u, v];
x = u+v; y := u+v;
u = 3; v = 7;
{x, y}
{10, 10}
?x
x=u+v
?y
y:=u+v

• When u and v change values after x and y are defined, then x and y differ
since only y reflects the new values of u and v:
Clear[x, y, u, v];
u = 3; v = 7;
x = u+v; y := u+v;
u = 8; v = 9;
{x, y}
{10, 17}
?x
x=10
?y
y:=u+v
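The same immediate-versus-delayed distinction can be mimicked in other languages. As a loose Python analogue (a comparison only, not a Mathematica feature), a plain assignment plays the role of = and a zero-argument function that of :=

```python
u, v = 3, 7
x = u + v            # immediate: evaluated once, now (like  x = u+v)
y = lambda: u + v    # delayed: re-evaluated at every use (like  y := u+v)

assert (x, y()) == (10, 10)
u, v = 8, 9
assert (x, y()) == (10, 17)   # x keeps the old value; y sees the new ones
```

The final pair (10, 17) reproduces the third Mathematica case above.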

Q1.11
N[E]
2.71828
N@E
2.71828
E //N
2.71828

Q1.12
Total[Range[123, 9968]]
49677993

Q1.13
Module[{i},
i = 123;
s = 0;
While[s <= 10000, s += i; i++];
i - 1]
187

Q1.14
f = #1*#2*#3 &;

Q1.15
a = {0.1, 0.9, 2.25, -1.9};
sa = Map[Sin[#]ˆ2 &, a]
{0.00996671, 0.613601, 0.605398, 0.895484}

Q1.16 The Total function is the same as applying Plus to a list:


Apply[Plus, sa]
2.12445
Plus@@sa
2.12445
Total[sa]
2.12445

Q1.17 All built-in symbols, like Echo, are protected in order to prevent accidental
modification. Trying to modify Echo without unprotecting it first gives an error:
Echo = #1 &
Set: Symbol Echo is Protected.
#1 &

Q1.18 See In[128] and In[129]: the full forms of a/b and x_/y_ are similar
and match,
FullForm[a/b]
Times[a, Power[b, -1]]
FullForm[x_/y_]
Times[Pattern[x, Blank[]], Power[Pattern[y, Blank[]], -1]]

while the full form of 2/3 is different and does not match the pattern for replacements,
FullForm[2/3]
Rational[2, 3]

Q1.19 Not all delayed assignments can be replaced by immediate ones. Whenever
an immediate assignment can be used, it tends to be faster.
1. = and := work equally well.
2. = and := work equally well.
3. = and := work equally well.
4. = and := work equally well. There is a significant difference though: while the
delayed assignment executes as a product, the immediate assignment is simpli-
fied at the moment of definition to a factorial, which then executes much faster:
f[n_] = Product[i, {i, n}]
n!

5. Immediate assignment breaks the recursion, which cannot be executed at defini-


tion time.
6. = and := work equally well.
7. = and := work equally well.
8. Immediate assignment breaks the Do loop, which cannot be executed at definition
time.
9. Immediate assignment breaks the For loop: since n is not defined at definition
time, the comparison i<=n fails at the first iteration and the result is always
f[n_]=1.
10. Immediate assignment breaks the Range command since n is not defined at
definition time.
11. Immediate assignment breaks the Range command since n is not defined at
definition time.
12. Immediate assignment breaks the Array command since n is not defined at
definition time.
13. Immediate assignment breaks the Range command since n is not defined at
definition time.
14. Immediate assignment always gives f[n_]=1 since the repeated replacement
fails.
15. Immediate assignment breaks the Range command since n is not defined at
definition time.
16. Immediate assignment always gives f[n_]=1 since the repeated replacement
fails.

17. = and := work equally well.


Q1.20 Not all delayed rules can be replaced by immediate ones. Whenever an
immediate rule can be used, it tends to be faster.
14. -> and :> work equally well.
16. Immediate rule (->) breaks the Table command since m is not defined at
definition time.
Q1.21 In the recursive definitions 5 and 6, memoization gives a dramatic speedup,
as it remembers intermediate results in the recursion. In the other examples, memo-
ization only helps when the function is called repeatedly with the same argument.
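The same memoization idiom exists elsewhere. For comparison, a Python sketch using functools.lru_cache (a stand-in for the Mathematica f[n_] := f[n] = … pattern) turns the exponential-time recursive Fibonacci definition of Q1.22 into a linear-time one:

```python
from functools import lru_cache

@lru_cache(maxsize=None)       # memoization: each fib(n) is computed only once
def fib(n):
    return 1 if n <= 2 else fib(n - 1) + fib(n - 2)

# without the cache this recursion costs O(phi^n) calls; with it, O(n)
assert fib(10) == 55
assert fib(100) == 354224848179261915075
```

As with the Mathematica version, the cache stores intermediate results of the recursion, which is exactly what produces the dramatic speedup.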
Q1.22 Using a built-in function:
Table[Fibonacci[n], {n, 100}]

Even more directly, by using the Listable attribute of the Fibonacci function:

Fibonacci[Range[100]]

Recursive with memoization:


g[1] = g[2] = 1;
g[n_] := g[n] = g[n-1] + g[n-2]
Table[g[n], {n, 100}]

Iterative construction of the list:


L = {1, 1};
Do[AppendTo[L, L[[-1]] + L[[-2]]], {98}];
L

Q1.23 The eigenvectors are orthogonal, but not necessarily normalized.


Eigensystem of σ̂x :
{eval, evec} = Eigensystem[PauliMatrix[1]]
{{-1, 1}, {{-1, 1}, {1, 1}}}
Normalize /@ evec
{{-1/Sqrt[2], 1/Sqrt[2]}, {1/Sqrt[2], 1/Sqrt[2]}}

Eigensystem of σ̂ y :
{eval, evec} = Eigensystem[PauliMatrix[2]]
{{-1, 1}, {{I, 1}, {-I, 1}}}
Normalize /@ evec
{{I/Sqrt[2], 1/Sqrt[2]}, {-I/Sqrt[2], 1/Sqrt[2]}}

Eigensystem of σ̂z :
{eval, evec} = Eigensystem[PauliMatrix[3]]
{{-1, 1}, {{0, 1}, {1, 0}}}

Q1.24 The tensor index dimensions do not match:


TensorContract[u, {3, 4}]
TensorContract: Contraction levels {3,4} have different dimensions
{3,2}.

Chapter 2 Quantum Mechanics: States and Operators

Q2.1 We use the computational basis {|↑⟩, |↓⟩}, in which the two given basis func-
tions are
up[\[Theta]_,\[Phi]] = {Cos[\[Theta]/2], Eˆ(I*\[Phi])*Sin[\[Theta]/2]};
dn[\[Theta]_,\[Phi]] = {-Eˆ(-I*\[Phi])*Sin[\[Theta]/2], Cos[\[Theta]/2]};

The corresponding ⟨⇑ϑ,ϕ| and ⟨⇓ϑ,ϕ| are calculated with Conjugate (see
Sect. 1.11).
1. Calculate ⟨⇑ϑ,ϕ|⇑ϑ,ϕ⟩ = 1, ⟨⇑ϑ,ϕ|⇓ϑ,ϕ⟩ = 0, ⟨⇓ϑ,ϕ|⇑ϑ,ϕ⟩ = 0, ⟨⇓ϑ,ϕ|⇓ϑ,ϕ⟩ = 1:

Conjugate[up[\[Theta],\[Phi]]].up[\[Theta],\[Phi]] //ComplexExpand //FullSimplify


1
Conjugate[up[\[Theta],\[Phi]]].dn[\[Theta],\[Phi]] //ComplexExpand //FullSimplify
0
Conjugate[dn[\[Theta],\[Phi]]].up[\[Theta],\[Phi]] //ComplexExpand //FullSimplify
0
Conjugate[dn[\[Theta],\[Phi]]].dn[\[Theta],\[Phi]] //ComplexExpand //FullSimplify
1

2. Construct the ket-bra products with KroneckerProduct:


KroneckerProduct[up[\[Theta],\[Phi]], Conjugate[up[\[Theta],\[Phi]]]] +
KroneckerProduct[dn[\[Theta],\[Phi]], Conjugate[dn[\[Theta],\[Phi]]]] //
ComplexExpand //FullSimplify
{{1, 0}, {0, 1}}

3. |↑⟩ = |⇑ϑ,ϕ⟩⟨⇑ϑ,ϕ|↑⟩ + |⇓ϑ,ϕ⟩⟨⇓ϑ,ϕ|↑⟩ = cos(ϑ/2)|⇑ϑ,ϕ⟩ − e^(iϕ) sin(ϑ/2)|⇓ϑ,ϕ⟩:
Cos[\[Theta]/2]*up[\[Theta],\[Phi]] - Eˆ(I*\[Phi])*Sin[\[Theta]/2]*dn[\[Theta],\[Phi]] //FullSimplify
{1, 0}

|↓⟩ = |⇑ϑ,ϕ⟩⟨⇑ϑ,ϕ|↓⟩ + |⇓ϑ,ϕ⟩⟨⇓ϑ,ϕ|↓⟩ = e^(−iϕ) sin(ϑ/2)|⇑ϑ,ϕ⟩ + cos(ϑ/2)|⇓ϑ,ϕ⟩:
Eˆ(-I*\[Phi])*Sin[\[Theta]/2]*up[\[Theta],\[Phi]] + Cos[\[Theta]/2]*dn[\[Theta],\[Phi]] //FullSimplify
{0, 1}

4. The Pauli operators are defined in Mathematica in our computational basis with
the PauliMatrix command.
The matrix elements of the Pauli operator σ̂x are
sx = PauliMatrix[1];
Conjugate[up[\[Theta],\[Phi]]].sx.up[\[Theta],\[Phi]] //ComplexExpand //FullSimplify
Sin[\[Theta]]*Cos[\[Phi]]
Conjugate[up[\[Theta],\[Phi]]].sx.dn[\[Theta],\[Phi]] //ComplexExpand //FullSimplify
Exp[-I*\[Phi]]*(Cos[\[Theta]]*Cos[\[Phi]]+I*Sin[\[Phi]])
Conjugate[dn[\[Theta],\[Phi]]].sx.up[\[Theta],\[Phi]] //ComplexExpand //FullSimplify
Exp[I*\[Phi]]*(Cos[\[Theta]]*Cos[\[Phi]]-I*Sin[\[Phi]])
Conjugate[dn[\[Theta],\[Phi]]].sx.dn[\[Theta],\[Phi]] //ComplexExpand //FullSimplify
-Sin[\[Theta]]*Cos[\[Phi]]
sx == Sin[\[Theta]]*Cos[\[Phi]] * KroneckerProduct[up[\[Theta],\[Phi]], Conjugate[up[\[Theta],\[Phi]]]] +
Eˆ(-I*\[Phi])*(Cos[\[Theta]]*Cos[\[Phi]]+I*Sin[\[Phi]]) *
KroneckerProduct[up[\[Theta],\[Phi]], Conjugate[dn[\[Theta],\[Phi]]]] +
Eˆ(I*\[Phi])*(Cos[\[Theta]]*Cos[\[Phi]]-I*Sin[\[Phi]]) *
KroneckerProduct[dn[\[Theta],\[Phi]], Conjugate[up[\[Theta],\[Phi]]]] -
Sin[\[Theta]]*Cos[\[Phi]] * KroneckerProduct[dn[\[Theta],\[Phi]], Conjugate[dn[\[Theta],\[Phi]]]] //
ComplexExpand //FullSimplify
True

The matrix elements of the Pauli operator σ̂ y are


sy = PauliMatrix[2];
Conjugate[up[\[Theta],\[Phi]]].sy.up[\[Theta],\[Phi]] //ComplexExpand //FullSimplify
Sin[\[Theta]]*Sin[\[Phi]]
Conjugate[up[\[Theta],\[Phi]]].sy.dn[\[Theta],\[Phi]] //ComplexExpand //FullSimplify
Exp[-I*\[Phi]]*(Cos[\[Theta]]*Sin[\[Phi]]-I*Cos[\[Phi]])
Conjugate[dn[\[Theta],\[Phi]]].sy.up[\[Theta],\[Phi]] //ComplexExpand //FullSimplify
Exp[I*\[Phi]]*(Cos[\[Theta]]*Sin[\[Phi]]+I*Cos[\[Phi]])
Conjugate[dn[\[Theta],\[Phi]]].sy.dn[\[Theta],\[Phi]] //ComplexExpand //FullSimplify
-Sin[\[Theta]]*Sin[\[Phi]]
sy == Sin[\[Theta]]*Sin[\[Phi]] * KroneckerProduct[up[\[Theta],\[Phi]], Conjugate[up[\[Theta],\[Phi]]]] +
Eˆ(-I*\[Phi])*(Cos[\[Theta]]*Sin[\[Phi]]-I*Cos[\[Phi]]) *
KroneckerProduct[up[\[Theta],\[Phi]], Conjugate[dn[\[Theta],\[Phi]]]] +
Eˆ(I*\[Phi])*(Cos[\[Theta]]*Sin[\[Phi]]+I*Cos[\[Phi]]) *
KroneckerProduct[dn[\[Theta],\[Phi]], Conjugate[up[\[Theta],\[Phi]]]] -
Sin[\[Theta]]*Sin[\[Phi]] * KroneckerProduct[dn[\[Theta],\[Phi]], Conjugate[dn[\[Theta],\[Phi]]]] //
ComplexExpand //FullSimplify
True

The matrix elements of the Pauli operator σ̂z are


sz = PauliMatrix[3];
Conjugate[up[\[Theta],\[Phi]]].sz.up[\[Theta],\[Phi]] //ComplexExpand //FullSimplify
Cos[\[Theta]]
Conjugate[up[\[Theta],\[Phi]]].sz.dn[\[Theta],\[Phi]] //ComplexExpand //FullSimplify
-Exp[-I*\[Phi]]*Sin[\[Theta]]
Conjugate[dn[\[Theta],\[Phi]]].sz.up[\[Theta],\[Phi]] //ComplexExpand //FullSimplify
-Exp[I*\[Phi]]*Sin[\[Theta]]
Conjugate[dn[\[Theta],\[Phi]]].sz.dn[\[Theta],\[Phi]] //ComplexExpand //FullSimplify
-Cos[\[Theta]]
sz == Cos[\[Theta]] * KroneckerProduct[up[\[Theta],\[Phi]], Conjugate[up[\[Theta],\[Phi]]]] -
Eˆ(-I*\[Phi])*Sin[\[Theta]] * KroneckerProduct[up[\[Theta],\[Phi]], Conjugate[dn[\[Theta],\[Phi]]]] -
Eˆ(I*\[Phi])*Sin[\[Theta]] * KroneckerProduct[dn[\[Theta],\[Phi]], Conjugate[up[\[Theta],\[Phi]]]] -
Cos[\[Theta]] * KroneckerProduct[dn[\[Theta],\[Phi]], Conjugate[dn[\[Theta],\[Phi]]]] //
ComplexExpand //FullSimplify
True

5. We check the eigenvalue equations with eigenvalues ±1:


s = sx*Sin[\[Theta]]*Cos[\[Phi]] + sy*Sin[\[Theta]]*Sin[\[Phi]] + sz*Cos[\[Theta]];
Eigenvalues[s]
{-1, 1}
s.up[\[Theta],\[Phi]] == up[\[Theta],\[Phi]] //FullSimplify
True
s.dn[\[Theta],\[Phi]] == -dn[\[Theta],\[Phi]] //FullSimplify
True

Q2.2

1. Since Σ_{n=1}^{∞} |n⟩⟨n| = 1, we have P_∞(x, y) = ⟨x|1|y⟩ = ⟨x|y⟩ = δ(x − y).
2. P_{nmax}(x, y) = ⟨x|[Σ_{n=1}^{nmax} |n⟩⟨n|]|y⟩ = Σ_{n=1}^{nmax} ⟨x|n⟩⟨n|y⟩ = 2 Σ_{n=1}^{nmax} sin(nπx) sin(nπy):
With[{nmax = 10},
P[x_, y_] = 2*Sum[Sin[n*\[Pi]*x]*Sin[n*\[Pi]*y], {n, nmax}];
DensityPlot[P[x, y], {x, 0, 1}, {y, 0, 1},
PlotRange -> All, PlotPoints -> 2*nmax]]

3. The operator Π̂_{nmax} = Σ_{n=1}^{nmax} |n⟩⟨n| is the projector onto the computational sub-
space (see Sect. 2.1.1). The function P_{nmax}(x, y) = ⟨x|Π̂_{nmax}|y⟩ is its real-space
representation. Since the plot of P_{nmax}(x, y) has a finite spatial resolution (i.e.,
no structure at length scales smaller than 1/nmax), we see that this projection
operator Π̂_{nmax} is associated with a spatial smoothing operation.

Q2.3 See Q2.1.


Q2.4 Inserting Eq. (2.32) into Eq. (2.17) gives the quantum state

|ψ(t)⟩ = exp[−(i/ℏ) ∫_{t0}^{t} Ĥ(s) ds] |ψ(t0)⟩.    (1)

We calculate its time-derivative with the chain rule and
d/dx ∫_{f(x)}^{g(x)} h(x, y) dy = h(x, g(x)) g′(x) − h(x, f(x)) f′(x) + ∫_{f(x)}^{g(x)} [∂h(x, y)/∂x] dy:

d/dt |ψ(t)⟩ = −(i/ℏ) Ĥ(t) exp[−(i/ℏ) ∫_{t0}^{t} Ĥ(s) ds] |ψ(t0)⟩ = −(i/ℏ) Ĥ(t) |ψ(t)⟩,    (2)

which is the Schrödinger equation (2.16).


Q2.5 The Hamiltonian is
{sx,sy,sz} = Table[PauliMatrix[i], {i, 3}];
H = Sin[\[Theta]]*Cos[\[Phi]]*sx + Sin[\[Theta]]*Sin[\[Phi]]*sy + Cos[\[Theta]]*sz //FullSimplify
{{Cos[\[Theta]], Eˆ(-I*\[Phi])*Sin[\[Theta]]}, {Eˆ(I*\[Phi])*Sin[\[Theta]], -Cos[\[Theta]]}}

and the propagator is calculated from Eq. (2.34)


U = MatrixExp[-I*(t-t0)/\[HBar]*H] //FullSimplify
{{Cos[(t-t0)/\[HBar]]-I*Cos[\[Theta]]*Sin[(t-t0)/\[HBar]], -I*Eˆ(-I*\[Phi])*Sin[\[Theta]]*Sin[(t-t0)/\[HBar]]},
{-I*Eˆ(I*\[Phi])*Sin[\[Theta]]*Sin[(t-t0)/\[HBar]], Cos[(t-t0)/\[HBar]]+I*Cos[\[Theta]]*Sin[(t-t0)/\[HBar]]}}

Q2.6 The definitions are ordered with decreasing specificity:


?U
Global‘U
U[\[Tau]_?NumericQ] := MatrixExp[-I H N[\[Tau]]]
U[\[Tau]_] = MatrixExp[-I H \[Tau]]

In this way, the more general definition In[234] does not override the more specific
definition In[235].
Q2.7
1. The Hamiltonian is

Ĥ = −(ℏ²/2m)(∂²/∂x1² + ∂²/∂y1² + ∂²/∂z1² + ∂²/∂x2² + ∂²/∂y2² + ∂²/∂z2²)
    + (1/2) mω²(x1² + y1² + z1² + x2² + y2² + z2²) + g δ(x1 − x2) δ(y1 − y2) δ(z1 − z2)    (3)

2. For example, we could use the harmonic-oscillator basis functions that diagonal-
ize the six degrees of freedom in the absence of coupling (g = 0): the states |n⟩
for which

[−(ℏ²/2m) ∂²/∂x² + (1/2) mω² x²] |n⟩ = ℏω(n + 1/2) |n⟩,    (4)

with n ∈ N. Explicitly, the position representations of these states are

⟨x|n⟩ = φn(x) = x0^(−1/2) [Hn(x/x0) / √(2ⁿ n! √π)] e^(−x²/(2x0²)).    (5)

For the six degrees of freedom we therefore propose the basis functions
|n_{x1}, n_{y1}, n_{z1}, n_{x2}, n_{y2}, n_{z2}⟩.

3. The matrix elements are

⟨n′_{x1}, n′_{y1}, n′_{z1}, n′_{x2}, n′_{y2}, n′_{z2}| Ĥ |n_{x1}, n_{y1}, n_{z1}, n_{x2}, n_{y2}, n_{z2}⟩
= ℏω δ_{n_{x1},n′_{x1}} δ_{n_{y1},n′_{y1}} δ_{n_{z1},n′_{z1}} δ_{n_{x2},n′_{x2}} δ_{n_{y2},n′_{y2}} δ_{n_{z2},n′_{z2}} (n_{x1} + n_{y1} + n_{z1} + n_{x2} + n_{y2} + n_{z2} + 3)
  + g ⟨n′_{x1}, n′_{y1}, n′_{z1}, n′_{x2}, n′_{y2}, n′_{z2}| ∫_{−∞}^{∞} dx1 dy1 dz1 dx2 dy2 dz2 |x1⟩⟨x1| ⊗ |y1⟩⟨y1| ⊗ |z1⟩⟨z1| ⊗ |x2⟩⟨x2| ⊗ |y2⟩⟨y2| ⊗ |z2⟩⟨z2|
    × δ(x1 − x2) δ(y1 − y2) δ(z1 − z2) |n_{x1}, n_{y1}, n_{z1}, n_{x2}, n_{y2}, n_{z2}⟩
= ℏω δ_{n_{x1},n′_{x1}} ⋯ δ_{n_{z2},n′_{z2}} (n_{x1} + ⋯ + n_{z2} + 3)
  + g ∫_{−∞}^{∞} dx1 dy1 dz1 dx2 dy2 dz2 δ(x1 − x2) δ(y1 − y2) δ(z1 − z2)
    × φ_{n_{x1}}(x1) φ_{n′_{x1}}(x1) φ_{n_{y1}}(y1) φ_{n′_{y1}}(y1) φ_{n_{z1}}(z1) φ_{n′_{z1}}(z1) φ_{n_{x2}}(x2) φ_{n′_{x2}}(x2) φ_{n_{y2}}(y2) φ_{n′_{y2}}(y2) φ_{n_{z2}}(z2) φ_{n′_{z2}}(z2)
= ℏω δ_{n_{x1},n′_{x1}} ⋯ δ_{n_{z2},n′_{z2}} (n_{x1} + ⋯ + n_{z2} + 3)
  + g [∫_{−∞}^{∞} dx φ_{n_{x1}}(x) φ_{n′_{x1}}(x) φ_{n_{x2}}(x) φ_{n′_{x2}}(x)] [∫_{−∞}^{∞} dy φ_{n_{y1}}(y) φ_{n′_{y1}}(y) φ_{n_{y2}}(y) φ_{n′_{y2}}(y)] [∫_{−∞}^{∞} dz φ_{n_{z1}}(z) φ_{n′_{z1}}(z) φ_{n_{z2}}(z) φ_{n′_{z2}}(z)]
= ℏω δ_{n_{x1},n′_{x1}} ⋯ δ_{n_{z2},n′_{z2}} (n_{x1} + ⋯ + n_{z2} + 3) + (g/x0³) R_{n_{x1},n′_{x1},n_{x2},n′_{x2}} R_{n_{y1},n′_{y1},n_{y2},n′_{y2}} R_{n_{z1},n′_{z1},n_{z2},n′_{z2}}.    (6)

The required dimensionless integrals over products of four harmonic-oscillator
eigenstates,

R_{a,b,c,d} = x0 ∫_{−∞}^{∞} dx φa(x) φb(x) φc(x) φd(x)
            = ∫_{−∞}^{∞} dξ [Ha(ξ) Hb(ξ) Hc(ξ) Hd(ξ) / (π √(2^(a+b+c+d) a! b! c! d!))] e^(−2ξ²),    (7)

can either be calculated by analytic integration,
\[Phi][n_, x_] = HermiteH[n, x]/Sqrt[2ˆn*n!*Sqrt[\[Pi]]]*Eˆ(-xˆ2/2);
R[a_Integer/;a>=0, b_Integer/;b>=0, c_Integer/;c>=0, d_Integer/;d>=0] :=
Integrate[\[Phi][a,x]*\[Phi][b,x]*\[Phi][c,x]*\[Phi][d,x], {x, -\[Infinity], \[Infinity]}]

or by an explicit hypergeometric formula1 (much faster),


R[a_Integer/;a>=0, b_Integer/;b>=0, c_Integer/;c>=0, d_Integer/;d>=0] :=
If[OddQ[a+b+c+d], 0,
1/\[Pi]*(-1)ˆ((a+b-c+d)/2)*Sqrt[c!/(2a!b!d!)]*
Gamma[(1+a-b+c-d)/2]*Gamma[(1-a+b+c-d)/2]*
HypergeometricPFQRegularized[{(1+a-b+c-d)/2,(1-a+b+c-d)/2,-d},
{1+c-d,(1-a-b+c-d)/2},1]]
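Equation (7) can also be spot-checked by direct numerical integration. The following pure-Python sketch (Hermite polynomials from the recurrence H_{n+1}(ξ) = 2ξ Hn(ξ) − 2n H_{n−1}(ξ), trapezoidal quadrature; the integration range and step count are illustrative choices) reproduces R_{0,0,0,0} = 1/√(2π) and the vanishing of odd-parity integrals:

```python
import math

def hermite(n, x):
    """Physicists' Hermite polynomial via H_{n+1} = 2x H_n - 2n H_{n-1}."""
    h0, h1 = 1.0, 2.0 * x
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, 2.0 * x * h1 - 2.0 * k * h0
    return h1

def R(a, b, c, d, xmax=8.0, steps=4000):
    """Eq. (7) by trapezoidal integration over [-xmax, xmax]."""
    norm = math.pi * math.sqrt(2.0 ** (a + b + c + d)
                               * math.factorial(a) * math.factorial(b)
                               * math.factorial(c) * math.factorial(d))
    h = 2.0 * xmax / steps
    total = 0.0
    for i in range(steps + 1):
        x = -xmax + i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * (hermite(a, x) * hermite(b, x) * hermite(c, x)
                      * hermite(d, x) * math.exp(-2.0 * x * x))
    return h * total / norm

assert abs(R(0, 0, 0, 0) - 1.0 / math.sqrt(2.0 * math.pi)) < 1e-9
assert abs(R(0, 0, 0, 1)) < 1e-9   # odd a+b+c+d vanishes by parity
```

Because the Gaussian integrand decays rapidly, the trapezoidal rule is effectively spectrally accurate here.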

Q2.8
\[Psi] = Flatten[Table[\[Psi]1[[i1]]*\[Psi]2[[i2]]*\[Psi]3[[i3]],
{i1, Length[\[Psi]1]}, {i2, Length[\[Psi]2]}, {i3, Length[\[Psi]3]}]]

1 See http://www.ph.unimelb.edu.au/˜jnnewn/cm-seminar-results/report/AnalyticIntegralOfFourHermites.pdf.

Q2.9
A = Flatten[Table[a1[[i1,j1]]*a2[[i2,j2]]*a3[[i3,j3]],
{i1, Length[a1]}, {i2, Length[a2]}, {i3, Length[a3]},
{j1, Length[Transpose[a1]]}, {j2, Length[Transpose[a2]]},
{j3, Length[Transpose[a3]]}], {{1,2,3}, {4,5,6}}]

Q2.10 Manual calculation:

|ψ⟩ = [0.8|↑⟩ − 0.6|↓⟩] ⊗ [0.6i|↑⟩ + 0.8|↓⟩] = 0.48i|↑↑⟩ + 0.64|↑↓⟩ − 0.36i|↓↑⟩ − 0.48|↓↓⟩,    (8)

where |↑↓⟩ = |↑⟩ ⊗ |↓⟩ etc. In Mathematica, using the computational basis
{|↑⟩, |↓⟩}, in this order:
\[Psi]1 = {0.8, -0.6};
\[Psi]2 = {0.6*I, 0.8};
\[Psi] = Flatten[KroneckerProduct[\[Psi]1, \[Psi]2]]
{0.+0.48*I, 0.64+0.*I, 0.-0.36*I, -0.48+0.*I}

The ordering of the joint basis in the KroneckerProduct result is therefore
{|↑↑⟩, |↑↓⟩, |↓↑⟩, |↓↓⟩}.
Q2.11 We calculate the reduced density matrices with the traceout command
of In[256] and In[257]:
\[Rho]1 = traceout[\[Psi], -2]
{{0.64+0.*I, -0.48+0.*I}, {-0.48+0.*I, 0.36+0.*I}}
\[Rho]2 = traceout[\[Psi], 2]
{{0.36+0.*I, 0.+0.48*I}, {0.-0.48*I, 0.64+0.*I}}

Since |ψ⟩ is a product state, these reduced density matrices are equal to the pure
states of the subsystems:
\[Rho]1 == KroneckerProduct[\[Psi]1, Conjugate[\[Psi]1]]
True
\[Rho]2 == KroneckerProduct[\[Psi]2, Conjugate[\[Psi]2]]
True

Chapter 3 Spin and Angular Momentum

Q3.1
sx[1/2] == 1/2*PauliMatrix[1]
True
sy[1/2] == 1/2*PauliMatrix[2]
True
sz[1/2] == 1/2*PauliMatrix[3]
True

Q3.2 We only check up to S = 10:


1. commutators:
Table[sx[S].sy[S]-sy[S].sx[S] == I*sz[S], {S, 0, 10, 1/2}]
{True,True,True,True,True,True,True,True,True,True,True,True,True,
True,True,True,True,True,True,True,True}
Table[sy[S].sz[S]-sz[S].sy[S] == I*sx[S], {S, 0, 10, 1/2}]
{True,True,True,True,True,True,True,True,True,True,True,True,True,
True,True,True,True,True,True,True,True}
Table[sz[S].sx[S]-sx[S].sz[S] == I*sy[S], {S, 0, 10, 1/2}]
{True,True,True,True,True,True,True,True,True,True,True,True,True,
True,True,True,True,True,True,True,True}

2. spin length:
Table[sx[S].sx[S]+sy[S].sy[S]+sz[S].sz[S] == S*(S+1)*id[S], {S,0,10,1/2}]
{True,True,True,True,True,True,True,True,True,True,True,True,True,
True,True,True,True,True,True,True,True}

3. Make sure to quit the Mathematica kernel before loading the spin-operator def-
initions and executing the following commands. On a MacBook Pro (Retina,
13-inch, Early 2015) with a 3.1 GHz Intel Core i7 CPU and 16 GB 1867 MHz
DDR3 RAM, the limit is around S = 10⁵ for all verifications:
s=100000;
sx[s].sy[s]-sy[s].sx[s] == I*sz[s] //Timing
{54.3985, True}
sy[s].sz[s]-sz[s].sy[s] == I*sx[s] //Timing
{58.4917, True}
sz[s].sx[s]-sx[s].sz[s] == I*sy[s] //Timing
{57.8856, True}
sx[s].sx[s]+sy[s].sy[s]+sz[s].sz[s] == s*(s+1)*id[s] //Timing
{33.5487, True}

Q3.3 The expressions rapidly increase in complexity with increasing S:


n = {Sin[\[Theta]]*Cos[\[Phi]], Sin[\[Theta]]*Sin[\[Phi]], Cos[\[Theta]]};
With[{S=0}, MatrixExp[-I*\[Alpha]*n.{sx[S],sy[S],sz[S]}] //FullSimplify]
{{1}}
With[{S=1/2}, MatrixExp[-I*\[Alpha]*n.{sx[S],sy[S],sz[S]}] //FullSimplify]
{{Cos[\[Alpha]/2]-I*Cos[\[Theta]]*Sin[\[Alpha]/2], Sin[\[Alpha]/2]*Sin[\[Theta]]*(-I*Cos[\[Phi]]-Sin[\[Phi]])},
{Sin[\[Alpha]/2]*Sin[\[Theta]]*(-I*Cos[\[Phi]]+Sin[\[Phi]]), Cos[\[Alpha]/2]+I*Cos[\[Theta]]*Sin[\[Alpha]/2]}}
% /. \[Alpha] -> 0
{{1, 0}, {0, 1}}
With[{S=1}, MatrixExp[-I*\[Alpha]*n.{sx[S],sy[S],sz[S]}] //FullSimplify]
{{(Cos[\[Alpha]/2]-I*Cos[\[Theta]]*Sin[\[Alpha]/2])ˆ2,
Eˆ(-I*\[Phi])*((-1+Cos[\[Alpha]])*Cos[\[Theta]]-I*Sin[\[Alpha]])*Sin[\[Theta]]/Sqrt[2],
-Eˆ(-2I*\[Phi])*Sin[\[Alpha]/2]ˆ2*Sin[\[Theta]]ˆ2},
{Sqrt[2]*Eˆ(-I*\[Alpha])*Sin[\[Alpha]/2]*(Cos[\[Alpha]/2]-I*Cos[\[Theta]]*Sin[\[Alpha]/2])
*Sin[\[Theta]]*(-I*Cos[\[Alpha]+\[Phi]]+Sin[\[Alpha]+\[Phi]]),
Cos[\[Alpha]/2]ˆ2+Cos[2\[Theta]]*Sin[\[Alpha]/2]ˆ2,
Eˆ(-I*\[Phi])*(Cos[\[Theta]]-Cos[\[Alpha]]*Cos[\[Theta]]-I*Sin[\[Alpha]])*Sin[\[Theta]]/Sqrt[2]},
{-Eˆ(2I*\[Phi])*Sin[\[Alpha]/2]ˆ2*Sin[\[Theta]]ˆ2,
-Eˆ(I*\[Phi])*((-1+Cos[\[Alpha]])*Cos[\[Theta]]+I*Sin[\[Alpha]])*Sin[\[Theta]]/Sqrt[2],
(Cos[\[Alpha]/2]+I*Cos[\[Theta]]*Sin[\[Alpha]/2])ˆ2}}
% /. \[Alpha] -> 0
{{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}

Q3.4 In the unit system of In[275] we have


{eval, evec} = Eigensystem[H[Quantity[1,"Teslas"]/MagneticFieldUnit, 0, 0]]
{{-14012.476, 14012.476}, {{-0.7071068, 0.7071068}, {0.7071068, 0.7071068}}}

To convert the energy eigenvalues to Joules (or Yoctojoules), we use


UnitConvert[eval*EnergyUnit, "Yoctojoules"]
{-9.284765 Yoctojoules, 9.284765 Yoctojoules}

The corresponding eigenvectors are in the ±x direction:

• ground state: E₋ = −9.28 × 10⁻²⁴ J = −9.28 yJ; |ψ₋⟩ = |−x⟩ = (|↑⟩ − |↓⟩)/√2
• excited state: E₊ = +9.28 × 10⁻²⁴ J = +9.28 yJ; |ψ₊⟩ = |+x⟩ = (|↑⟩ + |↓⟩)/√2

Q3.5 See also Q2.1 and Q2.3.


Bvec = B*{Sin[\[Theta]]*Cos[\[Phi]], Sin[\[Theta]]*Sin[\[Phi]], Cos[\[Theta]]};
Svec = {sx[1/2], sy[1/2], sz[1/2]};
H = -\[Mu]B*ge*Bvec.Svec //FullSimplify;
{eval, evec} = Eigensystem[H];
eval
{-B*ge*\[Mu]B/2, B*ge*\[Mu]B/2}
Assuming[0<\[Theta]<\[Pi], ComplexExpand[Normalize /@ evec] //FullSimplify]
{{Eˆ(-I*\[Phi])*Cos[\[Theta]/2], Sin[\[Theta]/2]}, {-Eˆ(-I*\[Phi])*Sin[\[Theta]/2], Cos[\[Theta]/2]}}

Q3.6 We define all operators in the combined Hilbert space of both spins, so that
the operators F̂ can be defined by addition:
With[{i=3, j=5},
Ix = KroneckerProduct[sx[i], id[j]];
Iy = KroneckerProduct[sy[i], id[j]];
Iz = KroneckerProduct[sz[i], id[j]];
Jx = KroneckerProduct[id[i], sx[j]];
Jy = KroneckerProduct[id[i], sy[j]];
Jz = KroneckerProduct[id[i], sz[j]];
Fx=Ix+Jx; Fy=Iy+Jy; Fz=Iz+Jz;]

We calculate the eigenvalues in ascending order with Sort. Remember that
|I − J| ≤ F ≤ I + J, and therefore F ∈ {2, 3, 4, 5, 6, 7, 8} and ⟨F̂²⟩ = F(F + 1) ∈
{6, 12, 20, 30, 42, 56, 72}.
Ix.Ix + Iy.Iy + Iz.Iz //Eigenvalues //Sort
{12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,
12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,
12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,12}
Iz //Eigenvalues //Sort
{-3,-3,-3,-3,-3,-3,-3,-3,-3,-3,-3,-2,-2,-2,-2,-2,-2,-2,-2,-2,-2,-2,-1,-1,-1,
-1,-1,-1,-1,-1,-1,-1,-1,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,1,2,2,2,2,2,
2,2,2,2,2,2,3,3,3,3,3,3,3,3,3,3,3}
Jx.Jx + Jy.Jy + Jz.Jz //Eigenvalues //Sort
{30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,
30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,
30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30}
Jz //Eigenvalues //Sort
{-5,-5,-5,-5,-5,-5,-5,-4,-4,-4,-4,-4,-4,-4,-3,-3,-3,-3,-3,-3,-3,-2,-2,-2,-2,
-2,-2,-2,-1,-1,-1,-1,-1,-1,-1,0,0,0,0,0,0,0,1,1,1,1,1,1,1,2,2,2,2,2,2,2,3,3,3,
3,3,3,3,4,4,4,4,4,4,4,5,5,5,5,5,5,5}
Fx.Fx + Fy.Fy + Fz.Fz //Eigenvalues //Sort
{6,6,6,6,6,12,12,12,12,12,12,12,20,20,20,20,20,20,20,20,20,30,30,30,30,30,30,
30,30,30,30,30,42,42,42,42,42,42,42,42,42,42,42,42,42,56,56,56,56,56,56,56,56,
56,56,56,56,56,56,56,72,72,72,72,72,72,72,72,72,72,72,72,72,72,72,72,72}
Fz //Eigenvalues //Sort
{-8,-7,-7,-6,-6,-6,-5,-5,-5,-5,-4,-4,-4,-4,-4,-3,-3,-3,-3,-3,-3,-2,-2,-2,-2,
-2,-2,-2,-1,-1,-1,-1,-1,-1,-1,0,0,0,0,0,0,0,1,1,1,1,1,1,1,2,2,2,2,2,2,2,3,3,3,
3,3,3,4,4,4,4,4,5,5,5,5,6,6,6,7,7,8}
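The multiplet structure read off from these sorted eigenvalues can be cross-checked by simple bookkeeping in any language: for I = 3 and J = 5 each allowed F occurs once, with 2F + 1 values of M_F, and the multiplicities must exhaust the product space of dimension (2I + 1)(2J + 1) = 77. A short Python sketch (counting only, no operators):

```python
I, J = 3, 5
F_values = list(range(abs(I - J), I + J + 1))          # allowed total spins
assert F_values == [2, 3, 4, 5, 6, 7, 8]
assert [F * (F + 1) for F in F_values] == [6, 12, 20, 30, 42, 56, 72]
# multiplicities (2F+1) add up to the product-space dimension (2I+1)(2J+1)
assert sum(2 * F + 1 for F in F_values) == (2 * I + 1) * (2 * J + 1) == 77
```

These counts match the degeneracies of the F̂² eigenvalues listed above (e.g., 6 appears five times, 72 appears seventeen times).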

Q3.7 The Clebsch–Gordan coefficients ⟨I, M_I, J, M_J |I, J, F, M_F⟩ serve to con-
struct the states that simultaneously diagonalize Î², Ĵ², F̂², and F̂z from those
that simultaneously diagonalize Î², Îz, Ĵ², and Ĵz: for |I − J| ≤ F ≤ I + J and
|M_F| ≤ F,

|I, J, F, M_F⟩ = Σ_{M_I=−I}^{I} Σ_{M_J=−J}^{J} ⟨I, M_I, J, M_J |I, J, F, M_F⟩ |I, M_I⟩ ⊗ |J, M_J⟩    (9)

In Mathematica, S[i,j,F,MF] = |I, J, F, M_F⟩:


S[i_,j_,F_,MF_] := Sum[ClebschGordan[{i,Mi},{j,Mj},{F,MF}]*
Flatten[KroneckerProduct[SparseArray[i-Mi+1->1, 2i+1],
SparseArray[j-Mj+1->1, 2j+1]]],
{Mi,-i,i}, {Mj,-j,j}]

Check that these diagonalize Î², Ĵ², F̂², and F̂z simultaneously:

With[{i=3, j=5},
 Table[(Ix.Ix+Iy.Iy+Iz.Iz).S[i,j,F,MF] == i(i+1)*S[i,j,F,MF] &&
  (Jx.Jx+Jy.Jy+Jz.Jz).S[i,j,F,MF] == j(j+1)*S[i,j,F,MF] &&
  (Fx.Fx+Fy.Fy+Fz.Fz).S[i,j,F,MF] == F(F+1)*S[i,j,F,MF] &&
  Fz.S[i,j,F,MF] == MF*S[i,j,F,MF],
  {F,Abs[i-j],i+j}, {MF,-F,F}]]

(disregard the warnings about ClebschGordan::phy).


When we use the basis of product states |I, M_I⟩ ⊗ |J, M_J⟩ to calculate the
eigenvectors of the matrices Fx.Fx+Fy.Fy+Fz.Fz and Fz, using either
Eigenvectors or Eigensystem, these Clebsch–Gordan coefficients appear
naturally as coefficients of the resulting eigenvectors.
Q3.8
With[{S = 100},
\[Psi] = SparseArray[1->1, 2S+1];
{x,y,z, xx,yy,zz} = Conjugate[\[Psi]].(#.\[Psi])& /@
{sx[S],sy[S],sz[S], sx[S].sx[S],sy[S].sy[S],sz[S].sz[S]};
{{x,y,z}, {xx-x^2,yy-y^2,zz-z^2}}]
{{0, 0, 100}, {50, 50, 0}}
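The same numbers can be reproduced outside Mathematica; here is a minimal NumPy sketch (not from the book) that builds the spin-100 operators explicitly and evaluates the same means and variances:

```python
import numpy as np

# NumPy sketch: build the spin-S operators in the |S,m> basis with
# m = S, S-1, ..., -S and reproduce the output of Q3.8.
S = 100
m = np.arange(S, -S - 1, -1, dtype=float)        # magnetic quantum numbers
c = np.sqrt(S * (S + 1) - m[1:] * (m[1:] + 1))   # matrix elements <m+1|S+|m>
Sp = np.diag(c, k=1)                             # raising operator S+
Sz = np.diag(m)
Sx = (Sp + Sp.T) / 2
Sy = (Sp - Sp.T) / 2j

psi = np.zeros(2 * S + 1)
psi[0] = 1                                       # stretched state |S, m=S>

def ev(A):
    """Expectation value <psi|A|psi>."""
    return float(np.real(psi @ A @ psi))

means = [ev(Sx), ev(Sy), ev(Sz)]
varis = [ev(A @ A) - ev(A) ** 2 for A in (Sx, Sy, Sz)]
print(means, varis)   # close to [0, 0, 100] and [50, 50, 0]
```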

In general,

⟨Ŝx⟩ = 0    ⟨Ŝy⟩ = 0    ⟨Ŝz⟩ = S
⟨Ŝx²⟩ = S/2    ⟨Ŝy²⟩ = S/2    ⟨Ŝz²⟩ = S²
⟨Ŝx²⟩ − ⟨Ŝx⟩² = S/2    ⟨Ŝy²⟩ − ⟨Ŝy⟩² = S/2    ⟨Ŝz²⟩ − ⟨Ŝz⟩² = 0 (10)

Q3.9
{\[CapitalDelta], \[CapitalOmega]} /.
{Ei -> eval[[2]], Ej -> eval[[7]], Tij -> T[[2,7]], Tji -> T[[7,2]]} /.
hfc /. {Bz->3.22895,Bacx->0.1,Bacy->0,Bacz->0,\[Omega]->2\[Pi]*6827.9}
{0.00476766, 0.762616}

The oscillation period is 2π/Ω = 8.238 99 µs, which matches the full oscillation periods of the plots of In[321] and In[322].
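As a quick cross-check in plain Python (the frequency unit rad/µs is implied by the book's unit choices above):

```python
import math

# Omega is the generalized Rabi frequency printed in the previous output.
Omega = 0.762616
period = 2 * math.pi / Omega    # oscillation period, in microseconds
print(period)                   # close to 8.239
```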
Q3.10 ²³Na has the same nuclear spin I = 3/2 as ⁸⁷Rb; they differ only in the constants:
• Ahfs = 885.813 064 40 MHz
• g_I = 0.000 804 610 80
• g_L = −0.999 976 13
As a result, there is a magic field between the same states as for ⁸⁷Rb, but at a field strength Bz = 0.676 851 G ≈ 16A g_I/(3 μ_B g_S²).
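A quick plain-Python check of this approximate formula (the values of μ_B/h and g_S below are assumed CODATA-style constants, not taken from the text):

```python
# magic-field estimate for 23Na from the approximate formula above
A = 885.81306440     # Ahfs in MHz
gI = 0.00080461080
muB = 1.3996245      # Bohr magneton / h, in MHz/G (assumed value)
gS = 2.0023193       # electron g-factor (assumed value)
Bz = 16 * A * gI / (3 * muB * gS**2)
print(Bz)            # close to the exact 0.676851 G
```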

Q3.11 ⁸⁵Rb has a nuclear spin I = 5/2, which means that we must re-define the
spin operators; all operators are now 12 × 12 matrices. Further, the constants to be
used are
• Ahfs = 1.011 910 813 0 GHz
• g_I = 0.000 293 640 00
• g_L = −0.999 993 54
There are two magic fields:
• Bz = 0.357 312 G ≈ 27A g_I/(4 μ_B g_S²): the energy difference between |F = 2, M_F = −1⟩ and |F = 3, M_F = 1⟩ is stationary.
• Bz = 1.143 42 G ≈ 108A g_I/(5 μ_B g_S²): the energy difference between |F = 2, M_F = −2⟩ and |F = 3, M_F = 2⟩ is stationary.

Q3.12 ¹³³Cs has a nuclear spin I = 7/2, which means that we must re-define the
spin operators; all operators are now 16 × 16 matrices. Further, the constants to be
used are
• Ahfs = 2.298 157 942 5 GHz
• g_I = 0.000 398 853 95
• g_L = −0.999 995 87
There are three magic fields:
• Bz = 1.393 34 G ≈ 128A g_I/(15 μ_B g_S²): the energy difference between |F = 3, M_F = −1⟩ and |F = 4, M_F = 1⟩ is stationary.
• Bz = 3.483 38 G ≈ 64A g_I/(3 μ_B g_S²): the energy difference between |F = 3, M_F = −2⟩ and |F = 4, M_F = 2⟩ is stationary.
• Bz = 8.9572 G ≈ 384A g_I/(7 μ_B g_S²): the energy difference between |F = 3, M_F = −3⟩ and |F = 4, M_F = 3⟩ is stationary.
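The three ¹³³Cs formulas can be checked the same way in plain Python (μ_B/h and g_S are assumed CODATA-style constants, not taken from the text):

```python
# magic-field estimates for 133Cs from the approximate formulas above
A = 2298.1579425     # Ahfs in MHz
gI = 0.00039885395
muB = 1.3996245      # Bohr magneton / h, in MHz/G (assumed value)
gS = 2.0023193       # electron g-factor (assumed value)
fields = [128 * A * gI / (15 * muB * gS**2),
          64 * A * gI / (3 * muB * gS**2),
          384 * A * gI / (7 * muB * gS**2)]
print(fields)        # close to the exact values 1.39334, 3.48338, 8.9572 G
```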

Q3.13 In the result from

Assuming[A>0, FullSimplify[T/.{Bx->0,By->0,Bz->0,Bacx->B,Bacy->I*B,Bacz->0}]]

we can see that the transitions 1 ↔ 5, 1 ↔ 6, 3 ↔ 7, 3 ↔ 8, 4 ↔ 7, 4 ↔ 8 are
allowed. Using Eq. (3.9) we identify these transitions as |2, 2⟩ ↔ |1, 1⟩, |2, 2⟩ ↔
|2, 1⟩, |1, 0⟩ ↔ |1, −1⟩, |1, 0⟩ ↔ |2, −1⟩, |2, 0⟩ ↔ |1, −1⟩, |2, 0⟩ ↔ |2, −1⟩. These
transitions all have ΔM_F = ±1.
Q3.14 In the result from
Assuming[A>0, FullSimplify[T/.{Bx->0,By->0,Bz->0,Bacx->0,Bacy->0,Bacz->B}]]

we can see that the transitions 3 ↔ 4, 5 ↔ 6, 7 ↔ 8 are allowed. Using Eq. (3.9)
we identify these transitions as |1, 0⟩ ↔ |2, 0⟩, |1, 1⟩ ↔ |2, 1⟩, |1, −1⟩ ↔ |2, −1⟩.
These transitions all have ΔM_F = 0. Further, the energy of levels 1, 2, 5, 6, 7, 8 (i.e.,
all levels with M_F ≠ 0) will be shifted by the non-zero diagonal elements of T.
Q3.15 On a MacBook Pro (Retina, 13-inch, Early 2015) with a 3.1 GHz Intel
Core i7 CPU and 16 GB 1867 MHz DDR3 RAM, it takes around 20 min
(AbsoluteTiming[\[Gamma]=gs[1,1];]) to calculate the ground state gs[1,1]
with the definition of In[350] and N = 22. This calculation uses over 24 GB
of compressed RAM (MaxMemoryUsed[]); N = 22 is the upper limit for this
computer, since for N = 23 the calculation runs out of memory.
Q3.16 With the Mathematica code of Sect. 3.4, setting S = 1 and N = 12, we find
a phase transition around b = ±2. Similar to the S = 1/2 case, the Ising model is
gapless for |b| < 2 and gapped for |b| > 2. The correlations look qualitatively similar
to the ones found for S = 1/2.
Q3.17
1. For b → ±∞ the ground states are analogous to those of the transverse Ising
model, Eq. (3.28), along the ±z axis:

|ψ+∞⟩ = |+z⟩^⊗N , |ψ−∞⟩ = |−z⟩^⊗N . (11)

Notice that, unlike the transverse Ising model, these asymptotic ground states are
the exact ground states for |b| > 2, not just in the limits b → ±∞.
2. There are phase transitions at b = ±2, recognizable in the ground-state gap.
3. Since the states of Eq. (11) are product states, there are absolutely no correlations
between the states of the spins for |b| > 2. For |b| < 2 the magnetization, spin–
spin correlations, and entanglement entropy are qualitatively similar to those of
the transverse Ising model. For b = 0 the spin–spin correlations do not reach the
full uniform 0.25 as for the Ising model, but rather they still decay with distance.
Q3.18
1. For b → ±∞ the ground states are the same as Eq. (11).
2. At b = 0 the ground-state degeneracy is N + 1.

3. For any b > 0, |ψ+∞⟩ is the exact ground state; for any b < 0, |ψ−∞⟩ is the exact
ground state. There is a phase transition at b = 0.
4. Since the states of Eq. (11) are product states, there are absolutely no correlations
between the states of the spins for any b ≠ 0.
Q3.19
1. ρ̂_AB = |ψ⟩⟨ψ| = |↑↑⟩⟨↑↑|:
\[Psi] = Flatten[KroneckerProduct[{1,0}, {1,0}]]
{1, 0, 0, 0}
\[Rho]AB = KroneckerProduct[\[Psi], Conjugate[\[Psi]]]
{{1,0,0,0}, {0,0,0,0}, {0,0,0,0}, {0,0,0,0}}

2. ρ̂_A = Tr_B ρ̂_AB = |↑⟩⟨↑| is a pure state:


\[Rho]A = traceout[\[Rho]AB, -2]
{{1,0}, {0,0}}
Tr[\[Rho]A.\[Rho]A]
1

3. ρ̂_B = Tr_A ρ̂_AB = |↑⟩⟨↑| is a pure state:


\[Rho]B = traceout[\[Rho]AB, 2]
{{1,0}, {0,0}}
Tr[\[Rho]B.\[Rho]B]
1

4. Using In[353] and In[354], we see that the entropy of entanglement is
S_A − S_AB = S_B − S_AB = 0 (no entanglement):
SAB = Total[s /@ Eigenvalues[\[Rho]AB]]
0
SA = Total[s /@ Eigenvalues[\[Rho]A]]
0
SB = Total[s /@ Eigenvalues[\[Rho]B]]
0

Q3.20
1. ρ̂_AB = |ψ⟩⟨ψ| = [(|↑↓⟩ − |↓↑⟩)/√2] [(⟨↑↓| − ⟨↓↑|)/√2]
 = ½(|↑↓⟩⟨↑↓| − |↑↓⟩⟨↓↑| − |↓↑⟩⟨↑↓| + |↓↑⟩⟨↓↑|):
\[Psi] = Flatten[KroneckerProduct[{1,0}, {0,1}]
- KroneckerProduct[{0,1}, {1,0}]]/Sqrt[2]
{0, 1/Sqrt[2], -1/Sqrt[2], 0}
\[Rho]AB = KroneckerProduct[\[Psi], Conjugate[\[Psi]]]
{{0,0,0,0}, {0,1/2,-1/2,0}, {0,-1/2,1/2,0}, {0,0,0,0}}

2. ρ̂_A = Tr_B ρ̂_AB = ½(|↑⟩⟨↑| + |↓⟩⟨↓|) is a mixed state:


\[Rho]A = traceout[\[Rho]AB, -2]
{{1/2,0}, {0,1/2}}
Tr[\[Rho]A.\[Rho]A]
1/2

3. ρ̂_B = Tr_A ρ̂_AB = ½(|↑⟩⟨↑| + |↓⟩⟨↓|) is a mixed state:


\[Rho]B = traceout[\[Rho]AB, 2]
{{1/2,0}, {0,1/2}}
Tr[\[Rho]B.\[Rho]B]
1/2

4. Using In[353] and In[354], we see that the entropy of entanglement is
S_A − S_AB = S_B − S_AB = 1 (maximal entanglement):
SAB = Total[s /@ Eigenvalues[\[Rho]AB]]
0
SA = Total[s /@ Eigenvalues[\[Rho]A]]
1
SB = Total[s /@ Eigenvalues[\[Rho]B]]
1
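The same partial traces and entropies can be computed in a few lines of NumPy (a sketch re-implementing, for two qubits only, what the book's traceout and s helpers do in Mathematica):

```python
import numpy as np

# singlet state of two qubits and its reduced density matrices
up, dn = np.array([1.0, 0.0]), np.array([0.0, 1.0])
psi = (np.kron(up, dn) - np.kron(dn, up)) / np.sqrt(2)
rhoAB = np.outer(psi, psi.conj())

# partial traces: reshape to (a, b, a', b') and contract the traced-out index
rhoA = np.trace(rhoAB.reshape(2, 2, 2, 2), axis1=1, axis2=3)  # trace out B
rhoB = np.trace(rhoAB.reshape(2, 2, 2, 2), axis1=0, axis2=2)  # trace out A

def entropy(rho):
    """Von Neumann entropy in bits, dropping numerically zero eigenvalues."""
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return float(-(lam * np.log2(lam)).sum())

print(entropy(rhoAB), entropy(rhoA), entropy(rhoB))   # close to 0, 1, 1
```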

Q3.21 Don’t forget to complex-conjugate \[Psi]out on the left, for generality:


\[Psi]out = {1/Sqrt[2], 0, 0, 1/Sqrt[2]};
Table[Conjugate[\[Psi]out].(KroneckerProduct[PauliMatrix[i],PauliMatrix[j]].\[Psi]out),
{i,0,3}, {j,0,3}]
{{1, 0, 0, 0}, {0, 1, 0, 0}, {0, 0, -1, 0}, {0, 0, 0, 1}}

We find ⟨ψout|𝟙 ⊗ 𝟙|ψout⟩ = 1 (normalization), ⟨ψout|σ̂x ⊗ σ̂x|ψout⟩ = 1, ⟨ψout|σ̂y ⊗
σ̂y|ψout⟩ = −1, ⟨ψout|σ̂z ⊗ σ̂z|ψout⟩ = 1, and all others equal to zero. The density
matrix is therefore

ρ̂ = ¼(𝟙 ⊗ 𝟙 + σ̂x ⊗ σ̂x − σ̂y ⊗ σ̂y + σ̂z ⊗ σ̂z)
  = ½(|00⟩⟨00| + |00⟩⟨11| + |11⟩⟨00| + |11⟩⟨11|)
  = [(|00⟩ + |11⟩)/√2] [(⟨00| + ⟨11|)/√2] (12)

as expected.
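Equation (12) can also be verified numerically; this NumPy sketch sums the four nonzero Pauli correlators and compares with the projector onto (|00⟩ + |11⟩)/√2:

```python
import numpy as np

# NumPy check of Eq. (12): summing the measured Pauli correlators
# reconstructs the projector onto (|00> + |11>)/sqrt(2).
one = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

psi = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho = (np.kron(one, one) + np.kron(sx, sx)
       - np.kron(sy, sy) + np.kron(sz, sz)) / 4
print(np.allclose(rho, np.outer(psi, psi.conj())))    # -> True
```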
Q3.22 Replace In[390] and In[391] with
u = {1,1}/Sqrt[2];
U[\[Phi]_] = Exp[2\[Pi]*I*\[Phi]] * {{1,0},{0,1}};

and re-evaluate the attached Mathematica notebook. All results remain unchanged.
Q3.23 Replace In[390] and In[391] with
u = {1,1}/Sqrt[2];
U[\[Phi]_] = {{Exp[2\[Pi]*I*\[Phi]], 0}, {0, Exp[4\[Pi]*I*\[Phi]]}};

and re-evaluate the attached Mathematica notebook. The probabilities for the differ-
ent estimates of ϕ show both frequencies simultaneously, and there is no cross-talk
between them:
ListDensityPlot[Transpose[Table[prob[\[Phi]], {\[Phi],0,1,1/256}]],
 PlotRange->All, DataRange->{{0,1},{0,1-2^-t}},
 FrameLabel->{"setting \[Phi]","estimated \[Phi]"}]

Chapter 4 Quantum Motion in Real Space

Q4.1 Starting with the Schrödinger equation

[−ℏ²/(2m) ∫_{−∞}^{∞} dx |x⟩ (d²/dx²) ⟨x| + ∫_{−∞}^{∞} dx |x⟩ V(x) ⟨x|] |ψ⟩ = E|ψ⟩, (13)

we (i) leave away the bracket and (ii) multiply by ⟨y| from the left (y ∈ ℝ):

−ℏ²/(2m) ∫_{−∞}^{∞} dx ⟨y|x⟩ (d²/dx²) ⟨x|ψ⟩ + ∫_{−∞}^{∞} dx ⟨y|x⟩ V(x) ⟨x|ψ⟩ = E⟨y|ψ⟩ (14)

Remembering that ⟨y|x⟩ = δ(x − y) and ⟨x|ψ⟩ = ψ(x):

−ℏ²/(2m) ∫_{−∞}^{∞} dx δ(x − y) ψ''(x) + ∫_{−∞}^{∞} dx δ(x − y) V(x) ψ(x) = Eψ(y) (15)

Simplify the integrals with the Dirac δ-functions:

−ℏ²/(2m) ψ''(y) + V(y)ψ(y) = Eψ(y) (16)

Since this is valid for any y ∈ ℝ, it concludes the proof.

Q4.2

⟨ψ|χ⟩ = [∫_{−∞}^{∞} dx ψ*(x) ⟨x|] [∫_{−∞}^{∞} dy χ(y) |y⟩]
  = ∫_{−∞}^{∞} dx dy ψ*(x) χ(y) ⟨x|y⟩
  = ∫_{−∞}^{∞} dx dy ψ*(x) χ(y) δ(x − y)
  = ∫_{−∞}^{∞} dx ψ*(x) χ(x) (17)

Q4.3
a = m = \[HBar] = 1; (* natural units *)
nmax = 100;
TM = SparseArray[Band[{1,1}] -> Range[nmax]^2*\[Pi]^2*\[HBar]^2/(2*m*a^2)];
pM = SparseArray[{n1_,n2_}/;OddQ[n1-n2]->(4*I*\[HBar]*n1*n2)/(a*(n2^2-n1^2)),
 {nmax,nmax}];
TM //N //Eigenvalues //Sort
{4.9348, 19.7392, 44.4132, 78.9568, 123.37, 177.653, ..., 48366., 49348.}
pM.pM/(2m) //N //Eigenvalues //Sort
{4.8183, 4.8183, 43.3646, 43.3646, 120.457, 120.457, ..., 47257.4, 47257.4}

The eigenvalues of T̂ are quadratically spaced, whereas those of p̂²/(2m) come in
degenerate pairs (one involving only states of even n and one only states of odd n)
and thus never converge to the eigenvalues of T̂, even in the limit nmax → ∞.
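The pairing of eigenvalues is easy to reproduce in NumPy (a sketch with a = m = ℏ = 1, mirroring the matrices defined above):

```python
import numpy as np

# NumPy sketch: the spectrum of p.p/2 comes in near-degenerate pairs
# and differs from that of T.
nmax = 100
n = np.arange(1, nmax + 1)
T = np.diag(n**2 * np.pi**2 / 2)

n1, n2 = np.meshgrid(n, n, indexing="ij")
p = np.zeros((nmax, nmax), dtype=complex)
odd = (n1 - n2) % 2 == 1
p[odd] = 4j * n1[odd] * n2[odd] / (n2[odd]**2 - n1[odd]**2)

eT = np.sort(np.linalg.eigvalsh(T))
eP = np.sort(np.linalg.eigvalsh(p @ p / 2))
print(eT[:2], eP[:2])   # eP[0] and eP[1] nearly equal, both below eT[0]
```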
Q4.4 We use the more accurate form of the position operator from In[434]:
a = m = \[HBar] = 1; (* natural units *)
nmax = 20;
xM = SparseArray[{
 Band[{1,1}] -> a/2,
 {n1_,n2_} /; OddQ[n1-n2] -> -8*a*n1*n2/(\[Pi]^2*(n1^2-n2^2)^2)},
 {nmax,nmax}];
pM = SparseArray[{n1_,n2_}/;OddQ[n1-n2]->4*I*\[HBar]*n1*n2/(a*(n2^2-n1^2)),
 {nmax,nmax}];
coM = xM.pM - pM.xM; (* commutator [x,p] in the momentum basis *)
coM/\[HBar] //N //MatrixForm

In the upper-left corner (low values of n) the result looks like the unit matrix multiplied
by the imaginary unit i; but towards the lower-right corner (large values of n) it
deviates dramatically from the correct expression. This is to be expected from the
problematic nature of the momentum operator; see Sect. 4.1.6.
Q4.5 The exact probability is about 37.1%:
Integrate[\[Psi][1,x]^2, {x, 0, 1}] //N
0.37087

Using In[460], the first numerical method gives a good approximation of 37.0%:
Integrate[\[Psi]0[x]^2, {x, 0, 1}]
0.369801

Using In[477], the second numerical method gives an approximation of 36.2%:
Integrate[\[Psi]0[x]^2, {x, 0, 1}]
0.362126

Alternatively, we set up an interpolating function from the data of In[474], and


integrate it numerically. The result depends on the interpolation order: higher-order
interpolations tend to yield more accurate results.
\[Psi]0i1 = Interpolation[\[Gamma], InterpolationOrder -> 1];
NIntegrate[\[Psi]0i1[x]^2, {x, 0, 1}]
0.302899
\[Psi]0i2 = Interpolation[\[Gamma], InterpolationOrder -> 2];
NIntegrate[\[Psi]0i2[x]^2, {x, 0, 1}]
0.358003
\[Psi]0i3 = Interpolation[\[Gamma], InterpolationOrder -> 3];
NIntegrate[\[Psi]0i3[x]^2, {x, 0, 1}]
0.3812

Q4.6 From Eq. (4.31) the average height in state k is

⟨k|x̂|k⟩ = ∫_0^∞ dx |ψk(x)|² x = −αk · [4ℏ²/(27m²g)]^(1/3), (18)

which you can verify with In[448] and

Assuming[m>0&&g>0&&\[HBar]>0, Table[Integrate[\[Psi][k,x]^2*x, {x, 0, \[Infinity]}], {k, 1, 5}]]

For a neutron in earth's gravitational field this gives an average height of about 9 µm:
With[{k = 1,
 m = Quantity["NeutronMass"],
 g = Quantity["StandardAccelerationOfGravity"],
 \[HBar] = Quantity["ReducedPlanckConstant"]},
 UnitConvert[-AiryAiZero[k]*(4*\[HBar]^2/(27*m^2*g))^(1/3), "Micrometers"]]
9.147654 \[Mu]m
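A plain-Python cross-check of this number (the constants below are assumed CODATA-style values, and −2.33810741 is the first zero of the Airy function Ai):

```python
# cross-check of the ~9 micrometer average bounce height of a neutron
hbar = 1.054571817e-34    # J*s (assumed value)
m = 1.67492749804e-27     # neutron mass, kg (assumed value)
g = 9.80665               # standard gravitational acceleration, m/s^2
alpha1 = -2.33810741      # first zero of Ai(x)
x = -alpha1 * (4 * hbar**2 / (27 * m**2 * g)) ** (1 / 3)
print(x * 1e6)            # average height in micrometers, about 9.15
```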

Q4.7 The exact energy levels are E_n = ℏω(n + 1/2) with n ∈ ℕ₀.
In the given unit system, the mass is m = 1, Planck's constant is \[HBar] = 1, and
the angular frequency is \[Omega] = 1/\[HBar] = 1.
We set up a calculation in the position basis with the mixed-basis numerical
method:
a = 10; (* calculation box size *)
m = \[HBar] = \[Omega] = 1; (* natural units *)
nmax = 100;
\[CapitalDelta] = a/(nmax+1); (* grid spacing *)
xgrid = Range[nmax]*\[CapitalDelta]; (* the computational grid *)
TP = FourierDST[SparseArray[Band[{1,1}]->Range[nmax]^2*\[Pi]^2*\[HBar]^2/(2*m*a^2)], 1];
W[x_] = m*\[Omega]^2*(x-a/2)^2/2; (* the potential function, centered *)
Wgrid = Map[W, xgrid]; (* the potential on the computational grid *)
VP = SparseArray[Band[{1,1}] -> Wgrid];
HP = TP + VP;

We find the energy eigenvalues (in units of E = ℏω) with
Eigenvalues[HP] //Sort
{0.5, 1.5, 2.5, 3.5, 4.50001, 5.5001, 6.5006, 7.50293, 8.51147, 9.53657, ...}

and see that at least the lowest eigenvalues match the analytic expression. Using a
larger value of nmax will give more accurate eigenstates and eigenvalues.
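As an independent cross-check (a simple second-order finite-difference discretization in NumPy, not the book's sine-basis method), the same low-lying spectrum appears:

```python
import numpy as np

# finite-difference harmonic oscillator; box a = 10, m = hbar = omega = 1,
# potential centered at a/2, hard walls at the box edges
a, nmax = 10.0, 500
dx = a / (nmax + 1)                              # grid spacing
x = np.arange(1, nmax + 1) * dx                  # interior grid points
diag = 1.0 / dx**2 + (x - a / 2) ** 2 / 2        # kinetic + potential, diagonal
off = np.full(nmax - 1, -1.0 / (2 * dx**2))      # finite-difference coupling
H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
E = np.sort(np.linalg.eigvalsh(H))
print(E[:5])   # close to 0.5, 1.5, 2.5, 3.5, 4.5
```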
Q4.8 The excited-state Wigner distribution has a significant negative region around
its center:
gsP = Transpose[Sort[Transpose[-Eigensystem[-N[HP], 2,
 Method->{"Arnoldi", "Criteria"->"RealPart", MaxIterations->10^6}]]]];
WignerDistributionPlot[gsP[[2, 2]], {0, a}]

Q4.9
X = RandomReal[{0,1}, {2,2}]
{{0.580888, 0.80848}, {0.218175, 0.979598}}
Y = RandomReal[{0,1}, {2,2}]
{{0.448364, 0.774595}, {0.490198, 0.310169}}
X.Y - Y.X
{{0.227318, -0.420567}, {0.225597, -0.227318}}
MatrixExp[X + Y]
{{4.68326, 6.06108}, {2.71213, 5.68068}}
MatrixExp[X].MatrixExp[Y]
{{5.0593, 5.38209}, {3.10936, 5.31705}}

Q4.10 We use the split-step propagation code of Sect. 4.1.9 with the potential
a = m = \[HBar] = 1;
With[{W0 = 0 * \[HBar]^2/(m*a^2)},
 W[x_] = W0*Sin[10*\[Pi]*x/a];]

and the initial wavefunction


With[{x0=a/2, \[Sigma]=0.05*a, k=100/a},
 v0=Normalize[Function[x, E^(-((x-x0)^2/(4*\[Sigma]^2)))*E^(I*k*x)] /@ xgrid];]

For W0 = 0 the Gaussian wavepacket bounces back and forth between the simulation
boundaries and disperses slowly; the self-interference at the reflection points is clearly
visible:

For W0 = 5000 ℏ²/(m a²) the Gaussian wavepacket remains mostly trapped:

Q4.11
1. W(x) is a double-well potential with minima at x = 1/2 ± δ and a barrier height
of Ω:
W[{\[CapitalOmega]_, \[Delta]_}, x_] = \[CapitalOmega]*(((x-1/2)/\[Delta])^2-1)^2;
Table[W[{\[CapitalOmega],\[Delta]},x], {x,1/2-\[Delta],1/2+\[Delta],\[Delta]}]
{0, \[CapitalOmega], 0}
Table[D[W[{\[CapitalOmega],\[Delta]},y],y] /. y->x, {x,1/2-\[Delta],1/2+\[Delta],\[Delta]}]
{0, 0, 0}
Plot[W[{1, 1/4}, x], {x, 0, 1}]

2. As in Q4.10, we use the split-step propagation code of Sect. 4.1.9 with the
potential
With[{\[CapitalOmega] = 250, \[Delta] = 1/4},
W[x_] = W[{\[CapitalOmega], \[Delta]}, x];]

With the initial state from In[879] (Q4.10) with x0 = 0.2694, \[Sigma] = 0.0554,
k = 0, the time-dependent density is seen to oscillate between the wells:
With[{\[CapitalDelta]t = 20, M = 10^4},
 V = propApprox[\[CapitalDelta]t, M, v0];]
\[Rho] = ArrayPad[(nmax+1)*Abs[#[[2]]]^2& /@ V, {{0,0},{1,1}}];
ArrayPlot[Reverse[Transpose[\[Rho]]]]

This oscillation is apparent in the left/right probabilities:

ListLinePlot[{{#[[1]],Norm[#[[2,;;(nmax/2)]]]^2}& /@ V,
 {#[[1]],Norm[#[[2,nmax/2+1;;]]]^2}& /@ V}]

3. Now we use In[500] and observe that the attractive interactions prevent the
particle from tunneling between the wells:
With[{\[Kappa] = 0.5, \[CapitalDelta]t = 20, M = 10^4},
 V = propApprox[W[#1]&, \[Kappa], \[CapitalDelta]t, M, v0];]
\[Rho] = ArrayPad[(nmax+1)*Abs[#[[2]]]^2& /@ V, {{0,0},{1,1}}];
ArrayPlot[Reverse[Transpose[\[Rho]]]]

ListLinePlot[{{#[[1]],Norm[#[[2,;;(nmax/2)]]]^2}& /@ V,
 {#[[1]],Norm[#[[2,nmax/2+1;;]]]^2}& /@ V}]

Q4.12
1. We do this calculation in Mathematica, for the more general potential W(x) =
½ k(x − ½)²:
\[Zeta][x_] = E^(-((x-1/2)/(2*\[Sigma]))^2)/Sqrt[\[Sigma]*Sqrt[2*\[Pi]]];
Assuming[\[Sigma]>0, Integrate[\[Zeta][x]^2, {x, -\[Infinity], \[Infinity]}]]
1
Assuming[\[Sigma]>0,
 e = Integrate[\[Zeta][x]*(-1/2*\[Zeta]''[x] + 1/2*k*(x-1/2)^2*\[Zeta][x]), {x, -\[Infinity], \[Infinity]}]]
(1+4*k*\[Sigma]^4)/(8*\[Sigma]^2)
Solve[D[e, \[Sigma]] == 0, \[Sigma]]
{{\[Sigma] -> -1/(Sqrt[2]*k^(1/4))}, {\[Sigma] -> -I/(Sqrt[2]*k^(1/4))},
 {\[Sigma] -> I/(Sqrt[2]*k^(1/4))}, {\[Sigma] -> 1/(Sqrt[2]*k^(1/4))}}

Of these four solutions, we choose σ = (4k)^(−1/4) because it is real and positive:

e /. \[Sigma] -> (4*k)^(-1/4)
Sqrt[k]/2

For k = 5000, the ground state is therefore ζ(x) with σ = 20 000^(−1/4) ≈
0.084 089 6 and energy E = √5000/2 ≈ 35.3553.
2. We use the same code as in Sect. 4.2.1 but with the potential
With[{k = 5000},
 W[x_] = 1/2*k*(x-1/2)^2;]

Further, we use nmax = 1000 to describe the wavefunction with strongly attractive
interactions better. The result matches the Gaussian approximation: both the
energy and the chemical potential are approximately √k/2,
groundstate[10^-4, 0][[;;2]]
{35.3553, 35.3553}

3. Ground-state density for repulsive interactions:

With[{\[Kappa] = 100, \[Delta]\[Beta] = 10^-4},
 {Etot, \[Mu], \[Gamma]} = groundstate[\[Delta]\[Beta], \[Kappa]];
 ListLinePlot[Join[{{0, 0}}, Transpose[{xgrid,Abs[\[Gamma]]^2/\[CapitalDelta]}], {{a, 0}}],
  PlotRange -> All, PlotLabel -> {Etot, \[Mu]}]]

Ground-state density for no interactions:

With[{\[Kappa] = 0, \[Delta]\[Beta] = 10^-4},
 {Etot, \[Mu], \[Gamma]} = groundstate[\[Delta]\[Beta], \[Kappa]];
 ListLinePlot[Join[{{0, 0}}, Transpose[{xgrid,Abs[\[Gamma]]^2/\[CapitalDelta]}], {{a, 0}}],
  PlotRange -> All, PlotLabel -> {Etot, \[Mu]}]]

Ground-state density for attractive interactions:

With[{\[Kappa] = -100, \[Delta]\[Beta] = 10^-4},
 {Etot, \[Mu], \[Gamma]} = groundstate[\[Delta]\[Beta], \[Kappa]];
 ListLinePlot[Join[{{0, 0}}, Transpose[{xgrid,Abs[\[Gamma]]^2/\[CapitalDelta]}], {{a, 0}}],
  PlotRange -> All, PlotLabel -> {Etot, \[Mu]}]]

4. The energy and chemical potential differ for κ ≠ 0:

With[{\[Delta]\[Beta] = 10^-4},
 ListLinePlot[Transpose[Table[{{\[Kappa],groundstate[\[Delta]\[Beta],\[Kappa]][[1]]},
  {\[Kappa],groundstate[\[Delta]\[Beta],\[Kappa]][[2]]}}, {\[Kappa], -100, 100, 10}]]]]
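The variational result of item 1 can be spot-checked in plain Python:

```python
# sigma = (4k)^(-1/4) minimizes the variational energy
# e(sigma) = (1 + 4*k*sigma^4)/(8*sigma^2), with minimum sqrt(k)/2
k = 5000
e = lambda s: (1 + 4 * k * s**4) / (8 * s**2)
sigma = (4 * k) ** (-1 / 4)
print(sigma, e(sigma))    # close to 0.0840896 and 35.3553
```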

Q4.13 We do this calculation for a = 1; the prefactor a⁻¹ of the right-hand side of
Eq. (4.65) can be found by a variable substitution x → a x. From the momentum
basis functions
\[Phi][n_, x_] = Sqrt[2]*Sin[n*\[Pi]*x];

we define the position basis functions


\[Theta][nmax_, j_, x_] := 1/Sqrt[nmax+1]*Sum[\[Phi][n,j/(nmax+1)]*\[Phi][n,x], {n,nmax}]

The exact overlap integrals are


J[nmax_, {j1_,j2_,j3_,j4_}] :=
Integrate[\[Theta][nmax,j1,x]*\[Theta][nmax,j2,x]*\[Theta][nmax,j3,x]*\[Theta][nmax,j4,x], {x,0,1}]

We make a table of overlap integrals, calculated both exactly and approximately


through In[504], and show that the difference is zero (up to numerical inaccura-
cies):

With[{nmax = 3},
 A = Table[J[nmax, {j1,j2,j3,j4}], {j1,nmax},{j2,nmax},{j3,nmax},{j4,nmax}];
 B = FourierDST[Table[KroneckerDelta[n1+n2,n3+n4]
  +KroneckerDelta[n1+n3,n2+n4]+KroneckerDelta[n1+n4,n2+n3]
  -KroneckerDelta[n1,n2+n3+n4]-KroneckerDelta[n2,n1+n3+n4]
  -KroneckerDelta[n3,n1+n2+n4]-KroneckerDelta[n4,n1+n2+n3],
  {n1,nmax}, {n2,nmax}, {n3,nmax}, {n4,nmax}],1]/2;]
A - B //Abs //Max
8.88178*10^-16

Q4.14 We define memoizing functions that calculate ⟨x̂1 − x̂2⟩ and ⟨(x̂1 − x̂2)²⟩
with
Clear[\[CapitalDelta]1,\[CapitalDelta]2];
\[CapitalDelta]1[\[Kappa]_?NumericQ] := \[CapitalDelta]1[\[Kappa]] =
With[{\[Gamma]=gs[0,\[Kappa],1][[2,1]]}, Re[Conjugate[\[Gamma]].((x1-x2).\[Gamma])]]
\[CapitalDelta]2[\[Kappa]_?NumericQ] := \[CapitalDelta]2[\[Kappa]] =
With[{\[Gamma]=gs[0,\[Kappa],1][[2,1]]}, Re[Conjugate[\[Gamma]].((x1-x2).(x1-x2).\[Gamma])]]

The mean distance in the ground state is zero for symmetry reasons (notice the
numerical inaccuracies):
ListLinePlot[Table[{\[Kappa], \[CapitalDelta]1[\[Kappa]]}, {\[Kappa], -25, 25, 1}]]

The variance of the distance in the ground state increases with κ:


ListLinePlot[Table[{\[Kappa], \[CapitalDelta]2[\[Kappa]]-\[CapitalDelta]1[\[Kappa]]^2}, {\[Kappa], -25, 25, 1}]]

Q4.15 We show that all three terms of the Hamiltonian commute with the particle
interchange operator:
\[CapitalXi].TP - TP.\[CapitalXi] //Norm
0.
\[CapitalXi].VP - VP.\[CapitalXi] //Norm
0
\[CapitalXi].HintP - HintP.\[CapitalXi] //Norm
1.15903*10^-14

Q4.16
D[Normal[HPa[\[CapitalOmega], \[Kappa]]], \[Kappa]] //Abs //Max
1.03528*10^-15

Q4.17 We define the wavefunctions ψ1 (x1 , x2 ) for x1 < x2 and ψ2 (x1 , x2 ) for
x1 > x2 :
\[Psi]1[x1_,x2_] = A*(Cos[\[Alpha]*(x1+x2-1)]*Cos[\[Beta]*(x1-x2+1)]
-Cos[\[Alpha]*(x1-x2+1)]*Cos[\[Beta]*(x1+x2-1)]);
\[Psi]2[x1_,x2_] = \[Psi]1[x2,x1];

1. Check the boundary conditions ψ(x1, 0) = ψ(x1, 1) = ψ(0, x2) = ψ(1, x2) = 0:
{\[Psi]2[x1,0], \[Psi]1[x1,1], \[Psi]1[0,x2], \[Psi]2[1,x2]} //FullSimplify
{0, 0, 0, 0}

Check that the two pieces match up for x1 = x2 :


\[Psi]1[x,x] == \[Psi]2[x,x]
True

Check the symmetries of the wavefunction:


\[Psi]1[x1,x2] == \[Psi]2[x2,x1] == \[Psi]2[1-x1,1-x2] //FullSimplify
True

2. Check that the two pieces of the wavefunction satisfy the Schrödinger equation
whenever x1 ≠ x2, with energy value E = α² + β²:
-1/2*(D[\[Psi]1[x1,x2],{x1,2}]+D[\[Psi]1[x1,x2],{x2,2}]) ==
 (\[Alpha]^2+\[Beta]^2)*\[Psi]1[x1,x2] //FullSimplify
True
-1/2*(D[\[Psi]2[x1,x2],{x1,2}]+D[\[Psi]2[x1,x2],{x2,2}]) ==
 (\[Alpha]^2+\[Beta]^2)*\[Psi]2[x1,x2] //FullSimplify
True

3. The transformed Hamiltonian is

Ĥ = [−(1/2) ∂²/∂R²] + [−(1/2) ∂²/∂r² + (κ/√2) δ(r)] (19)

and the transformed wavefunctions are


\[Psi]1[(R+r)/Sqrt[2],(R-r)/Sqrt[2]] //FullSimplify
A*(Cos[\[Alpha]*(R*Sqrt[2]-1)]*Cos[\[Beta]*(r*Sqrt[2]+1)]
-Cos[\[Beta]*(R*Sqrt[2]-1)]*Cos[\[Alpha]*(r*Sqrt[2]+1)])
\[Psi]2[(R+r)/Sqrt[2],(R-r)/Sqrt[2]] //FullSimplify
A*(Cos[\[Alpha]*(R*Sqrt[2]-1)]*Cos[\[Beta]*(r*Sqrt[2]-1)]
-Cos[\[Beta]*(R*Sqrt[2]-1)]*Cos[\[Alpha]*(r*Sqrt[2]-1)])

with

ψ(R, r) = ψ1(R, r) if r < 0,  ψ(R, r) = ψ2(R, r) if r > 0. (20)

4. The Schrödinger equation in (R, r ) coordinates is


   
1 ∂2 1 ∂2 κ
− ψ(R, r ) + − + √ δ(r ) ψ(R, r ) = (α 2 + β 2 )ψ(R, r )
2 ∂ R2 2 ∂r 2 2
(21)
We integrate Eq. (21) over r ∈ [−, ]:

     
1  ∂ 2 ψ(R, r ) 1  ∂ 2 ψ(R, r ) κ
− dr − dr +√ dr δ(r )ψ(R, r ) = (α 2 + β 2 ) dr ψ(R, r ) (22)
2 − ∂R 2 2 − ∂r 2 2 − −

Using partial integration on the second term of the left-hand side:

−(1/2) ∫_{−ε}^{ε} dr ∂²ψ(R, r)/∂R² − (1/2) [∂ψ(R, r)/∂r |_{r=ε} − ∂ψ(R, r)/∂r |_{r=−ε}] + (κ/√2) ψ(R, 0) = (α² + β²) ∫_{−ε}^{ε} dr ψ(R, r) (23)

In the limit ε → 0⁺ this equation becomes

−(1/2) [∂ψ2(R, r)/∂r |_{r=0} − ∂ψ1(R, r)/∂r |_{r=0}] + (κ/√2) ψ(R, 0) = 0 (24)

Inserting the definitions of ψ1 and ψ2 :


-1/2*((D[\[Psi]2[(R+r)/Sqrt[2],(R-r)/Sqrt[2]],r]/.r->0)
- (D[\[Psi]1[(R+r)/Sqrt[2],(R-r)/Sqrt[2]],r]/.r->0))
+ \[Kappa]/Sqrt[2]*\[Psi]1[R/Sqrt[2],R/Sqrt[2]] //FullSimplify
A*(Cos[\[Beta]*(R*Sqrt[2]-1)]*(2\[Alpha]*Sin[\[Alpha]]-\[Kappa]*Cos[\[Alpha]])
-Cos[\[Alpha]*(R*Sqrt[2]-1)]*(2\[Beta]*Sin[\[Beta]]-\[Kappa]*Cos[\[Beta]]))/Sqrt[2]


The only way that this expression can be zero for all values of R ∈ [0, √2]
is if 2α sin(α) − κ cos(α) = 2β sin(β) − κ cos(β) = 0, and hence if α tan(α) =
β tan(β) = κ/2.
5. See the attached Mathematica notebook ContactInteraction.nb.

Q4.18 We solve this problem with the code of Sect. 4.3.1, in the same way as Q4.14.
The interaction potential is, according to Eq. (4.71),

With[{\[Delta]=\[CapitalDelta]},
Q[x_] = Piecewise[{{1/Abs[x],Abs[x]>\[Delta]}, {1/\[Delta],Abs[x]<=\[Delta]}}];]

and the interaction Hamiltonian Ĥ_int ≈ κ/|x̂1 − x̂2| is approximately


HintP = SparseArray[{j1_,j1_,j2_,j2_} :> Q[xgrid[[j1]]-xgrid[[j2]]],
{nmax,nmax,nmax,nmax}] //ArrayFlatten;

With these definitions, the energy levels are (with a = m = \[HBar] = 1)

We see that the lowest energy level is always symmetric under particle exchange
(colored in red); the bosonic ground state is therefore just the lowest energy level.
The expectation value ⟨x̂1 − x̂2⟩ is zero by symmetry; its variance is

Q4.19 We see in the answer of Q4.18 that the lowest fermionic state (blue) depends
on the coupling strength κ. In the spirit of Sect. 4.3.1 we define the fermionic Hamil-
tonian with In[529] and calculate the fermionic ground state with In[533]. The
expectation values of x̂1 − x̂2 and (x̂1 − x̂2 )2 are calculated from the antisymmetric
ground state with
Clear[F\[CapitalDelta]x, F\[CapitalDelta]x2];
F\[CapitalDelta]x[\[Kappa]_?NumericQ] := F\[CapitalDelta]x[\[Kappa]] =
With[{\[Gamma]=ags[0,\[Kappa],1][[2,1]]}, Re[Conjugate[\[Gamma]].((x1-x2).\[Gamma])]]
F\[CapitalDelta]x2[\[Kappa]_?NumericQ] := F\[CapitalDelta]x2[\[Kappa]] =
With[{\[Gamma]=ags[0,\[Kappa],1][[2,1]]}, Re[Conjugate[\[Gamma]].((x1-x2).(x1-x2).\[Gamma])]]

The expectation value ⟨x̂1 − x̂2⟩ is zero by symmetry; its variance is larger than that
for bosons:

Q4.20 The expectation values are the usual ones of the harmonic oscillator, given
by

⟨x²⟩ = ℏ/(2mωx),  ⟨y²⟩ = ℏ/(2mωy),  ⟨z²⟩ = ℏ/(2mωz). (25)

They are independent in the three Cartesian directions.


Q4.21 We calculate the integral over the density in Cartesian coordinates by integrating only over the ellipsoid in which the density is nonzero:
A = Assuming[Rx>0 && Ry>0 && Rz>0,
 Integrate[\[Rho]0*(1-(x/Rx)^2-(y/Ry)^2-(z/Rz)^2),
  {x, -Rx, Rx},
  {y, -Ry*Sqrt[1-(x/Rx)^2], Ry*Sqrt[1-(x/Rx)^2]},
  {z, -Rz*Sqrt[1-(x/Rx)^2-(y/Ry)^2], Rz*Sqrt[1-(x/Rx)^2-(y/Ry)^2]}]]
8/15*\[Pi]*Rx*Ry*Rz*\[Rho]0

Similarly, we calculate the integral of the density times x² with

B = Assuming[Rx>0 && Ry>0 && Rz>0,
 Integrate[x^2 * \[Rho]0*(1-(x/Rx)^2-(y/Ry)^2-(z/Rz)^2),
  {x, -Rx, Rx},
  {y, -Ry*Sqrt[1-(x/Rx)^2], Ry*Sqrt[1-(x/Rx)^2]},
  {z, -Rz*Sqrt[1-(x/Rx)^2-(y/Ry)^2], Rz*Sqrt[1-(x/Rx)^2-(y/Ry)^2]}]]
8/105*\[Pi]*Rx^3*Ry*Rz*\[Rho]0

The expectation value ⟨x²⟩ is the ratio of these two integrals,

B/A
Rx^2/7
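The ratio can also be checked with exact rational arithmetic in plain Python: after rescaling to the unit ball the density is (1 − r²), by symmetry ⟨u²⟩ = ⟨r²⟩/3, and the 8π/15 and 8π/105 angular prefactors cancel in the ratio of radial integrals:

```python
from fractions import Fraction

# exact-arithmetic check of <x^2> = Rx^2/7 on the unit ball
num = Fraction(1, 3) * (Fraction(1, 5) - Fraction(1, 7))   # ~ (1/3) * integral of r^2 (1-r^2) r^2 dr
den = Fraction(1, 3) - Fraction(1, 5)                      # ~ integral of (1-r^2) r^2 dr
print(num / den)   # -> 1/7
```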

With the value of Rx given in Eq. (4.79a), this becomes



⟨x²⟩ = (1/7) [15ℏ² as(N − 1) ωy ωz/(m² ωx⁴)]^(2/5) = (1/7) [15κ(N − 1) ωy ωz/(4π m ωx⁴)]^(2/5), (26a)
⟨y²⟩ = (1/7) [15ℏ² as(N − 1) ωx ωz/(m² ωy⁴)]^(2/5) = (1/7) [15κ(N − 1) ωx ωz/(4π m ωy⁴)]^(2/5), (26b)
⟨z²⟩ = (1/7) [15ℏ² as(N − 1) ωx ωy/(m² ωz⁴)]^(2/5) = (1/7) [15κ(N − 1) ωx ωy/(4π m ωz⁴)]^(2/5). (26c)


We see that, in contrast to Q4.20, the expectation values of the three Cartesian
directions are not independent of each other’s trapping frequencies.
Q4.22 We plot the second moments of Eqs. (25) and (26a) as functions of the particle
number N :

The values of ⟨z²⟩ are equal to those of ⟨y²⟩ because of the cylindrical symmetry of
the problem. The crossover point where the Thomas–Fermi second moment is equal
to the noninteracting second moment is at

N̄x = [49/(60 as ωy ωz)] √(7ℏωx³/(2m)) + 1,  N̄y = [49/(60 as ωx ωz)] √(7ℏωy³/(2m)) + 1,  N̄z = [49/(60 as ωx ωy)] √(7ℏωz³/(2m)) + 1, (27)

indicated with vertical lines in the above plot. The noninteracting limit, Eq. (25), is
good for N ≲ 10. The Thomas–Fermi limit, Eq. (26a), is good for N ≳ 5000. Notice
that for N ≳ 3000 the numeric value of ⟨x²⟩ deviates from the Thomas–Fermi limit
because of the finite size of the calculation box.

Chapter 5 Combining Spatial Motion and Spin

Q5.1 The operators for these expectation values are


1. A1 = KroneckerProduct[xop,\[CapitalPi]↑ ] = KroneckerProduct[xop,ids/2+sz]
2. A2 = KroneckerProduct[xop,\[CapitalPi]↓ ] = KroneckerProduct[xop,ids/2-sz]
3. A3 = KroneckerProduct[xop,ids] = A1+A2
4. A4 = KroneckerProduct[xop,sz] = (A1-A2)/2

With these we evaluate the quantities


1. Re[Conjugate[\[Gamma]].(A1.\[Gamma])]
2. Re[Conjugate[\[Gamma]].(A2.\[Gamma])]
3. Re[Conjugate[\[Gamma]].(A3.\[Gamma])]
4. Re[Conjugate[\[Gamma]].(A4.\[Gamma])] for the mean
Re[Conjugate[\[Gamma]].(A4.A4.\[Gamma])-(Conjugate[\[Gamma]].(A4.\[Gamma]))ˆ2] for
the variance

Q5.2 The ordering of the subspaces of the Hilbert space is what matters here. We
have defined the Hilbert space to be a tensor product of the x, y, and spin degrees
of freedom, in this order. In In[629] the operators p̂x and p̂ y are distinguished by
the position in the Kronecker product in which pM appears.
Q5.3 We do the first two checks of Q3.2:

1. [Ŝx, Ŝy] = [(|e⟩⟨g| + |g⟩⟨e|)/2, (|e⟩⟨g| − |g⟩⟨e|)/(2i)]
 = [(|e⟩⟨g| + |g⟩⟨e|)(|e⟩⟨g| − |g⟩⟨e|) − (|e⟩⟨g| − |g⟩⟨e|)(|e⟩⟨g| + |g⟩⟨e|)]/(4i)
 = [(−|e⟩⟨e| + |g⟩⟨g|) − (|e⟩⟨e| − |g⟩⟨g|)]/(4i)
 = (|g⟩⟨g| − |e⟩⟨e|)/(2i) = i (|e⟩⟨e| − |g⟩⟨g|)/2 = i Ŝz (28)

[Ŝy, Ŝz] = [(|e⟩⟨g| − |g⟩⟨e|)/(2i), (|e⟩⟨e| − |g⟩⟨g|)/2]
 = [(|e⟩⟨g| − |g⟩⟨e|)(|e⟩⟨e| − |g⟩⟨g|) − (|e⟩⟨e| − |g⟩⟨g|)(|e⟩⟨g| − |g⟩⟨e|)]/(4i)
 = [(−|e⟩⟨g| − |g⟩⟨e|) − (|e⟩⟨g| + |g⟩⟨e|)]/(4i)
 = −(|e⟩⟨g| + |g⟩⟨e|)/(2i) = i (|e⟩⟨g| + |g⟩⟨e|)/2 = i Ŝx (29)

[Ŝz, Ŝx] = [(|e⟩⟨e| − |g⟩⟨g|)/2, (|e⟩⟨g| + |g⟩⟨e|)/2]
 = [(|e⟩⟨e| − |g⟩⟨g|)(|e⟩⟨g| + |g⟩⟨e|) − (|e⟩⟨g| + |g⟩⟨e|)(|e⟩⟨e| − |g⟩⟨g|)]/4
 = [(|e⟩⟨g| − |g⟩⟨e|) − (−|e⟩⟨g| + |g⟩⟨e|)]/4
 = (|e⟩⟨g| − |g⟩⟨e|)/2 = i (|e⟩⟨g| − |g⟩⟨e|)/(2i) = i Ŝy (30)

2. Ŝx² + Ŝy² + Ŝz² = [(|e⟩⟨g| + |g⟩⟨e|)/2]² + [(|e⟩⟨g| − |g⟩⟨e|)/(2i)]² + [(|e⟩⟨e| − |g⟩⟨g|)/2]²
 = (|e⟩⟨e| + |g⟩⟨g|)/4 + (|e⟩⟨e| + |g⟩⟨g|)/4 + (|e⟩⟨e| + |g⟩⟨g|)/4
 = (3/4)(|g⟩⟨g| + |e⟩⟨e|) = (3/4) 𝟙, and hence S = 1/2. (31)

Q5.4 Ŝ⁺ = Ŝx + iŜy = (|e⟩⟨g| + |g⟩⟨e|)/2 + i (|e⟩⟨g| − |g⟩⟨e|)/(2i) = (|e⟩⟨g| + |g⟩⟨e|)/2
+ (|e⟩⟨g| − |g⟩⟨e|)/2 = |e⟩⟨g|. Ŝ⁻ = Ŝx − iŜy = (|e⟩⟨g| + |g⟩⟨e|)/2 − (|e⟩⟨g| − |g⟩⟨e|)/2
= |g⟩⟨e|. We can see that Ŝ⁺ is the operator that excites the atom (Ŝ⁺|g⟩ = |e⟩) and
Ŝ⁻ is the operator that deexcites the atom (Ŝ⁻|e⟩ = |g⟩).
Q5.5 [X̂, P̂] = X̂P̂ − P̂X̂ = [(â + â†)/√2] [(â − â†)/(i√2)] − [(â − â†)/(i√2)] [(â + â†)/√2]
= [(ââ − ââ† + â†â − â†â†) − (ââ + ââ† − â†â − â†â†)]/(2i) = i[â, â†] = i.
Q5.6 Cavity field: â†â = [(X̂ − iP̂)/√2] [(X̂ + iP̂)/√2] = (X̂² + P̂² + i[X̂, P̂])/2 =
(X̂² + P̂² − 1)/2, and hence â†â + 1/2 = (1/2)P̂² + (1/2)X̂². Coupling: Ŝ⁺â + â†Ŝ⁻ =
(Ŝx + iŜy)(X̂ + iP̂)/√2 + [(X̂ − iP̂)/√2](Ŝx − iŜy) =
(ŜxX̂ + iŜxP̂ + iŜyX̂ − ŜyP̂ + X̂Ŝx − iX̂Ŝy − iP̂Ŝx − P̂Ŝy)/√2. Since the operators on the field and atom
degrees of freedom commute (for example, [X̂, Ŝx] = [X̂ ⊗ 𝟙, 𝟙 ⊗ Ŝx] = 0), this
becomes Ŝ⁺â + â†Ŝ⁻ = (X̂Ŝx + iP̂Ŝx + iX̂Ŝy − P̂Ŝy + X̂Ŝx − iX̂Ŝy − iP̂Ŝx − P̂Ŝy)/√2 = √2 (X̂Ŝx − P̂Ŝy).
Index

A
Airy function, 103
Angular momentum, 51, 52

B
Basis set, 34, 44
  construction, 42
  finite-resolution position basis, 96
  incomplete, 35
  momentum basis, 94
  position basis, 92
Bohr magneton, 54, 57
Boltzmann constant, 119
Bose–Einstein condensate, 117, 130
Boson, 122, 126

C
C, 7, 8, 21
Chemical potential, 117
Clebsch–Gordan coefficient, 3, 67
Completeness relation, 34, 35, 92
Contact interaction, 122
Correlations, 75
Coulomb interaction, 130
  truncated, 130

D
Decoherence, 61
Density matrix, 108
  reduced, see partial trace
Detuning, 64
Dicke states, 42, 43, 52
Discrete Fourier transform, 85
Discrete sine transform, 98, 132
Double slit experiment, 33

E
Electron, 54
Energy gap, 72
Entanglement, 77
Entropy of entanglement, 77

F
Fast Fourier transform, 85, 98
Fermion, 122, 126
Fock basis, 149
Fortran, 21

G
g-factor, 54, 57
Gravitational acceleration, 103
Gravity well, 102
Gross–Pitaevskii equation, see Schrödinger equation, non-linear

H
Harmonic oscillator, 43
Heisenberg model, 79
Heisenberg principle, see uncertainty principle
Hilbert space, 34, 35, 37, 42, 56, 72, 138
Hydrogen, 43
Hyperfine interaction, 57

I
Imaginary-time propagation, 119
Interaction, 43, 122
Interaction picture, 40
Ising model, 68, 79

J
Java, 7, 8, 21
Jaynes–Cummings model, 148

K
Kinetic energy, see operator, kinetic

L
Lagrange multiplier, 117
Lanczos algorithm, 24
Level shift, 67
Light shift, 67

M
Magnetic field, 54, 56
Magnetization, 74
Magnus expansion, 38
Mandelbrot set, 3
Mathematica, 1
  anonymous function, 9, 19, 108
  assumptions, 29
  brackets, 5, 21, 22
  complex number, 28
  conditional execution, 8
  debugging, 17
  delayed assignment, 5, 12
  differential equation, 63
  evaluation, 17
  factorial, 19
  fixed point of a map, 120
  front end, 2
  full form, 18, 29
  function, 6, 11
  functional programming, 9, 19, 20, 132
  immediate assignment, 4, 11
  kernel, 2
  Kronecker product, 44, 45, 57, 70, 124
  list, 6, 21
  loop, 7, 19, 20
  matrix, 22, 35
    eigenvalues, 24, 55, 58
    eigenvectors, 24, 55, 58
    exponential, 39, 41, 42
    identity matrix, 46, 53
    matrix exponential, 110
    printing, 7, 22, 53
    sparse matrix, 23, 53
  minimization, 61
  module, 9
  nesting function calls, 112
  numerical evaluation, 7
  outer product, 132
  pattern, 11, 13, 20, 23
    alternative, 78
  physical units, 29
  plotting, 58, 63, 124
  postfix notation, 6
  prefix notation, 6
  random number, 4, 12
  recursion, 19, see also recursion
  remembering results, 12
  replacements, 15
  rules, 15
  saving definitions, 13
  timing a calculation, 7, 53
  tracing, 17
  units, see physical units
  variable, 4
  vector, 21, 35
    normalize, 120
    orthogonal vectors, 58
Matlab, 21
Mean-field interaction, 117
Memoization, 12
Momentum operator, see operator, momentum
Moore's law, 72

N
Nuclear spin, 57
Nyquist frequency, 108

O
Operator, 34, 45
  kinetic, 43, 95, 146
  momentum, 101, 146, 150
  position, 150
  potential, 43
Oscillating field, 61

P
Partial trace, 46, 77, 143
Path integral, 72
Pauli matrices, 36, 53, 54
Planck's constant, 57, 131
Plane wave, 43
Potential energy, see operator, potential
Product state, 45, 70
Propagator, 38, 110
Pseudospin, 152
Pseudovector, 52
Python, 7, 21

Q
Quantum circuit, 80
Quantum Fourier transform, 85
Quantum gate, 80
Quantum information, 61
Quantum phase estimation, 87
Quantum phase transition, 73
Quantum state, 34, 44
Quantum state tomography, 84
Qubit, 61, 80

R
Rabi frequency, 65
Rashba coupling, 145
Real-space dynamics, 92, 137
Reciprocal lattice, 43
Reduced density matrix, see partial trace
Rotating-wave approximation, 64
Rotation, 53
Rubidium-87, 56, 131
  magic field, 61

S
s-wave scattering, 117, 123, 131
Schrödinger equation
  non-linear, 116, 117
  time-dependent, 38, 40, 62, 109, 115
  time-independent, 36, 55
Spherical harmonics, 43
Spin, 43, 51, 56, 137
Split-step method, 99, 112, 115, 118
Square well, 43
Stark shift
  ac, 67
Stern–Gerlach experiment, 33
Sturm–Liouville theorem, 95

T
Tensor, 26
  contraction, 27, 47
  product, 43, 57, 69, 123, 137
Tensor networks, 72
Thomas–Fermi approximation, 135
Transition matrix elements, 62
Trotter expansion, 110, 116

U
Uncertainty principle, 33

V
Von Neumann entropy, 77

W
Wigner distribution, 107, 151
Wolfram language, 1

X
XY model, 79

Z
Zeeman shift
  ac, 67
  dc, 60