Statistical Mechanics
K. P. N. Murthy,
Adjunct Faculty, School of Basic Sciences
Whatever happened,
happened for good.
Whatever is happening,
is happening for good.
Whatever will happen,
will happen for good.
Bhagavad Gita
"· · · Ludwig Boltzmann, who spent much of his life studying sta-
tistical mechanics, died in 1906, by his own hand. Paul Ehrenfest,
carrying on the work, died similarly in 1933. Now it is our turn ....
to study statistical mechanics. Perhaps it will be wise to approach
the subject rather cautiously.· · · "
David Goodstein, States Matter, Dover (1975) (opening lines)
All models are wrong, some are useful.
George E. P. Box
Contents

Quotes
Contents
List of Figures

1 Micro-Macro Synthesis
1.1 Aim of Statistical Mechanics
1.2 Micro World : Determinism and Time-Reversal Invariance
1.3 Macro World : Thermodynamics
1.4 Books
1.5 Extra Reading : Books
1.6 Extra Reading : Papers

2 Maxwell's Mischief
2.1 Experiment and Outcomes
2.2 Sample space and events
2.3 Probabilities
2.4 Rules of probability
2.5 Random variable
2.6 Maxwell's mischief : Ensemble
2.7 Calculation of probabilities from an ensemble
2.8 Construction of ensemble from probabilities
2.9 Counting of the elements in events of the sample space : Coin tossing
2.10 Gibbs ensemble
2.11 Why should a Gibbs ensemble be large ?
12 Assignments
12.1 Assignment-1
12.2 Assignment-2
12.3 Assignment-3
12.4 Assignment-4
12.5 Assignment-5
12.6 Assignment-6
12.7 Assignment-8
12.8 Assignment-9

Index
List of Figures
3.1 Binomial distribution : B(n; N) = [N!/(n! (N − n)!)] p^n (1 − p)^(N−n) with N = 10; B(n; N) versus n; depicted as sticks; (Left) p = 0.5; (Right) p = 0.35.
3.2 Poisson distribution with mean µ, depicted as sticks. Gaussian distribution with mean µ and variance σ² = µ, depicted by a continuous line. (Left) µ = 1.5; (Right) µ = 9.5. For large µ, Poisson and Gaussian coincide.
4.1 Two ways of keeping a particle in a box divided into two equal parts. ε = V/2.
4.2 Four ways of keeping two distinguishable particles in a box divided into two equal halves. ε = V/2.
4.3 f(x; ε) versus x. (Top Left) ε = 2; (Top Right) ε = 1; (Bottom Left) ε = 1/2; (Bottom Right) Θ(x).
4.4 g(x; ε) = df/dx versus x. (Top Left) ε = 2; (Top Right) ε = 1; (Bottom Left) ε = 1/2; (Bottom Right) ε = 1/4.
4.5 µ versus T for a classical ideal gas. µ = −(3T/2) ln(T). We have set kB = h = ρ/(2πm)^(3/2) = 1.
1 Micro-Macro Synthesis
1.1 Aim of Statistical Mechanics
Statistical mechanics provides a theoretical bridge that takes you from the micro world¹ to the macro world². The chief architects of the bridge were Ludwig Eduard Boltzmann, James Clerk Maxwell, Josiah Willard Gibbs and Albert Einstein.
Statistical Mechanics gives us a machinery to derive the macroscopic properties of an object from the properties of its microscopic constituents and the interactions amongst them. It tries to provide a theoretical basis for empirical thermodynamics, in particular for the time-asymmetric Second law, starting from the time-symmetric microscopic laws of classical and quantum mechanics.
Which constituents count as microscopic, and which object as macroscopic, depends crucially on the object under study and the properties under investigation. For example,
• if we are interested in properties like density, pressure, temperature etc.
of a bottle of water, then the molecules of water are the microscopic
constituents; the bottle of water is the macroscopic object.
¹ The micro world of Newton, Schrödinger, Maxwell, and a whole lot of other scientists who developed the subjects of classical mechanics, quantum mechanics, and electromagnetism.
² The macro world of Clausius, Kelvin, Helmholtz and several others responsible for the birth and growth of thermodynamics.
Statistical mechanics asserts that if you know the properties of the microscopic constituents and how they interact with each other, then you can make predictions about the macroscopic behaviour of the object.
The first and the most important link between the micro(scopic) and the macro(scopic) worlds is given by

S = kB ln Ω̂(E, V, N).    (1.1)
³ Engraved on the tomb of Ludwig Eduard Boltzmann (1844-1906) in the Zentralfriedhof, Vienna.
⁴ In thermodynamics the definition of entropy comes from the equation dS = đQ/T, where dS is the change in entropy of a macroscopic system when it transacts đQ of energy by heat in an infinitesimal process during which the temperature remains constant at T. đQ is not an exact differential; the heat transacted depends on the process. However dS is an exact differential, and the entropy S is a property of the system.
⁵ For example, an ordered set of six numbers, three positions and three momenta, specifies a single particle. An ordered set of 6N numbers specifies a macroscopic system of N particles. The string labels a micro state.
⁶ kB = 1.381 × 10^(−23) joules (kelvin)^(−1). We have kB = R/A, where R = 8.314 joules (kelvin)^(−1) (mole)^(−1) is the universal gas constant and A = 6.022 × 10^(23) (mole)^(−1) is the Avogadro number.
⁷ Note that the quantity ln Ω̂ is a pure number. Clausius' entropy is measured in units of joules/kelvin. Hence the Boltzmann constant kB has units joules/kelvin. It is a conversion factor that helps you go from one unit of energy (joule) to another unit of energy (kelvin).
The entropy can also be expressed in terms of the probabilities of the micro states:

S = −kB Σ_i p_i ln p_i.    (1.2)

The micro states are labelled by i and the sum is taken over all the micro states of the macroscopic system; the probability of micro state i is denoted by p_i.
An interesting issue : Normally we resort to the use of probability only when we have inadequate data about the system or incomplete knowledge of the phenomenon under study. Such an enterprise, hence, is of a sort tied to our ignorance. Keeping track of the positions and momenta of some 10^30 molecules via Newton's equations is undoubtedly an impossible task. It was indeed Boltzmann who correctly identified that macroscopic phenomena are tailor-made for a statistical description. It is one thing to employ statistics as a convenient tool to study macroscopic phenomena, but it is quite another thing to attribute an element of truth to such a description. But this is precisely what we are doing in Eq. (1.2), where we express entropy, which is a property of an object, in terms of probabilities, which are related to the ignorance of the observer. It is definitely a bit awkward to think that a property of an object is determined by what we know or what we do not know about it! But remember that in quantum mechanics the observer, the observed, and the observation are all tied together : the act of measurement anchors the system, at the time of observation, in one of the eigenstates of the observable; we can at best make some probabilistic statements about the eigenstate into which the wave function of the system would collapse. For a discussion of this subtle issue see the beautiful tiny book of Carlo Rovelli⁸.
Two important entities we come across in thermodynamics are heat and work. They are the two means by which a thermodynamic system transacts energy with its surroundings or with another thermodynamic system. The equation that provides a microscopic description of heat is

q = Σ_i E_i dp_i.    (1.3)

The sum runs over all the micro states. E_i is the energy of the system when it is in micro state i. The probability that the system can be found in micro state i is given by p_i. We need to impose the additional constraint that Σ_i dp_i = 0, to ensure that the total probability remains unity.
The statistical description of work is given by

W = Σ_i p_i dE_i.    (1.4)

E_i is the energy of the system when it is in the micro state labelled by the index i.
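Equations (1.3) and (1.4) say that a change in the internal energy U = Σ_i p_i E_i splits into a heat part, where the probabilities change, and a work part, where the energy levels change. A small numerical sketch (the levels and probabilities below are made up for illustration):

```python
# Internal energy U = sum_i p_i E_i. For small changes,
# dU = sum_i E_i dp_i (heat, Eq. 1.3) + sum_i p_i dE_i (work, Eq. 1.4).

E = [0.0, 1.0, 2.0]        # energy levels (arbitrary units, illustrative)
p = [0.5, 0.3, 0.2]        # occupation probabilities; they add up to 1

dE = [0.0, 0.05, 0.10]     # a small shift of the levels (work-like change)
dp = [-0.02, 0.01, 0.01]   # a small reshuffle of probabilities; adds up to 0

q = sum(Ei * dpi for Ei, dpi in zip(E, dp))    # Eq. (1.3)
W = sum(pi * dEi for pi, dEi in zip(p, dE))    # Eq. (1.4)

U_before = sum(pi * Ei for pi, Ei in zip(p, E))
U_after = sum((pi + dpi) * (Ei + dEi)
              for pi, dpi, Ei, dEi in zip(p, dp, E, dE))

# To first order dU = q + W; the residue is the second-order cross term.
cross = sum(dpi * dEi for dpi, dEi in zip(dp, dE))
print(q, W, U_after - U_before)
```

The exact change in U equals q + W plus the cross term Σ_i dp_i dE_i, which is second order in the small changes.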
An important thermodynamic property of a closed system is the Helmholtz free energy, given by

F = −kB T ln Q(T, V, N).    (1.5)
⁸ Carlo Rovelli, Seven Brief Lessons on Physics, translated by Simon Carnell and Erica Segre, Allen Lane, an imprint of Penguin Books (2015) pp. 54-55.
• The second law tells us that, come what may, an engine cannot deliver work equivalent to the energy it has drawn by heat from a heat source. However an engine can draw energy by work and deliver an exactly equivalent amount by heat. The second law is a statement of this basic asymmetry between "heat → work" and "work → heat" conversions. The second law provides a basis for the thermodynamic property called entropy. The entropy of a system increases by dS = đq/T when it absorbs reversibly đq amount of energy by heat at constant temperature. The second law says that in any process entropy increases or remains constant : ∆S ≥ 0.
• The third law tells us that entropy vanishes at absolute zero temperature. Notice that in thermodynamics only the change in entropy is defined : dS = đq/T. Since dS is an exact differential, we can assert that S is a thermodynamic variable; it describes a property of the system. The third law demands that this property be a constant (usually set to zero) at absolute zero, and hence we can assign a value of S to the object at any temperature. We can say the third law provides an absolute zero for the entropy scale.
1.4 Books
• R K Pathria, Statistical Mechanics, Second Edition, Butterworth -
Heinemann (1996). A popular book. Starts with a beautiful historical account
of the subject. Contains a very large collection of interesting, non-trivial problems,
some of which are taxing ! You will learn how to do statistical mechanics from this
book.
I would recommend you retain your copy of the first edition of Huang's book. Huang has omitted from his second edition several interesting discussions present in the first edition.
• Joon Chang Lee, Thermal Physics : Entropy and Free Energies, World Scientific (2002). Joon Chang Lee presents statistical thermodynamics in an unorthodox and distinctly original style. The presentation is so simple and so beautiful that you do not notice that the book is written in awful English and that at several places the language is flawed.
A relatively inexpensive, Wiley student edition of the book is available in the Indian
market. Buy your copy now !
• P Atkins, Four Laws that Drive the Universe, Oxford University Press (2007).
• S G Brush, The Kind of Motion We Call Heat; Book 1 : Physics and the Atomists; Book 2 : Statistical Physics and Irreversible Processes, North Holland Pub. (1976).
2.3 Probabilities
Probability is defined for an event. What is the probability of the event {H} in the toss of a coin ? One-half. This would be your immediate response. The logic is simple. There are two outcomes : "Heads" and "Tails". We have no reason to believe why the coin should prefer "Heads" over "Tails" or vice versa. Hence we say both outcomes are equally probable. What is the probability of having at least one "H" in a toss of two coins ? The event corresponding to this statement is {HH, HT, TH} and contains three elements. The sample space contains four elements. The required probability is thus 3/4. All four outcomes are equally probable¹. Then the probability of an event is the number of elements in the event divided by the number of elements in the sample space.
¹ Physicists have a name for this. They call it the axiom (or hypothesis, or assumption) of ergodicity. Strictly, ergodicity is not an assumption; it is the absence of an assumption required for assigning probabilities to events.
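The counting described above can be checked by brute-force enumeration. A small sketch:

```python
from itertools import product

# Sample space of tossing two fair coins: {HH, HT, TH, TT}.
sample_space = list(product("HT", repeat=2))

# Event: at least one "H".
event = [omega for omega in sample_space if "H" in omega]

# With all outcomes equally probable, P(event) = |event| / |sample space|.
prob = len(event) / len(sample_space)
print(prob)
```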
p(A) ≥ 0
p(A ∪ B) = p(A) + p(B) − p(A ∩ B)
p(φ) = 0 ;  p(Ω) = 1    (2.1)
Complex number, for a complex random variable³. Thus the random variable x = X(ω) is a set function.
Consider a continuous random variable x = X(ω). We define a probability density function f(x) by

f(x) dx = P(x < X(ω) ≤ x + dx).

In other words, f(x)dx is the probability of the event (measurable subset) that contains all the outcomes to which we have attached a real number between x and x + dx.
Now consider a continuous random variable defined between a and b, with a < b. We define a quantity called the "average" of the random variable x as

µ = ∫_a^b dx x f(x).
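As a quick numerical illustration of this definition (the density f(x) = 2x on [0, 1] is my choice, not from the text; it integrates to 1 and its mean is 2/3):

```python
# mu = integral_a^b x f(x) dx, evaluated by the midpoint rule.
# Illustrative density (assumed for this sketch): f(x) = 2x on [0, 1].

def f(x):
    return 2.0 * x

a, b, n = 0.0, 1.0, 100000
h = (b - a) / n
xs = (a + (k + 0.5) * h for k in range(n))
mu = h * sum(x * f(x) for x in xs)
print(mu)
```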
⁶ I remember I have seen this method described in a book on Quantum Mechanics by
Take one outcome belonging to the event Ω(n1, n2; N). There will be n1 'Heads' and n2 'Tails' in that outcome. Imagine for a moment that all these 'Heads' are distinguishable. If you like, you can label them as H1, H2, · · · , H_n1. Carry out permutations of all the 'Heads' and produce n1! new configurations. From each of these new configurations, produce n2! configurations by carrying out the permutations of the n2 'Tails'. Thus from one outcome belonging to Ω(n1, n2; N), we have produced n1! × n2! new configurations. Repeat the above for each element of the set Ω(n1, n2; N), and produce Ω̂(n1, n2; N) n1! n2! configurations. A moment of thought will tell you that this number should be the same as N!. Therefore

Ω̂(n1, n2; N) × n1! × n2! = N!
Let us work out an example explicitly to illustrate the above. Consider tossing of five coins. There are 2^5 = 32 outcomes/micro states, listed below. The number in brackets against each outcome denotes the number of "Heads" in that outcome.

1. H H H H H (5)     17. T H H H H (4)
2. H H H H T (4)     18. T H H H T (3)
3. H H H T H (4)     19. T H H T H (3)
4. H H H T T (3)     20. T H H T T (2)
5. H H T H H (4)     21. T H T H H (3)
6. H H T H T (3)     22. T H T H T (2)
7. H H T T H (3)     23. T H T T H (2)
8. H H T T T (2)     24. T H T T T (1)
9. H T H H H (4)     25. T T H H H (3)
10. H T H H T (3)    26. T T H H T (2)
11. H T H T H (3)    27. T T H T H (2)
12. H T H T T (2)    28. T T H T T (1)
13. H T T H H (3)    29. T T T H H (2)
14. H T T H T (2)    30. T T T H T (1)
15. H T T T H (2)    31. T T T T H (1)
16. H T T T T (1)    32. T T T T T (0)

The outcomes numbered 4, 6, 7, 10, 11, 13, 18, 19, 21, 25 are the ones with three "Heads" and two "Tails". These are the elements of the event Ω(n1 = 3, n2 = 2; N = 5).
Take outcome No. 4. Label the three heads as H1, H2 and H3. Carry out permutations of the three "Heads" and produce 3! = 6 elements. These are

(4) H H H T T
H2 H3 H1 T T
H1 H2 H3 T T
H3 H1 H2 T T
H1 H3 H2 T T
H3 H2 H1 T T
H2 H1 H3 T T

Take an element from the above set. Label the 'Tails' as T1 and T2. Carry out permutations of the 'Tails' and produce 2! = 2 elements. Do this for each of the above six elements.
Thus from outcome No. 4 we have produced 3! × 2! = 12 outcomes, listed below.

(4) H H H T T
H2 H3 H1 T1 T2
H1 H2 H3 T1 T2
H2 H3 H1 T2 T1
H1 H2 H3 T2 T1
H3 H1 H2 T1 T2
H1 H3 H2 T1 T2
H3 H1 H2 T2 T1
H1 H3 H2 T2 T1
H3 H2 H1 T1 T2
H2 H1 H3 T1 T2
H3 H2 H1 T2 T1
H2 H1 H3 T2 T1

Repeat the above exercise on the outcomes numbered 6, 7, 10, 11, 13, 18, 19, 21, 25. The table below depicts the results for outcome No. 6.
(6) H H T H T
H2 H3 T1 H1 T2
H1 H2 T1 H3 T2
H2 H3 T2 H1 T1
H1 H2 T2 H3 T1
H3 H1 T1 H2 T2
H1 H3 T1 H2 T2
H3 H1 T2 H2 T1
H1 H3 T2 H2 T1
H3 H2 T1 H1 T2
H2 H1 T1 H3 T2
H3 H2 T2 H1 T1
H2 H1 T2 H3 T1
Ω̂(n1 = 3, n2 = 2; N = 5) = 5!/(3! 2!) = 10.
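We can verify this count by enumerating all 32 outcomes on a computer:

```python
from itertools import product
from math import factorial

# All 2^5 = 32 outcomes of tossing five coins; the event with exactly
# three 'Heads' is Omega(n1 = 3, n2 = 2; N = 5).
outcomes = list(product("HT", repeat=5))
event = [w for w in outcomes if w.count("H") == 3]

# The counting argument gives Omega-hat = N! / (n1! n2!).
omega_hat = factorial(5) // (factorial(3) * factorial(2))
print(len(outcomes), len(event), omega_hat)
```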
⁷ Ω̂(n1, n2; N) is called the binomial coefficient. The sum runs over all possible values of {n1, n2}. The superscript ⋆ on the summation sign should remind us that only those values of n1 and n2 consistent with the constraint n1 + n2 = N are permitted.
⁸ For example, the given coin is a system. Let p denote the probability of "Heads" and q = 1 − p the probability of "Tails". The coin can be in a micro state "Heads" or in a micro state "Tails".
ensemble are in the same macro state⁹. However they can be in different micro states. Let the micro states of the system under consideration be indexed by {i = 1, 2, · · · }. The number of elements of the ensemble in micro state j divided by the size of the ensemble equals the probability of the system to be in micro state j. It is intuitively clear that the size of the ensemble should be large (→ ∞) so that it can capture exactly the probabilities of the different micro states of the system¹⁰. Let me elaborate on this issue below.
Ω̂_m(N) = Ω̂(n1 = n2 = N/2; N) = N! / [(N/2)! (N/2)!]    (2.4)

Let us employ the first Stirling approximation¹² : N! ≈ N^N exp(−N), and get

Ω̂_m(N) = N^N exp(−N) / [(N/2)^(N/2) exp(−N/2)]^2 = 2^N    (2.5)
The above implies that almost all the outcomes of the experiment belong to the event with equal number of 'Heads' and 'Tails'. The outcomes with unequal number of 'Heads' and 'Tails' are so overwhelmingly few that the difference falls within the small error arising due to the first Stirling approximation.
For estimating the tiny difference between Ω̂(N) and Ω̂_m(N), let us employ
¹² First Stirling approximation : N! ≈ N^N exp(−N).
We have

N! = N × (N − 1) × · · · × 3 × 2 × 1

ln N! = ln 1 + ln 2 + ln 3 + · · · + ln N = Σ_{k=1}^{N} ln k
      ≈ ∫_1^N ln x dx = (x ln x − x)|_1^N = N ln N − N + 1
      ≈ N ln N − N

Hence N! ≈ N^N exp(−N).
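The quality of the two Stirling formulas can be checked numerically; ln N! is available exactly through the log-gamma function:

```python
from math import lgamma, log, pi

# ln N! exactly (via the log-gamma function), compared against the two
# Stirling formulas quoted in the footnotes.
def ln_factorial(N):
    return lgamma(N + 1)

def stirling1(N):              # N! ~ N^N exp(-N)
    return N * log(N) - N

def stirling2(N):              # N! ~ N^N exp(-N) sqrt(2 pi N)
    return N * log(N) - N + 0.5 * log(2 * pi * N)

for N in (10, 100, 1000):
    exact = ln_factorial(N)
    print(N, exact - stirling1(N), exact - stirling2(N))
```

The error of the first formula grows like (1/2) ln(2πN), while the second formula's error decays like 1/(12N).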
the second Stirling approximation¹³ : N! ≈ N^N exp(−N) √(2πN), and get

Ω̂_m(N) = Ω̂(n1 = n2 = N/2; N) = 2^N × √(2/(πN)).

Thus we have,

Ω̂_m(N)/Ω̂(N) = √(2/(πN))    (2.6)
             ∝ 1/√N    (2.7)
Let us define S(Ω(n1, n2; N)) = ln Ω̂(n1, n2; N) as the entropy of the event Ω(n1, n2; N). For n1 = n2 = N/2 we get the event with the largest entropy. Let us denote it by the symbol S_B(N).
Let S_G = S(Ω(N)) = N ln 2 denote the entropy of the sure event. It is the logarithm of the number of outcomes of the experiment of tossing N independent and fair coins.
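The difference S_G − S_B can be computed directly; by Eq. (2.6) it should be about (1/2) ln(πN/2), negligible compared to S_G = N ln 2:

```python
from math import comb, log, pi

# S_G = N ln 2: entropy of the sure event.
# S_B = ln C(N, N/2): entropy of the largest event, n1 = n2 = N/2.
def S_G(N):
    return N * log(2.0)

def S_B(N):
    return log(comb(N, N // 2))   # math.log accepts big integers

for N in (100, 1000, 10000):
    print(N, S_G(N) - S_B(N), 0.5 * log(pi * N / 2))
```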
¹³ A better approximation to large factorials is given by Stirling's second formula : N! ≈ N^N exp(−N) √(2πN).
We have

Γ(N + 1) = N! = ∫_0^∞ dx x^N e^(−x) = ∫_0^∞ dx e^(F(x))

F(x) = N ln x − x

F′(x) = N/x − 1 ;  F″(x) = −N/x²

Set F′(x) = 0; this gives x⋆ = N. At x = x⋆ the function F(x) is maximum. (Note : F″(x = x⋆) is negative.) Carrying out a Taylor expansion and retaining terms only up to quadratic order, we get,

F(x) = F(x⋆) + [(x − x⋆)²/2] F″(x = x⋆) = N ln N − N − (x − N)²/(2N)

We have,

N! = ∫_0^∞ dx e^(F(x)) = N^N e^(−N) ∫_0^∞ dx exp(−(x − N)²/(2N))
   = N^N e^(−N) √N ∫_(−√N)^∞ dx exp(−x²/2)
   ∼ N^N e^(−N) √(2πN)  for N → ∞.
The table below gives, for ε = 1/100, the probability P_out that n falls outside the interval (N/2)(1 − ε) ≤ n ≤ (N/2)(1 + ε).

N : P_out
10^3 : 0.752
10^4 : 0.317
10^5 : 0.002
10^6 : 1.5 × 10^(−23)
10^7 : 2.7 × 10^(−2174)
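The first few entries of this table appear to follow from the Gaussian approximation to the binomial: n has mean N/2 and standard deviation √N/2, so P_out ≈ erfc(ε√N/√2). A sketch of this reading (the exact binomial sum gives slightly different numbers):

```python
from math import erfc, sqrt

# Gaussian approximation: n ~ Normal(N/2, N/4), so the standardized
# half-width of the window is z = (eps N/2) / (sqrt(N)/2) = eps sqrt(N),
# and P_out = 2 [1 - Phi(z)] = erfc(z / sqrt(2)).
def p_out(N, eps=0.01):
    z = eps * sqrt(N)
    return erfc(z / sqrt(2.0))

for N in (10**3, 10**4, 10**5, 10**6):
    print(N, p_out(N))
```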
{T, H, H, H, T, H, H, T, H, T}.
Notice that the ensemble contains ten elements. Six elements are H and four are T. This is consistent with the given probabilities : P(H) = 6/10 and P(T) = 4/10.
However a Gibbs ensemble is constructed by actually tossing N identical and independent coins. In the limit N → ∞, sixty percent of the coins shall be in micro state H and forty percent in micro state T. To ensure this we need to take the size of the ensemble, N, to be very large. How large ? You will get an answer to this question in what follows.
Let us say we attempt to construct the ensemble by actually carrying out the experiment of tossing identical coins, or by tossing the same coin several times independently. What is the probability that in the experiment there shall be n1 'Heads' and hence n2 = N − n1 'Tails' ? Let us denote this by the
Figure 3.1: Binomial distribution : B(n; N) = [N!/(n! (N − n)!)] p^n (1 − p)^(N−n) with N = 10; B(n; N) versus n, depicted as sticks. (Left) p = 0.5; (Right) p = 0.35.
also called the mean, the first moment, the expectation value, etc. It is denoted by the symbol ⟨n⟩ and is given by,

⟨n⟩ = Σ_{n=0}^{N} n B(n; N) = Σ_{n=1}^{N} n [N!/(n! (N − n)!)] p^n q^(N−n),

    = Np Σ_{n=1}^{N} [(N − 1)!/((n − 1)! (N − 1 − (n − 1))!)] p^(n−1) q^(N−1−(n−1)),

    = Np Σ_{n=0}^{N−1} [(N − 1)!/(n! (N − 1 − n)!)] p^n q^(N−1−n),

    = Np (p + q)^(N−1) = Np.
We have,

⟨n(n − 1)⟩ = N(N − 1)p²,
⟨n²⟩ − ⟨n⟩ = N²p² − Np²,
⟨n²⟩ = N²p² − Np² + Np,
σ_n² = ⟨n²⟩ − ⟨n⟩² = Npq.    (3.4)
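Both results, ⟨n⟩ = Np and σ_n² = Npq, can be verified by direct summation over the distribution:

```python
from math import comb

# <n> = Np and sigma_n^2 = Npq, verified by direct summation over
# B(n; N) = C(N, n) p^n q^(N - n).
N, p = 20, 0.35
q = 1.0 - p
B = [comb(N, n) * p**n * q**(N - n) for n in range(N + 1)]

mean = sum(n * B[n] for n in range(N + 1))
var = sum(n * n * B[n] for n in range(N + 1)) - mean**2
print(mean, var)
```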
The square root of the variance is called the standard deviation. A relevant quantity is the relative standard deviation, given by the ratio of the standard deviation to the mean. For the Binomial random variable we have,

σ_n/⟨n⟩ = (1/√N) √(q/p).    (3.5)

The relative standard deviation is inversely proportional to √N. It is small for large N. It is clear that the number of elements N, in a Gibbs ensemble
z B̃′(z) = Σ_{n=0}^{N} n z^n B(n).    (3.8)

d²B̃/dz² = Σ_{n=0}^{N} n(n − 1) z^(n−2) B(n),

z² d²B̃/dz² = Σ_{n=0}^{N} n(n − 1) z^n B(n).    (3.10)
d²B̃/dz² = N(N − 1)(q + zp)^(N−2) p²,    (3.15)

⟨n(n − 1)⟩ = [d²B̃/dz²]_{z=1} = N(N − 1)p².    (3.16)
The table below gives the probabilities calculated from the Binomial distribution.

n : B(n)      n : B(n)
0 : 0.0001    6 : 0.2508
1 : 0.0016    7 : 0.2150
2 : 0.0106    8 : 0.1209
3 : 0.0425    9 : 0.0403
4 : 0.1115   10 : 0.0060
5 : 0.2007

Consider the same problem with v = 10^(−3) m³ and N = 10^5. We have
Np = Nv/V = ρv = constant.
We shall show below that in this limit the Binomial distribution goes over to the Poisson distribution.
B̃(z) = (q + zp)^N.    (3.18)

¹ Note that for a physicist, large is infinity and small is zero.
B̃(z) = (1 − p)^N [1 + zp/q]^N,    (3.19)

     = [1 − (Np)(1/N)]^N [1 + (zNp/q)(1/N)]^N.    (3.20)

In the above replace Np by µ and q by 1 to get,

B̃(z) = (1 − µ/N)^N (1 + zµ/N)^N.
In the limit N → ∞ we have, by definition²,

B̃(z) ∼ exp(−µ) exp(zµ) = P̃(z).    (3.21)

Thus in the limit N → ∞, p → 0 with Np = µ, we find B̃(z) → P̃(z), given by
³ We shall come across the Poisson distribution in the context of Maxwell-Boltzmann statistics. Let n_k denote the number of 'indistinguishable classical' particles in a single-particle state k. The random variable n_k is Poisson-distributed.
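The limit can be watched numerically: hold µ = Np fixed and let N grow:

```python
from math import comb, exp, factorial

# Hold mu = Np fixed, let N grow and p = mu/N shrink: B(n; N, p)
# approaches the Poisson probability exp(-mu) mu^n / n!.
mu = 2.0

def binomial(n, N, p):
    return comb(N, n) * p**n * (1.0 - p)**(N - n)

def poisson(n, mu):
    return exp(-mu) * mu**n / factorial(n)

worst = {}
for N in (10, 100, 10000):
    p = mu / N
    worst[N] = max(abs(binomial(n, N, p) - poisson(n, mu))
                   for n in range(10))
    print(N, worst[N])
```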
Figure 3.2: Poisson distribution with mean µ, depicted as sticks; Gaussian distribution with mean µ and variance σ² = µ, depicted by a continuous line. (Left) µ = 1.5; (Right) µ = 9.5.
φ_Y(k) = ∫_{−∞}^{+∞} dx1 ∫_{−∞}^{+∞} dx2 · · · ∫_{−∞}^{+∞} dxN exp[−ik (x1 + x2 + · · · + xN)/N] f(x1, x2, · · · , xN).    (3.28)

The random variables are independent. Hence f(x1, x2, · · · , xN) = f(x1) f(x2) · · · f(xN). We have,

φ_Y(k) = ∫_{−∞}^{+∞} dx1 exp(−ikx1/N) f(x1) ∫_{−∞}^{+∞} dx2 exp(−ikx2/N) f(x2) · · · ∫_{−∞}^{+∞} dxN exp(−ikxN/N) f(xN),

       = [∫_{−∞}^{+∞} dx exp(−ikx/N) f(x)]^N.
Carry out the power series expansion of the exponential function and get,

P̃(k; µ) = exp[µ Σ_{n=1}^{∞} (−ik)^n/n!].    (3.30)

We recognise the above as the cumulant expansion of a distribution for which all the cumulants are the same, µ. For large values of µ it is adequate to consider only small values of k. Hence we retain only terms up to quadratic in k. Thus for small k we have,

P̃(k) = exp[−ikµ − (k²/2!) µ].    (3.31)

The above is the Fourier transform, or the characteristic function, of a Gaussian random variable with mean µ and variance also µ.
Thus in the limit µ → ∞, a Gaussian distribution with mean and variance both equal to µ is a good approximation to the Poisson distribution with mean µ, see Fig. 3.2.
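A quick numerical comparison of the Poisson probabilities with the Gaussian density (here µ = 100; the pmf is computed in log-space to avoid overflow):

```python
from math import exp, lgamma, log, pi, sqrt

# Poisson pmf (in log-space) against a Gaussian with the same mean mu
# and variance mu.
mu = 100.0

def poisson(n, mu):
    return exp(n * log(mu) - mu - lgamma(n + 1))

def gauss(x, mu):
    return exp(-(x - mu)**2 / (2.0 * mu)) / sqrt(2.0 * pi * mu)

worst = max(abs(poisson(n, mu) - gauss(n, mu)) for n in range(50, 151))
print(worst)
```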
3.9 Gaussian
A Gaussian of mean µ and variance σ² is given by

G(x) = [1/(σ√(2π))] exp[−(x − µ)²/(2σ²)].    (3.32)
The characteristic function is given by the Fourier transform, formally expressed as G̃(k) = ∫_{−∞}^{+∞} dx exp(−ikx) G(x). The integral can be worked out, and I leave it as an exercise. We get G̃(k) = exp[−ikµ − k²σ²/2]. Consider a Gaussian of mean zero and variance σ². It is given by

g(x) = [1/(σ√(2π))] exp[−x²/(2σ²)].    (3.33)
The width of the Gaussian distribution is 2σ. The Fourier transform of g(x) is denoted g̃(k) and is given by

g̃(k) = exp[−(1/2) k² σ²].    (3.34)

The Fourier transform is also a Gaussian, with zero mean and standard deviation 1/σ. The width of g̃(k) is 2/σ. The product of the width of g(x) and the width of its Fourier transform g̃(k) is 4.
If g(x) is sharply peaked then its Fourier transform ge(k) will be broad and
vice versa.
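This reciprocity can be checked by computing the Fourier transform numerically; since g(x) is even, the transform reduces to a cosine integral:

```python
from math import cos, exp, pi, sqrt

# Numerical check of g~(k) = exp(-k^2 sigma^2 / 2). g(x) is even, so the
# Fourier transform reduces to 2 * integral_0^inf cos(kx) g(x) dx,
# evaluated here by the midpoint rule on a truncated interval.
sigma = 2.0

def g(x):
    return exp(-x * x / (2.0 * sigma * sigma)) / (sigma * sqrt(2.0 * pi))

def g_tilde(k, L=50.0, n=20000):
    h = L / n
    return 2.0 * h * sum(cos(k * (j + 0.5) * h) * g((j + 0.5) * h)
                         for j in range(n))

for k in (0.0, 0.5, 1.0):
    print(k, g_tilde(k), exp(-k * k * sigma * sigma / 2.0))
```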
4 Isolated System: Micro Canonical Ensemble
4.1 Preliminaries
Our aim is to study an isolated system of N point particles with energy E, confined to a volume V. The particles do not interact with each other. We shall see how to count the number of micro states, denoted by the symbol Ω̂, of the system. Ω̂ will be, in general, a function of the energy E, the volume V and the number of particles N. We shall carry out both classical as well as quantum counting of the micro states and find that both lead to the same expression for entropy.
Before we address the full problem, we shall consider a simpler problem of counting the micro states taking into account only the spatial coordinates, neglecting completely the momentum coordinates. Despite this gross simplification, we shall discover that the machinery of statistical mechanics helps us derive the ideal gas law¹.
¹ I must tell you of a beautiful derivation of the ideal gas law by Daniel Bernoulli (1700-1782). It goes as follows. Bernoulli imagined air to be made of billiard balls, all the time in motion, colliding with each other and with the walls of the container. When a billiard ball bounces off the wall, it transmits a certain momentum to the wall, and Bernoulli imagined it as pressure. It makes sense. First consider air contained in a cube of side one metre. There is a certain amount of pressure felt by the wall. Now imagine the cube length to be doubled without changing the speeds of the molecules. In modern language this assumption is the same as keeping the temperature constant. The momentum transferred per collision remains the same. However, since each billiard-ball molecule has to travel twice the distance between two successive collisions with the wall, the force on the wall should be smaller by a factor of two. Also, pressure is force per unit area. The area of the side of the cube is four times more now. Hence the pressure should be less by a further factor of four. Taking into account both these factors, we find the pressure should be eight times less. We also find the volume of the cube is eight times more. From these considerations, Bernoulli concluded that the product of pressure and volume must be a constant when there is no change in the molecular speeds : a brilliant argument based on simple scaling ideas.

Figure 4.1: Two ways of keeping a particle in a box divided into two equal parts. ε = V/2.

We have Ω̂(V, N = 1, ε = V/2) = V/ε = 2, and S = kB ln Ω̂ = kB ln(2). Now consider two distinguishable particles in these two cells, each of volume ε = V/2, see Fig. (4.2).

Figure 4.2: Four ways of keeping two distinguishable particles in a box divided into two equal halves. ε = V/2.

We then have Ω̂(V, N = 2, ε = V/2) = (V/ε)² = 4, and
for general N,

Ω̂(V, N, ε = V/2) = (V/ε)^N = 2^N,    (4.1)

S = kB ln Ω̂ = N kB ln(2).    (4.2)
Let us now divide the volume equally into V/ε parts and count the number of ways of organizing N (distinguishable) particles. We find Ω̂(V, N) = (V/ε)^N, and

S = kB ln Ω̂ = N kB ln(V/ε)
  = N kB ln V − N kB ln ε.    (4.3)
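For small numbers, the count (V/ε)^N can be verified by explicit enumeration (four cells and three particles in the sketch below):

```python
from itertools import product
from math import log

# Place N distinguishable particles into (V/eps) cells: each particle
# independently picks a cell, so Omega-hat = (V/eps)^N, Eqs. (4.1)-(4.3).
cells, N = 4, 3                 # small numbers keep the enumeration tiny
configs = list(product(range(cells), repeat=N))
omega_hat = len(configs)

S = log(omega_hat)              # entropy in units of k_B
print(omega_hat, S, N * log(cells))
```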
PV = NkB T. (4.6)
dU = T dS − P dV. (4.8)
We have then,

T = (∂U/∂S)_V ;    (4.9)

P = −(∂U/∂V)_S.    (4.10)
Let us now consider S ≡ S(U, V), a natural starting point for statistical mechanics. We have,

dS = (∂S/∂U)_V dU + (∂S/∂V)_U dV.    (4.11)

dS = (1/T) dU + (P/T) dV.    (4.12)

Equating the pre-factors of dU and dV in the above two equations, we get,

(∂S/∂U)_V = 1/T ;    (4.13)

(∂S/∂V)_U = P/T.    (4.14)
dS = (N kB/V) dV.    (4.15)

Employing the equation of state PV = N kB T, which we have derived, we can rewrite the above as

dS = P dV/T.    (4.16)
Consider an isothermal process in an ideal gas. We have dU = 0. This
implies T dS = P dV . When the system absorbs a certain quantity of heat q
S(V, N) = kB ln Ω̂(V, N),    (4.20)
        = kB [N ln V − N ln N + N − N ln ε],    (4.21)
        = N kB ln(V/N) + N kB − N kB ln ε.    (4.22)
² Desperate and often elaborate patchwork is not new to physicists. They have always indulged in 'papering over' when cracks appear in their understanding of science. A spectacular example is the entity aether, proposed to justify the wave nature of light: Maxwell's work showed light is a wave, waves require a medium for propagation, and hence the medium aether, with exotic properties, was proposed to carry light.
where ε > 0.
Define

Θ(x) = lim_{ε → 0} f(x; ε).

Θ(x) is called the step function, Heaviside step function, unit step function, or theta function. It is given by,

Θ(x) = 0 for −∞ ≤ x < 0 ;
     = 1 for 0 < x ≤ +∞.    (4.26)
Figure (4.3) depicts f (x; ǫ) for ǫ = 2, 1, 1/2 and the theta function obtained
in the limit of ǫ → 0.
Figure 4.3: f(x; ε) versus x. (Top Left) ε = 2; (Top Right) ε = 1; (Bottom Left) ε = 1/2; (Bottom Right) Θ(x).

Figure 4.4: g(x; ε) = df/dx versus x. (Top Left) ε = 2; (Top Right) ε = 1; (Bottom Left) ε = 1/2; (Bottom Right) ε = 1/4.
We find that the integral is the same for all values of ε. This gives us an important property of the Dirac-delta function:

∫_{−∞}^{+∞} dx δ(x) = 1.    (4.30)
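The excerpt does not reproduce the explicit form of f(x; ε), so as a stand-in take a logistic smoothed step, which also tends to Θ(x) as ε → 0; its derivative g(x; ε) = df/dx is a peak of width ~ ε whose integral is 1 for every ε, the property stated in Eq. (4.30):

```python
from math import exp

# Stand-in smoothed step (the text's f(x; eps) is not reproduced here):
# the logistic f(x; eps) = 1 / (1 + exp(-x/eps)) tends to Theta(x) as
# eps -> 0. Its derivative g(x; eps) = df/dx integrates to 1 for every
# eps, as in Eq. (4.30).
def g(x, eps):
    s = 1.0 / (1.0 + exp(-x / eps))
    return s * (1.0 - s) / eps

def integral(eps, L=60.0, n=120000):
    h = 2.0 * L / n
    return h * sum(g(-L + (j + 0.5) * h, eps) for j in range(n))

for eps in (2.0, 1.0, 0.5, 0.25):
    print(eps, integral(eps))
```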
Let

y_i = x_i/R for i = 1, 2.

Then,

V_2(R) = R² ∫_{−∞}^{+∞} dy1 ∫_{−∞}^{+∞} dy2 Θ(R² (1 − Σ_{i=1}^{2} y_i²)).    (4.32)

We have

Θ(λx) = Θ(x) ∀ λ > 0.

Therefore,

V_2(R) = R² ∫_{−∞}^{+∞} dy1 ∫_{−∞}^{+∞} dy2 Θ(1 − Σ_{i=1}^{2} y_i²)    (4.33)
       = R² V_2(R = 1).    (4.34)
We can now write Eq. (4.31) as

V_2(R = 1) R² = ∫_{−∞}^{+∞} dx1 ∫_{−∞}^{+∞} dx2 Θ(R² − Σ_{i=1}^{2} x_i²).    (4.35)

Now differentiate both sides of the above equation with respect to the variable R. We have already seen that the derivative of a Theta function is the Dirac-delta function. Therefore

V_2(R = 1) 2R = 2R ∫_{−∞}^{+∞} dx1 ∫_{−∞}^{+∞} dx2 δ(R² − Σ_{i=1}^{2} x_i²).    (4.36)
Now multiply both sides of the above equation by exp(−R²) dR and integrate over the variable R from 0 to ∞. We get,

V_2(R = 1) ∫_0^∞ exp(−R²) 2R dR = ∫_0^∞ exp(−R²) 2R dR ∫_{−∞}^{+∞} dx1 ∫_{−∞}^{+∞} dx2 δ(R² − Σ_{i=1}^{2} x_i²).    (4.37)

V_2(R = 1) ∫_0^∞ dt exp(−t) = ∫_{−∞}^{+∞} dx1 ∫_{−∞}^{+∞} dx2 exp(−x1² − x2²).    (4.38)

V_2(R = 1) × 1 = [∫_{−∞}^{+∞} dx exp(−x²)]².    (4.39)

V_2(R = 1) = [2 ∫_0^∞ dx exp(−x²)]² = [∫_0^∞ dx x^(−1/2) exp(−x)]².    (4.40)

V_2(R = 1) = [∫_0^∞ x^((1/2)−1) exp(−x) dx]² = [Γ(1/2)]² = π.    (4.41)
y_i = x_i/R ;  dx_i = R dy_i  ∀ i = 1, · · · , N ;

Θ(R² [1 − Σ_{i=1}^{N} y_i²]) = Θ(1 − Σ_{i=1}^{N} y_i²).
We have,

V_N(R) = R^N ∫_{−∞}^{+∞} dy1 ∫_{−∞}^{+∞} dy2 · · · ∫_{−∞}^{+∞} dyN Θ(1 − Σ_{i=1}^{N} y_i²) ;    (4.43)

       = V_N(R = 1) R^N.    (4.44)
Differentiate both sides of the above expression with respect to R and get,

N V_N(R = 1) R^(N−1) = ∫_{−∞}^{+∞} dx1 ∫_{−∞}^{+∞} dx2 · · · ∫_{−∞}^{+∞} dxN δ(R² − Σ_{i=1}^{N} x_i²) 2R.    (4.46)
Now, multiply both sides by exp(−R2 )dR and integrate over R from 0 to ∞.
The Left Hand Side:
\[
\text{LHS} = N V_N(R=1) \int_0^\infty dR\, e^{-R^2}\, R^{N-1} \tag{4.47}
\]
\[
= V_N(R=1)\, \frac{N}{2}\, \Gamma\left( \frac{N}{2} \right)
= \Gamma\left( \frac{N}{2} + 1 \right) V_N(R=1). \tag{4.48}
\]
4.12. CLASSICAL COUNTING OF MICRO STATES 53
Substituting t = R²; dt = 2R dR, the right hand side becomes
\[
\text{RHS} = \int_0^\infty dt\, e^{-t}
\int_{-\infty}^{+\infty} dx_1 \int_{-\infty}^{+\infty} dx_2 \cdots \int_{-\infty}^{+\infty} dx_N\,
\delta\left( t - \sum_{i=1}^{N} x_i^2 \right),
\]
\[
= \int_{-\infty}^{+\infty} dx_1 \int_{-\infty}^{+\infty} dx_2 \cdots \int_{-\infty}^{+\infty} dx_N\,
\exp\left[ -(x_1^2 + x_2^2 + \cdots + x_N^2) \right],
\]
\[
= \left( \int_{-\infty}^{+\infty} dx\, e^{-x^2} \right)^N = \pi^{N/2}. \tag{4.50}
\]
Thus we get
\[
V_N(R=1) = \frac{\pi^{N/2}}{\Gamma\left( \frac{N}{2} + 1 \right)}. \tag{4.51}
\]
\[
V_N(R) = \frac{\pi^{N/2}}{\Gamma\left( \frac{N}{2} + 1 \right)}\, R^N. \tag{4.52}
\]
\[
\widehat{\Omega}(E, V, N) = \frac{V^N}{h^{3N}}\,
\frac{(2\pi m E)^{3N/2}}{\Gamma\left( \frac{3N}{2} + 1 \right)}. \tag{4.53}
\]
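Equation (4.51) is easy to check numerically. The sketch below compares the closed form π^{N/2}/Γ(N/2 + 1) with a Monte Carlo estimate obtained by sampling the enclosing cube [−1, 1]^N; the sample count and seed are illustrative choices.

```python
import math
import random

def v_unit_ball(n):
    # Closed-form volume of the unit n-dimensional sphere, Eq. (4.51)
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1)

def v_monte_carlo(n, samples=200000, seed=7):
    # Estimate the same volume by sampling the cube [-1, 1]^n
    # and counting the fraction of points inside the unit ball.
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(samples)
        if sum(rng.uniform(-1.0, 1.0) ** 2 for _ in range(n)) <= 1.0
    )
    return (2.0 ** n) * hits / samples

for n in (2, 3, 4):
    print(n, round(v_unit_ball(n), 4), round(v_monte_carlo(n), 4))
```

For N = 2 and N = 3 the closed form reproduces the familiar π and 4π/3, and the Monte Carlo estimates agree to within sampling noise.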
Let us take the partial derivative of Ω̂(E, V, N) with respect to E and get,
\[
g(E, V, N) = \frac{V^N}{h^{3N}}\,
\frac{(2\pi m)^{3N/2}}{\Gamma\left( \frac{3N}{2} + 1 \right)}\,
\frac{3N}{2}\, E^{(3N/2) - 1}. \tag{4.56}
\]
Let us substitute N = 1 in the above and get the single particle density of states, g(E, V), as
\[
g(E, V) = \frac{V}{h^3}\, \frac{\pi}{4}\, (8m)^{3/2}\, E^{1/2}. \tag{4.57}
\]
law behaviour.
\[
\frac{V_N(R) - V_N(R - \Delta R)}{V_N(R)}
= \frac{R^N - (R - \Delta R)^N}{R^N}
= 1 - \left( 1 - \frac{\Delta R}{R} \right)^N \to 1 \text{ for } N \to \infty. \tag{4.58}
\]
Consider the case with R = 1 and ΔR = 0.1. The percentage of the total volume contained in the outermost shell of an N-dimensional sphere,
\[
\frac{V_N(R=1) - V_N(R=0.9)}{V_N(R=1)} \times 100
= \left[ 1 - (0.9)^N \right] \times 100,
\]
is given in the following table for several values of N.

    N     shell volume (%)
    1         10.000
    2         19.000
    3         27.100
    4         34.390
    5         40.951
    6         46.856
    7         52.170
    8         56.953
    9         61.258
   10         65.132
   20         87.842
   40         98.522
   60         99.820
   80         99.978
  100         99.997
Hence in the limit of N → ∞ the number of micro states with energy less
than or equal to E is nearly the same as the number of micro states with
energy between E − ∆E and E.
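The table entries follow from the closed form 1 − (1 − ΔR/R)^N of Eq. (4.58); a few lines of code reproduce them and show how quickly the volume concentrates in the outermost shell.

```python
def shell_fraction(n, dr=0.1):
    # Fraction of the unit N-ball's volume lying in the outer shell
    # of thickness dr, Eq. (4.58) with R = 1.
    return 1.0 - (1.0 - dr) ** n

for n in (1, 2, 3, 10, 20, 100):
    print(n, f"{100 * shell_fraction(n):.3f}%")
```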
\[
\widehat{\Omega}(E, V, N) = \frac{V^N}{h^{3N}}\, \frac{1}{N!}\,
\frac{(2\pi m E)^{3N/2}}{\Gamma\left( \frac{3N}{2} + 1 \right)}. \tag{4.60}
\]
The above is called the equipartition theorem. Each quadratic term⁵ in the Hamiltonian carries an energy of k_B T/2.

⁵ For an ideal gas, \( H = \sum_{i=1}^{3N} \frac{p_i^2}{2m} \). There are 3N quadratic terms in the Hamiltonian.
4.15. PROPERTIES OF AN IDEAL GAS 57
4.15.3 Pressure
The pressure of an isolated system of ideal gas as a function of E, V , and N,
is given by,
\[
\left( \frac{\partial S}{\partial V} \right)_{E,N} = \frac{P}{T} = \frac{N k_B}{V}, \tag{4.64}
\]
\[
P = \frac{N k_B T}{V}. \tag{4.65}
\]
Substituting in the above T as a function of E, V , and N, see Eq. (4.62), we
get,
\[
P = \frac{2E}{3V}. \tag{4.66}
\]
Therefore,
\[
\mu = -k_B T \ln\left( \frac{V}{N} \right)
- \frac{3}{2} k_B T \ln\left( \frac{4\pi m E}{3 N h^2} \right). \tag{4.68}
\]
Substituting in the above the expression for T in terms of E and N, from Eq. (4.62), we get the micro canonical chemical potential⁶,
\[
\mu = -\frac{2E}{3N} \ln\left( \frac{V}{N} \right)
- \frac{E}{N} \ln\left( \frac{4\pi m E}{3 N h^2} \right). \tag{4.69}
\]

⁶ Chemical potential for an isolated system; it is expressed as a function of E, V, and N.
In the above expression for the micro canonical chemical potential, let us substitute E = 3N k_B T/2 and get the canonical chemical potential⁷,
\[
\mu = -k_B T \ln\left( \frac{V}{N} \right)
- \frac{3}{2} k_B T \ln\left( \frac{2\pi m k_B T}{h^2} \right)
\]
\[
= -k_B T \ln\left( \frac{V}{N} \right) + 3 k_B T \ln(\Lambda), \tag{4.70}
\]
where Λ is the thermal or quantum wavelength⁸ given by,
\[
\Lambda = \frac{h}{\sqrt{2\pi m k_B T}}. \tag{4.71}
\]
With ρ = N/V denoting the number density, the chemical potential can be written as
\[
\mu = k_B T \ln \rho + k_B T \ln \Lambda^3 = k_B T \ln( \rho \Lambda^3 ). \tag{4.72}
\]
• When the density of particles is small and the temperature is high, ρΛ³ ≪ 1. Note that the thermal wavelength Λ is inversely proportional to the square root of temperature; hence it is small at high temperatures. Thus at high temperatures and/or low densities, the chemical potential is large and negative.
⁷ Chemical potential of a closed system; it is expressed as a function of T, V, and N.
⁸ Consider a particle with energy k_B T. We have
\[
E = k_B T = \frac{p^2}{2m}; \quad p^2 = 2 m k_B T; \quad p = \sqrt{2 m k_B T}.
\]
The de Broglie wavelength associated with a particle having momentum p is
\[
\Lambda = \frac{h}{p} = \frac{h}{\sqrt{2 m k_B T}}.
\]
4.16. QUANTUM COUNTING OF MICRO STATES 59
Figure (4.5) shows typical classical behaviour of chemical potential with change
of temperature.
[Figure omitted: plot of µ versus T.]

Figure 4.5: µ versus T for a classical ideal gas; µ = −(3T/2) ln(T). We have set k_B = h = ρ/(2πm)^{3/2} = 1.

I have set k_B = h = ρ/(2πm)^{3/2} = 1 for producing the above graph. From
the figure we observe that at high temperatures µ is negative; at low temper-
atures it is positive; as T decreases µ increases, reaches a maximum and then
decreases to zero at zero temperature.
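The behaviour described above can be reproduced directly from µ(T) = −(3T/2) ln T in the same reduced units. A grid search locates the maximum, which setting dµ/dT = 0 places analytically at T = 1/e; the grid resolution below is an illustrative choice.

```python
import math

def mu(t):
    # Classical ideal-gas chemical potential in reduced units
    # (k_B = h = rho/(2 pi m)^{3/2} = 1): mu = -(3T/2) ln T
    return -1.5 * t * math.log(t)

# Scan a fine grid of temperatures and pick the one maximizing mu.
ts = [i / 100000 for i in range(1, 150000)]
t_star = max(ts, key=mu)
print(round(t_star, 5), round(mu(t_star), 5))
```

One finds the maximum at T ≈ 0.368 = 1/e with µ ≈ 0.552 = 3/(2e), and µ < 0 for T > 1, consistent with the figure.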
\[
\widehat{\Omega}(\epsilon, V) = \frac{V}{h^3}\,
\frac{(2\pi m \epsilon)^{3/2}}{\Gamma\left( \frac{3}{2} + 1 \right)}, \tag{4.73}
\]
\[
\widehat{\Omega}(E, V, N) = \frac{1}{N!}\, \frac{V^N}{h^{3N}}\,
\frac{(2\pi m E)^{3N/2}}{\Gamma\left( \frac{3N}{2} + 1 \right)}.
\]
gives the probability of finding the system in an elemental volume \( d\vec{q} \) around the point \( \vec{q} \) at time t. Since the system has to be somewhere (for, otherwise, we would not be interested in it), we have the normalization
\[
\int \psi^\star(\vec{q}, t)\, \psi(\vec{q}, t)\, d\vec{q} = 1,
\]
where the integral is taken over the entire coordinate space, each of the x, y and z coordinates extending from −∞ to +∞.
A central problem in quantum mechanics is the calculation of \( \psi(\vec{q}, t) \) for the system of interest. We shall be interested in the time independent wave function \( \psi(\vec{q}) \) describing the stationary states of the system.

How do we get \( \psi(\vec{q}) \)?

Schrödinger gave a prescription: solve the equation \( H\psi(\vec{q}) = E\psi(\vec{q}) \), with appropriate boundary conditions. We call this the time independent Schrödinger equation. H is the Hamiltonian operator
\[
H = -\frac{\hbar^2}{2m} \nabla^2 + U(\vec{q}). \tag{4.74}
\]
The first operator on the right is kinetic energy and the second the potential energy.
E in the Schrödinger equation is a scalar ... a real number... called energy. It is an
eigenvalue of the Hamiltonian operator : we call it energy eigenvalue.
The Schrödinger equation is a partial differential equation. Once we impose boundary conditions on the solution, only certain discrete energies are permitted. We call these energy eigenvalues.
Energy Eigenvalue by Solving Schrödinger Equation Once we specify boundary
conditions, then a knowledge of the Hamiltonian is sufficient to determine its eigenvalues
and the corresponding eigenfunctions. There will be usually several eigenvalues and corre-
sponding eigenfunctions for a given system.
\[
L = n \times \frac{\lambda}{2} : n = 1, 2, \cdots \tag{4.77}
\]
\[
\lambda = \frac{2L}{n} : n = 1, 2, \cdots \tag{4.78}
\]
Substitute the above in the de Broglie relation
\[
p = \frac{h}{\lambda} = n\, \frac{h}{2L} : n = 1, 2, \cdots.
\]
This yields
\[
\epsilon_n = \frac{p^2}{2m} = \frac{h^2}{8 m L^2}\, n^2 : n = 1, 2, \cdots.
\]
Consider a particle in an L × L × L cube, a three dimensional infinite well. The energy of the system is given by
\[
\epsilon_{n_x, n_y, n_z} = \frac{h^2}{8 m L^2} \left( n_x^2 + n_y^2 + n_z^2 \right),
\]
where n_x = 1, 2, ⋯ ; n_y = 1, 2, ⋯ ; and n_z = 1, 2, ⋯.
\[
H = -\frac{\hbar^2}{2m} \frac{\partial^2}{\partial x^2}. \tag{4.75}
\]
Solve the one dimensional Schrödinger equation with the boundary condition that the wave function vanishes at the boundaries. Show that the energy eigenvalues are given by
\[
\epsilon_n = \frac{h^2}{8 m L^2}\, n^2; \quad n = 1, 2, \cdots \tag{4.76}
\]
The ground state is (n_x, n_y, n_z) = (1, 1, 1); it is non degenerate; the energy eigenvalue is
\[
\epsilon_{1,1,1} = \frac{3 h^2}{8 m L^2}. \tag{4.79}
\]
The first excited state is three-fold degenerate:
\[
\epsilon_{2,1,1} = \epsilon_{1,2,1} = \epsilon_{1,1,2} = \frac{3 h^2}{4 m L^2}. \tag{4.80}
\]
We start with
\[
\epsilon = \frac{h^2}{8 m L^2} \left( n_x^2 + n_y^2 + n_z^2 \right),
\]
\[
n_x^2 + n_y^2 + n_z^2 = \frac{8 m L^2 \epsilon}{h^2} = R^2. \tag{4.81}
\]
Count the states inside a sphere of radius R and take one-eighth of it, since only positive n_x, n_y, n_z are allowed. Let us denote this number by Ω̂(ε). We have,
\[
\widehat{\Omega}(\epsilon) = \frac{1}{8}\, \frac{4}{3} \pi R^3
= \frac{\pi}{6} \left( \frac{8 m L^2 \epsilon}{h^2} \right)^{3/2}. \tag{4.82}
\]
4.17. CHEMICAL POTENTIAL : TOY MODEL 63
\[
\widehat{\Omega}(\epsilon, V) = \frac{V}{h^3}\, \frac{\pi}{6}\, (8 m \epsilon)^{3/2}
= \frac{V}{h^3}\, \frac{4\pi}{3}\, (2 m \epsilon)^{3/2},
\]
\[
= \frac{V}{h^3}\, \frac{4\pi}{3}\, \frac{(2\pi m \epsilon)^{3/2}}{\pi^{3/2}}
= \frac{V}{h^3}\, \frac{(2\pi m \epsilon)^{3/2}}{(3/2)(1/2)\sqrt{\pi}},
\]
\[
= \frac{V}{h^3}\, \frac{(2\pi m \epsilon)^{3/2}}{(3/2)(1/2)\Gamma(1/2)}
= \frac{V}{h^3}\, \frac{(2\pi m \epsilon)^{3/2}}{\Gamma\left( \frac{3}{2} + 1 \right)}. \tag{4.83}
\]
The above is exactly the one we obtained by classical counting; see Eq. (4.73). Notice that in quantum counting of micro states the term h³ emerges naturally, while in classical counting it is put in by hand¹⁰.
The density of (energy) states is obtained by differentiating Ω̂(ε, V) with respect to the variable ε. We get
\[
g(\epsilon, V) = \frac{V}{h^3}\, \frac{\pi}{4}\, (8m)^{3/2}\, \epsilon^{1/2}. \tag{4.84}
\]
The important point is that the density of energy states is proportional to ε^{1/2}.
  Energy:   0      ε     2ε
            A      −      B
            B      −      A
            −    A, B     −

Table 4.1: Three micro states of a two-particle system with total energy 2ε
Now add a particle, labelled C, such that the energy of the system does not change. Let Ω̂(E = 2ε, N = 3) denote the number of micro states of the three-particle system with a total energy of E = 2ε. Table (4.2) lists the micro
states.
  Energy:   0      ε     2ε
          A, B     −      C
          B, C     −      A
          C, A     −      B
            A    B, C     −
            B    C, A     −
            C    A, B     −

Table 4.2: Six micro states of a three-particle system with total energy 2ε
We find Ω̂(E = 2ε, N = 3) = 6. The entropy is given by
\[
S(E = 2\epsilon, N = 3) = k_B \ln \widehat{\Omega}(E = 2\epsilon, N = 3) = k_B \ln(6).
\]
We find
\[
S(E = 2\epsilon, N = 3) > S(E = 2\epsilon, N = 2).
\]
Note that
\[
\mu = \left( \frac{\partial U}{\partial N} \right)_{S,V}.
\]
In other words, µ is the change in energy of the system when one particle is added in such a way that the entropy and volume of the system remain unchanged.
Let us now remove ε amount of energy from the three-particle system and count the number of micro states. Table (4.3) shows that the number of micro states is three.

  Energy:   0      ε
          A, B     C
          B, C     A
          C, A     B

Table 4.3: Three micro states of a three-particle system with total energy ε
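Tables 4.1–4.3 can be reproduced by enumerating all assignments of the labelled particles to the levels 0, ε and 2ε (energies measured in units of ε):

```python
from itertools import product

def count_microstates(particles, total, levels=(0, 1, 2)):
    # Count assignments of labelled (distinguishable) particles to
    # energy levels, in units of epsilon, with the given total energy.
    return sum(
        1 for assignment in product(levels, repeat=particles)
        if sum(assignment) == total
    )

print(count_microstates(2, 2))  # Table 4.1: two particles, E = 2 eps
print(count_microstates(3, 2))  # Table 4.2: three particles, E = 2 eps
print(count_microstates(3, 1))  # Table 4.3: three particles, E = eps
```

The counts 3, 6 and 3 match the three tables, confirming that adding particle C at fixed energy raised the entropy while removing ε of energy brought it back down to k_B ln 3.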
68 5. CLOSED SYSTEM : CANONICAL ENSEMBLE
Let P (k) denote the probability that the red die shows up k given the three
dice add to 6. Because of the condition imposed, the red die can be in any
one of the four states, {1, 2, 3, 4} only; and these four micro states are not
equally probable. The probabilities can be calculated as follows.
We find that for the three-dice system, there are 10 micro states with the
property that the three dice add to 6. These are listed in Table(1).
  No.   W   R   W        No.   W   R   W
   1    1   1   4         6    2   2   2
   2    2   1   3         7    3   2   1
   3    3   1   2         8    1   3   2
   4    4   1   1         9    2   3   1
   5    1   2   3        10    1   4   1
The ten micro states are equally probable. These are the micro states of
the universe - which consists of the system and its surroundings. The universe
constitutes an isolated system.
Of these ten micro states of the universe, there are four micro states for
which the system die shows up 1; therefore P (1) = 0.4. Similarly we can
calculate the other probabilities :
P (2) = 0.3; P (3) = 0.2; P (4) = 0.1; P (5) = P (6) = 0.0
The important point is that the micro states of the system are not equally
probable.
From the above toy model, we can say that if we consider the system and its
surroundings together to constitute the universe and demand that the universe
has a fixed energy, then the system will not be in its micro states with equal
probability.
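A short enumeration over all triples of dice confirms the conditional probabilities quoted above:

```python
from fractions import Fraction
from itertools import product

# All micro states (red, white1, white2) of three fair dice whose
# faces sum to 6 -- the micro states of the "universe".
universe = [
    (r, w1, w2)
    for r, w1, w2 in product(range(1, 7), repeat=3)
    if r + w1 + w2 == 6
]

# Conditional probability that the red die shows k, given the sum is 6.
P = {
    k: Fraction(sum(1 for r, _, _ in universe if r == k), len(universe))
    for k in range(1, 7)
}
print(len(universe), P[1], P[2], P[3], P[4], P[5], P[6])
```

There are 10 equally probable micro states of the universe, and the red die's states are weighted 0.4, 0.3, 0.2, 0.1, 0, 0 — not equally probable.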
What is the probability of a micro state of a closed system ? We shall
calculate the probability in the next section employing two different methods.
The first involves Taylor expansion of S(E). I learnt of this, from the book of
Balescu4 . The second is based on the method of most probable distribution,
described in several books5 .
⁴ R. Balescu, Equilibrium and Non-equilibrium Statistical Mechanics, Wiley (1975).
⁵ See e.g. R. K. Pathria, Statistical Mechanics, Second Edition, Butterworth-Heinemann (2001) p.45.
5.3. CANONICAL PARTITION FUNCTION 69
where β = 1/(k_B T), and the sum runs over all the micro states of a closed system at temperature T, volume V and number of particles N. Let Ω̂(E, V, N) denote the density of (energy) states. In other words, Ω̂(E, V, N) dE is the number of micro states having energy between E and E + dE. The canonical partition function, see Eq. (5.8), can be written as an integral over energy,
\[
Q(\beta, V, N) = \int_0^\infty dE\, \widehat{\Omega}(E, V, N)\, \exp(-\beta E). \tag{5.9}
\]
In the above, replace the integral over E by the value of the integrand evaluated at E = ⟨E⟩ = U. We get,
\[
Q = \widehat{\Omega}(E = U, V, N)\, \exp(-\beta U); \quad
\ln Q = \ln \widehat{\Omega}(U, V, N) - \beta U,
\]
\[
-k_B T \ln Q = U - T k_B \ln \widehat{\Omega}(U, V, N) = U - TS = F.
\]
⁷ Legendre Transform: Start with U(S, V, N). Concentrate on the dependence of U on S. Can this dependence be described in an alternate but equivalent way? Take the slope of the curve U(S) at S. We have
\[
T(S, V, N) = \left( \frac{\partial U}{\partial S} \right)_{V,N}.
\]
We can plot T against S; but then it will not uniquely correspond to the given curve U(S). All parallel curves in the U–S plane lead to the same T(S). It is the intercept that tells one curve from the other in the family. Let us denote the intercept by the symbol F. We have
\[
\frac{U(S, V, N) - F}{S} = T; \quad
F(T, V, N) = U(S, V, N) - TS; \quad
T = \left( \frac{\partial U}{\partial S} \right)_{V,N}.
\]
The equation for the micro canonical temperature is inverted and entropy is expressed as a function of T, V, and N. Employing the function S(T, V, N) we get U(T, V, N) and F(T, V, N). The above is called the Legendre transform: S transforms to the 'slope' T, and U(S) transforms to the intercept F(T). We call F(T, V, N) the Helmholtz free energy.
Example: Consider U expressed as a function of S, V, and N:
\[
U(S, V, N) = \alpha\, \frac{S^3}{N V}.
\]
ENTHALPY: H(S, P, N). Start with U(S, V, N). Carry out the Legendre transform of V → −P and U(S, V, N) → H(S, P, N):
\[
H(S, P, N) = U + PV; \quad
-P = \left( \frac{\partial U}{\partial V} \right)_{S,N}.
\]
We have,
\[
\langle E \rangle = \frac{\sum_i E_i\, e^{-\beta E_i}}{\sum_i e^{-\beta E_i}}
= \frac{1}{Q} \sum_i E_i\, e^{-\beta E_i}
= -\frac{1}{Q} \frac{\partial Q}{\partial \beta}
= -\frac{\partial \ln Q}{\partial \beta}. \tag{5.13}
\]
We identify ⟨E⟩ with the internal energy, usually denoted by the symbol U in thermodynamics.
We have,
\[
U = -\frac{1}{Q} \frac{\partial Q}{\partial \beta}, \tag{5.14}
\]
\[
\left( \frac{\partial U}{\partial \beta} \right)_V
= -\frac{1}{Q} \frac{\partial^2 Q}{\partial \beta^2}
+ \left( \frac{1}{Q} \frac{\partial Q}{\partial \beta} \right)^2
= -\left[ \langle E^2 \rangle - \langle E \rangle^2 \right] = -\sigma_E^2. \tag{5.15}
\]
Now write,
\[
\frac{\partial U}{\partial \beta}
= \frac{\partial U}{\partial T} \times \frac{\partial T}{\partial \beta}
= C_V \left( -k_B T^2 \right). \tag{5.16}
\]
We get the relation between the fluctuations of energy of an equilibrium system and the reversible heat required to raise the temperature of the system by one degree Kelvin:
\[
\sigma_E^2 = k_B T^2 C_V. \tag{5.17}
\]
The left hand side of the above equation represents the fluctuations of energy
when the system is in equilibrium. The right hand side is about how the system
would respond when you heat it8 . Note CV is the amount of reversible heat
you have to supply to the system at constant volume to raise its temperature
by one degree Kelvin. The equilibrium fluctuations in energy are related to
the linear response; i.e. the response of the system to small perturbation9 .
\[
\widehat{\Omega} = \frac{V^N}{N!}\, \frac{1}{h^{3N}}\,
\frac{(2\pi m E)^{3N/2}}{\Gamma\left( \frac{3N}{2} + 1 \right)}. \tag{5.19}
\]
Therefore the density of (energy) states is given by,
\[
g(E) = \frac{\partial \widehat{\Omega}}{\partial E}
= \frac{V^N}{N!}\, \frac{1}{h^{3N}}\,
\frac{(2\pi m)^{3N/2}}{\Gamma\left( \frac{3N}{2} + 1 \right)}\,
\frac{3N}{2}\, E^{\frac{3N}{2} - 1}
= \frac{V^N}{N!}\, \frac{1}{h^{3N}}\,
\frac{(2\pi m)^{3N/2}}{\Gamma\left( \frac{3N}{2} \right)}\,
E^{\frac{3N}{2} - 1}, \tag{5.20}
\]
where we have made use of the relation
\[
\Gamma\left( \frac{3N}{2} + 1 \right) = \frac{3N}{2}\, \Gamma\left( \frac{3N}{2} \right). \tag{5.21}
\]
The partition function is obtained as a "transform" of the density of states, where the variable E is transformed to the variable β:
\[
Q(\beta, V, N) = \frac{V^N}{N!}\, \frac{1}{h^{3N}}\,
\frac{(2\pi m)^{3N/2}}{\Gamma\left( \frac{3N}{2} \right)}
\int_0^\infty dE\, \exp(-\beta E)\, E^{\frac{3N}{2} - 1}. \tag{5.22}
\]
⁸ Notice that σ² is expressed in units of Joule². The quantity k_B T² is expressed in units of Joule-Kelvin. C_V is in Joule/Kelvin. Thus k_B T² C_V has units of Joule².
⁹ First order perturbation.
5.7. METHOD OF MOST PROBABLE DISTRIBUTION 75
The integral over E is readily evaluated:
\[
\int_0^\infty dE\, \exp(-\beta E)\, E^{\frac{3N}{2} - 1}
= \frac{\Gamma\left( \frac{3N}{2} \right)}{\beta^{3N/2}}. \tag{5.26}
\]
Substituting the above in the expression for the partition function we get,
\[
Q(T, V, N) = \frac{V^N}{N!}\, \frac{1}{h^{3N}}\, (2\pi m k_B T)^{3N/2}
= \frac{V^N}{N!}\, \frac{1}{\Lambda^{3N}}, \tag{5.27}
\]
where
\[
\Lambda = \frac{h}{\sqrt{2\pi m k_B T}}. \tag{5.28}
\]
First check that Λ has the dimension of length. Λ is called the thermal wavelength or quantum wavelength. It is approximately the de Broglie wavelength of a particle with energy p²/2m = k_B T. We have p = √(2mk_BT). Therefore
\[
\Lambda = \frac{h}{p} = \frac{h}{\sqrt{2 m k_B T}} \approx \frac{h}{\sqrt{2\pi m k_B T}}.
\]
A wave packet can be constructed by superimposing plane waves of wavelengths in the neighbourhood of Λ. Thus a wave is spatially localized in a volume of the order of Λ³. Consider a gas with number density ρ. The inter-particle distance is of the order of ρ^{−1/3}. Consider the situation ρΛ³ ≪ 1: the particles are far apart, the wave packets do not overlap, and a classical description will suffice. Quantum effects manifest when ρΛ³ ≫ 1, i.e. when the density is large and the temperature is low.
Aim :
To find the probability for the closed system to be in its micro state i.
First, we list down all the micro states of the system. Let us denote the
micro states as {1, 2, · · · }. Note that the macroscopic properties T , V , and
N are the same for all the micro states. In fact the system switches from one
micro state to another, incessantly. Let Ei denote the energy of the system
when it is in micro state i. The energy can vary from one micro state to
another.
To each cube, we can attach an index i. The index i denotes the micro
state of the closed system with fixed T , V and N. An ordered set of A indices
uniquely specifies a micro state of the universe.
Let us take an example. Let the micro states of the closed system be
denoted by the indices {1, 2, 3}. There are only three micro states. Let us
represent the isolated system by a big square and construct nine small squares,
each of which represents a member of the ensemble. Each square is attached
with an index which can be 1, 2 or 3. Thus we have a micro state of the
universe represented by
3 1 2
2 3 3
2 3 1
In the above micro state, there are two squares with index 1, three with
index 2 and four with index 3. Let {a1 = 2, a2 = 3, a3 = 4} be the occupation
number string of the micro state. There are several micro states having the
same occupation number string. I have given below a few of them.
Notice that all the micro states given above have the same occupation
number string {2, 3, 4}. How many micro states are there with this occupation
number string ? We have
\[
\widehat{\Omega}(2, 3, 4) = \frac{9!}{2!\, 3!\, 4!} = 1260 \tag{5.29}
\]
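The multinomial count is easy to confirm by brute force over all 3⁹ index strings:

```python
import math
from itertools import product

# Closed form: 9! / (2! 3! 4!)
closed = math.factorial(9) // (
    math.factorial(2) * math.factorial(3) * math.factorial(4)
)

# Brute force: count length-9 strings over {1, 2, 3} whose
# occupation numbers are (2, 3, 4).
brute = sum(
    1 for s in product((1, 2, 3), repeat=9)
    if s.count(1) == 2 and s.count(2) == 3 and s.count(3) == 4
)
print(closed, brute)
```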
V and N . Also other macroscopic properties defined as averages of a stochastic variable
e.g. energy are also the same for all the mental copies. But these mental copies will often
differ, one from the other, in their microscopic properties.
  1 3 3     2 3 3     1 2 1     1 1 2     1 2 3
  2 1 2     3 1 2     2 3 2     2 2 3     1 2 3
  3 2 3     3 2 1     2 3 2     3 3 3     2 3 3

Table 5.2: A few micro states with the same occupation number representation of (2, 3, 4). There are 1260 micro states with the same occupation number representation
I am not going to list all the 1260 micro states belonging to the occupation number string {2, 3, 4}.
Let me generalize and say that a string of occupation numbers is denoted
by the symbol ã = {a1 , a2 , · · · }, where a1 + a2 + · · · = A. We also have an
additional constraint namely a1 E1 + a2 E2 + · · · = E.
Let Ω̂(ã) = Ω̂(a₁, a₂, ⋯) denote the number of micro states of the universe belonging to the string ã. For a given string, we can define the probability for the closed system to be in its micro state indexed by i as
\[
p_i(\tilde{a}) = \frac{a_i(\tilde{a})}{A}. \tag{5.30}
\]
Note, the string ã = {a₁, a₂, ⋯} obeys the following constraints:
\[
\sum_i a_i(\tilde{a}) = A \quad \forall \text{ strings } \tilde{a}, \tag{5.31}
\]
\[
\sum_i a_i(\tilde{a})\, E_i = E \quad \forall \text{ strings } \tilde{a}. \tag{5.32}
\]
Note that the value of p_i varies from one string to another. It is reasonable to obtain the average value of p_i over all possible strings ã. We have
\[
P_i = \sum_{\tilde{a}} \left( \frac{a_i(\tilde{a})}{A} \right) \mathcal{P}(\tilde{a}), \tag{5.33}
\]
where
\[
\mathcal{P}(\tilde{a}) = \frac{\widehat{\Omega}(\tilde{a})}{\sum_{\tilde{a}} \widehat{\Omega}(\tilde{a})}. \tag{5.34}
\]
We are able to write the above because all the elements of an ensemble have the same probability, given by the inverse of the size of the ensemble. In the
5.8. LAGRANGE AND HIS METHOD 79
above,
\[
\widehat{\Omega}(\tilde{a}) = \frac{A!}{a_1!\, a_2! \cdots}. \tag{5.35}
\]
For large A, the sum over strings is dominated by the single most probable string ã⋆; then
\[
P_i = \frac{a_i(\tilde{a}^\star)}{A} = \frac{a_i^\star}{A}. \tag{5.36}
\]
Thus the problem reduces to finding that string ã⋆ for which Ω̂(ã) is a maximum. Of course there are two constraints on the string. They are
\[
\sum_j a_j(\tilde{a}) = A \quad \forall\, \tilde{a}; \tag{5.37}
\]
\[
\sum_j a_j(\tilde{a})\, E_j = E \quad \forall\, \tilde{a}. \tag{5.38}
\]
We write
\[
dh = \frac{\partial h}{\partial x} dx + \frac{\partial h}{\partial y} dy = 0, \tag{5.39}
\]
\[
\frac{\partial h}{\partial x} = 0, \tag{5.40}
\]
\[
\frac{\partial h}{\partial y} = 0. \tag{5.41}
\]
We have two equations and two unknowns. In principle we can solve the above
two equations and obtain (x⋆ , y ⋆) at which h is maximum.
Now imagine there is a road on the mountain which does not necessarily
pass through the peak of the mountain. If you are travelling on the road, then
what is the highest point you will pass through? In the equation
\[
dh = \frac{\partial h}{\partial x} dx + \frac{\partial h}{\partial y} dy = 0 \tag{5.42}
\]
the infinitesimals dx and dy are not independent. You can choose only one
of them independently. The other is determined by the constraint which says
that you have to be on the road.
Let the projection of the mountain-road on the plane be described by the curve
\[
g(x, y) = 0,
\]
so that
\[
\frac{\partial g}{\partial x} dx + \frac{\partial g}{\partial y} dy = 0. \tag{5.43}
\]
We then have,
\[
dh = \frac{\partial h}{\partial x} dx + \frac{\partial h}{\partial y} dy = 0 \tag{5.45}
\]
\[
= \frac{\partial h}{\partial x} dx
- \frac{\partial h}{\partial y}\, \frac{\partial g/\partial x}{\partial g/\partial y}\, dx = 0, \tag{5.46}
\]
\[
= \left( \frac{\partial h}{\partial x}
- \frac{\partial h/\partial y}{\partial g/\partial y}\, \frac{\partial g}{\partial x} \right) dx = 0. \tag{5.47}
\]
We start with
\[
df = \sum_{i=1}^{N} \frac{\partial f}{\partial x_i}\, dx_i = 0 \tag{5.52}
\]
for a maximum. In the set {dx₁, dx₂, ⋯ dx_μ, ⋯ dx_N}, not all are independent. They are related by the constraint
\[
\sum_{i=1}^{N} \frac{\partial g}{\partial x_i}\, dx_i = 0, \tag{5.53}
\]
where
\[
\lambda = \frac{\partial f / \partial x_\mu}{\partial g / \partial x_\mu}. \tag{5.56}
\]
There are only N − 1 independent values of dx_i: we have eliminated dx_μ, and instead we have the undetermined multiplier λ. Since dx_i : i = 1, N, i ≠ μ are all independent of each other, we can set each term in the sum to zero. Therefore
\[
\frac{\partial f}{\partial x_i} - \lambda \frac{\partial g}{\partial x_i} = 0 \quad \forall\, i \neq \mu. \tag{5.57}
\]
From the definition of λ we get
\[
\frac{\partial f}{\partial x_\mu} - \lambda \frac{\partial g}{\partial x_\mu} = 0. \tag{5.58}
\]
Thus we have a set of N equations,
\[
\frac{\partial f}{\partial x_i} - \lambda \frac{\partial g}{\partial x_i} = 0 \quad \forall\, i = 1, N. \tag{5.59}
\]
5.10. DERIVATION OF BOLTZMANN WEIGHT 83
There are N equations and N unknowns. In principle we can solve the equations and get
\[
x_i^\star \equiv x_i^\star(\lambda) \quad \forall\, i = 1, N,
\]
where the function f is a maximum under the constraint
\[
g(x_1, x_2, \cdots, x_N) = 0.
\]
The value of the undetermined multiplier λ can be obtained by substituting the solution in the constraint equation.
If we have more than one constraint, we introduce a separate Lagrange multiplier for each constraint. Let there be m ≤ N constraints, given by
\[
g_i(x_1, x_2, \cdots, x_N) = 0 \quad \forall\, i = 1, m.
\]
We introduce m Lagrange multipliers, λ_i : i = 1, m, and write
\[
\frac{\partial f}{\partial x_i}
- \lambda_1 \frac{\partial g_1}{\partial x_i}
- \lambda_2 \frac{\partial g_2}{\partial x_i}
- \cdots
- \lambda_m \frac{\partial g_m}{\partial x_i} = 0 \quad \forall\, i = 1, N. \tag{5.60}
\]
\[
\widehat{\Omega}(\tilde{a}) = \frac{A!}{a_1!\, a_2! \cdots} \tag{5.62}
\]
The two constraints are
\[
\sum_j a_j(\tilde{a}) = A, \tag{5.63}
\]
\[
\sum_j a_j(\tilde{a})\, E_j = E. \tag{5.64}
\]
We extremize the entropy given by ln Ω̂(ã):
\[
\ln \widehat{\Omega}(a_1, a_2, \cdots)
= \ln A! - \sum_j a_j \ln a_j + \sum_j a_j. \tag{5.65}
\]
\[
p_i = \frac{\exp(-\beta E_i)}{\sum_i \exp(-\beta E_i)} \tag{5.71}
\]
\[
\langle E \rangle = \sum_i E_i\, p_i
= \frac{\sum_i E_i \exp(-\beta E_i)}{\sum_i \exp(-\beta E_i)} \tag{5.72}
\]
\[
\widehat{\Omega}(a_1, a_2, \cdots) = \frac{A!}{a_1!\, a_2! \cdots} \tag{5.73}
\]
The aim is to find {a_i⋆ : i = 1, 2, ⋯} that maximizes Ω̂. For convenience we maximize ln Ω̂. Let us consider the limit a_i → ∞ ∀ i. Also consider the variables p_i = a_i/A. Then
\[
\ln \widehat{\Omega} = A \ln A - A - \sum_i a_i \ln a_i + \sum_i a_i \tag{5.74}
\]
\[
= \sum_i a_i \ln A - \sum_i a_i \ln a_i \tag{5.75}
\]
\[
= -\sum_i a_i \ln\left( \frac{a_i}{A} \right) \tag{5.76}
\]
\[
= -A \sum_i p_i \ln p_i. \tag{5.77}
\]
\[
\frac{\ln \widehat{\Omega}}{A} = -\sum_i p_i \ln p_i. \tag{5.78}
\]
The above is the entropy of one of the A closed systems constituting the universe. Thus −Σᵢ p_i ln p_i provides a natural formula for the entropy of a system whose micro states are not equi-probable.
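That the Boltzmann weights indeed maximize −Σ p_i ln p_i under the two constraints can be seen numerically. For a three-level system with equally spaced energies, the direction (1, −2, 1) changes neither the normalization nor the mean energy, so perturbing the Boltzmann distribution along it must only lower the entropy. The values β = 1 and E = (0, 1, 2) below are illustrative.

```python
import math

E = (0.0, 1.0, 2.0)
beta = 1.0
w = [math.exp(-beta * e) for e in E]
Z = sum(w)
p_star = [x / Z for x in w]          # Boltzmann distribution

def entropy(p):
    return -sum(x * math.log(x) for x in p if x > 0)

def perturbed(t):
    # (1, -2, 1) is orthogonal to both (1, 1, 1) and (0, 1, 2),
    # so this family satisfies both constraints for small t.
    return [p_star[0] + t, p_star[1] - 2 * t, p_star[2] + t]

s0 = entropy(p_star)
print(round(s0, 6))
for t in (-0.02, -0.01, 0.01, 0.02):
    assert entropy(perturbed(t)) < s0
```

Every constraint-preserving perturbation lowers the entropy, consistent with the method of the most probable distribution.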
Physicists would prefer to measure entropy in units of Joules per Kelvin. For, that is what they have learnt from Clausius, who defined
\[
dS = \frac{q}{T},
\]
5.13. MICROSCOPIC VIEW : HEAT AND WORK 87
This expression for entropy is natural for a closed system described by a canon-
ical ensemble. We call it Boltzmann-Gibbs-Shannon entropy.
where the superscript ⋆ in the second sum should remind us that all dpi s are
not independent and that they should add to zero.
In the first sum we change Ei by dEi ∀ i keeping pi ∀ i unchanged.
In the second sum we change pi by dpi ∀ i keeping Ei unchanged for all i
P
and ensuring i dpi = 0.
5.13.1 Work in Statistical Mechanics : W = Σᵢ pᵢ dEᵢ
We have
\[
\sum_i p_i\, dE_i
= \sum_i p_i\, \frac{\partial E_i}{\partial V}\, dV
= \frac{\partial \left( \sum_i p_i E_i \right)}{\partial V}\, dV
= \frac{\partial \langle E \rangle}{\partial V}\, dV
= \frac{\partial U}{\partial V}\, dV
= -P\, dV = W. \tag{5.81}
\]
5.13.2 Heat in Statistical Mechanics : q = Σᵢ Eᵢ dpᵢ
Method-1
We start with the first term on the right hand side of Eq. (5.80):
\[
S = -k_B \sum_i p_i \ln p_i, \tag{5.82}
\]
\[
dS = -k_B \sum_i^\star (1 + \ln p_i)\, dp_i, \tag{5.83}
\]
\[
T\, dS = \bar{d}q = -k_B T \sum_i^\star \ln p_i\, dp_i \tag{5.84}
\]
\[
= -k_B T \sum_i^\star (-\beta E_i - \ln Q)\, dp_i
\quad \text{since } p_i = \frac{e^{-\beta E_i}}{Q} \tag{5.85}
\]
\[
= \sum_i^\star E_i\, dp_i. \tag{5.86}
\]
5.14. ADIABATIC PROCESS - A MICROSCOPIC VIEW 89
Method-2
Alternately, we recognize that a change of entropy is brought about by a change of the probabilities of the microstates and vice versa, keeping their respective energies the same. Thus,
\[
\sum_i E_i\, dp_i = \sum_i E_i\, \frac{\partial p_i}{\partial S}\, dS \tag{5.87}
\]
\[
= \frac{\partial \left( \sum_i E_i p_i \right)}{\partial S}\, dS
= \frac{\partial \langle E \rangle}{\partial S}\, dS
= \frac{\partial U}{\partial S}\, dS = T\, dS = q. \tag{5.88}
\]
\[
\epsilon_i \sim \frac{1}{L^2} \tag{5.89}
\]
\[
\sim V^{-2/3} \tag{5.90}
\]
\[
\frac{\partial \epsilon_i}{\partial V} \sim V^{-5/3} \tag{5.91}
\]
\[
dU = \sum_i p_i\, \frac{\partial \epsilon_i}{\partial V}\, dV \tag{5.92}
\]
\[
-P\, dV \sim V^{-5/3}\, dV \tag{5.93}
\]
\[
Q(\beta, V, N) = \frac{V^N}{N!}\, \frac{1}{h^{3N}}
\int_{-\infty}^{+\infty} dp_1 \cdots \int_{-\infty}^{+\infty} dp_{3N}\,
\exp\left[ -\frac{\beta}{2m} \left( p_1^2 + p_2^2 + \cdots + p_{3N}^2 \right) \right] \tag{5.95}
\]
\[
= \frac{V^N}{N!}\, \frac{1}{h^{3N}}
\left[ \int_{-\infty}^{+\infty} dp\, \exp\left( -\frac{\beta}{2m}\, p^2 \right) \right]^{3N} \tag{5.96}
\]
\[
= \frac{V^N}{N!}\, \frac{1}{h^{3N}}
\left[ \int_{-\infty}^{+\infty} dp\, \exp\left( -\frac{1}{2}\, \frac{p^2}{m k_B T} \right) \right]^{3N} \tag{5.97}
\]
Consider the integral
\[
I = \int_{-\infty}^{+\infty} dp\, \exp\left( -\frac{1}{2}\, \frac{p^2}{m k_B T} \right). \tag{5.98}
\]
Since the integrand is an even function, we can write the above as
\[
I = 2 \int_0^\infty dp\, \exp\left( -\frac{1}{2}\, \frac{p^2}{m k_B T} \right). \tag{5.99}
\]
Let
\[
x = \frac{p^2}{2 m k_B T}. \tag{5.100}
\]
Therefore,
\[
dx = \frac{p}{m k_B T}\, dp, \tag{5.101}
\]
\[
dp = \sqrt{\frac{m k_B T}{2}}\, \frac{1}{x^{1/2}}\, dx. \tag{5.102}
\]
5.15. IDEAL GAS 91
\[
Q(\beta, V, N) = \frac{V^N}{N!}\, \frac{1}{h^{3N}}\, (2\pi m k_B T)^{3N/2} \tag{5.104}
\]
\[
Q(T, V, N) = \frac{V^N}{N!}\, \frac{1}{\Lambda^{3N}} \tag{5.105}
\]
\[
\Lambda(T) = \frac{h}{\sqrt{2\pi m k_B T}}. \tag{5.106}
\]
5.15.1 Energy
\[
\ln Q = N \ln\left( \frac{V}{N} \right) + N - 3N \ln(\Lambda) \tag{5.107}
\]
\[
-\langle E \rangle = \frac{\partial \ln Q}{\partial \beta}
= -3N\, \frac{\partial \ln \Lambda}{\partial \beta} \tag{5.108}
\]
\[
\ln \Lambda = \ln\left( h / \sqrt{2\pi m} \right) + \frac{1}{2} \ln \beta \tag{5.109}
\]
\[
U = \langle E \rangle = \frac{3 N k_B T}{2} \tag{5.110}
\]
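The last step can be checked numerically: only Λ depends on β in ln Q, and −∂ ln Q/∂β evaluates to 3N/(2β) = 3Nk_BT/2. The constants below (N, V, and the combination h/√(2πm)) are illustrative placeholders; the β-independent parts drop out of the derivative.

```python
import math

N = 10
H_CONST = 1.0   # plays the role of h / sqrt(2*pi*m); drops out of dU

def ln_lambda(beta):
    # ln(Lambda) = ln(h / sqrt(2 pi m)) + (1/2) ln(beta), Eq. (5.109)
    return math.log(H_CONST) + 0.5 * math.log(beta)

def ln_q(beta, V=5.0):
    # ln Q = N ln(V/N) + N - 3N ln(Lambda), Eq. (5.107)
    return N * math.log(V / N) + N - 3 * N * ln_lambda(beta)

beta = 2.0
db = 1e-6
u = -(ln_q(beta + db) - ln_q(beta - db)) / (2 * db)  # U = -d ln Q / d beta
print(round(u, 6), round(3 * N / (2 * beta), 6))
```

The numerical derivative agrees with 3N/(2β), i.e. U = 3Nk_BT/2.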
5.15.3 Entropy
The entropy of an ideal gas is obtained from the following relation:
\[
S = \frac{U - F}{T}, \tag{5.112}
\]
where
\[
U = \frac{3 N k_B T}{2}, \tag{5.113}
\]
\[
F = -k_B T \ln Q. \tag{5.114}
\]
We get,
\[
S(\Lambda, V, N) = \frac{5 N k_B}{2}
+ N k_B \ln\left( \frac{V}{N} \right) - 3 N k_B \ln \Lambda. \tag{5.115}
\]
Substituting in the above the expression for Λ we get,
\[
S(T, V, N) = N k_B \left[ \ln\left( \frac{V}{N} \right)
+ \frac{3}{2} \ln\left( \frac{2\pi m k_B T}{h^2} \right) + \frac{5}{2} \right] \tag{5.116}
\]
\[
dS = N k_B \left[ \frac{dV}{V} + \frac{3}{2}\, \frac{dT}{T} \right] \tag{5.118}
\]
For an adiabatic process dS = 0, which gives
\[
V T^{3/2} = \Theta_1, \tag{5.121}
\]
\[
T V^{2/3} = \Theta_2. \tag{5.122}
\]
• A heat bath transacts energy with the system. The temperature of the
heat bath does not change because of the transaction.
• A material (or particle) bath transacts matter (or particles) with the
system. The chemical potential of the material bath does not change
because of the transaction.
96 6. OPEN SYSTEM : GRAND CANONICAL ENSEMBLE
system is not isolated, its micro states are not equi-probable. Our aim is to
calculate the probability of a micro state of the open system.
Let us take the open system, its boundary and surroundings and construct a
universe, which constitutes an isolated system. We are interested in construct-
ing an isolated system because, we want to start with the only assumption we
make in statistical mechanics : all the micro states of an isolated system are
equally probable. We have called this the ergodic hypothesis.
Let 𝓔 denote the total energy of the universe; 𝓔 ≫ E, where E is the (average or typical) energy of the open system under study. Let 𝓝 denote the total number of particles in the universe; 𝓝 ≫ N, where N is the (average or typical) number of particles in the open system. 𝓥 is the total volume of the universe. The universe is characterized by 𝓔, 𝓥, and 𝓝; these are held at constant values.
We want to describe the open system in terms of its own micro states. Let
C be a micro state of the open system. Let
• E(C) be the energy of the open system when it is in its micro state C
and
• let N(C) be the number of particles in the open system when it is in its
micro state C.
When the open system is in micro state C the universe can be in any one
of its innumerable micro states3 such that the energy of the surroundings is
E − E(C) and the number of particles in the surroundings is N − N(C).
Let
\[
\Omega_C = \Omega\big( \mathcal{E} - E(C),\, \mathcal{N} - N(C) \big)
\]
denote the subset of micro states of the universe having the common property described below.
When the universe is in a micro state belonging to ΩC , then the
system is in the chosen micro state C and the surroundings can be
3
The picture I have is the following. I am visualizing a micro state of the isolated system
as consisting of two parts. One part holds the signature of the open system; the other holds
the signature of the surroundings. For example a string of positions and momenta of all
the particles in the isolated system defines a micro state. This string consists of two parts: the first part contains the string of positions and momenta of all the particles in the open system, and the second part contains the positions and momenta of all the particles in the surroundings. Since the system is open, the length of the system-string is a fluctuating quantity, and so is the length of the bath-string. However the string of the isolated system is of fixed length. I am neglecting those micro states of the isolated system which hold the signature of the interaction between the system and the surroundings at the boundaries.
6.1. WHAT IS AN OPEN SYSTEM ? 97
\[
dE = T\, dS - P\, dV + \mu\, dN \tag{6.3}
\]
\[
dS = \frac{1}{T}\, dE + \frac{P}{T}\, dV - \frac{\mu}{T}\, dN \tag{6.4}
\]
We have
\[
S \equiv S(E, V, N) \tag{6.5}
\]
\[
dS = \left( \frac{\partial S}{\partial E} \right)_{V,N} dE
+ \left( \frac{\partial S}{\partial V} \right)_{E,N} dV
+ \left( \frac{\partial S}{\partial N} \right)_{E,V} dN \tag{6.6}
\]
Comparing the coefficients of dE, dV and dN in equations (6.4) and (6.6), we get,
\[
\left( \frac{\partial S}{\partial E} \right)_{V,N} = \frac{1}{T}; \quad
\left( \frac{\partial S}{\partial V} \right)_{E,N} = \frac{P}{T}; \quad
\left( \frac{\partial S}{\partial N} \right)_{E,V} = -\frac{\mu}{T} \tag{6.7}
\]
T
Therefore,
\[
S\big( \mathcal{E} - E(C),\, \mathcal{N} - N(C) \big)
= S\big( \mathcal{E}, \mathcal{N} \big) - \frac{E(C)}{T} + \frac{\mu N(C)}{T}. \tag{6.8}
\]
We are able to write the above because of the postulate of ergodicity: all micro states of the universe (an isolated system) are equally probable. We have,
\[
P(C) = \frac{1}{\widehat{\Omega}_{\text{Total}}}
\exp\left[ \frac{1}{k_B}\, S\big( \mathcal{E} - E(C),\, \mathcal{N} - N(C) \big) \right]
\]
\[
= \frac{1}{\widehat{\Omega}_{\text{Total}}}
\exp\left[ \frac{S(\mathcal{E}, \mathcal{N})}{k_B}
- \frac{E(C)}{k_B T} + \frac{\mu N(C)}{k_B T} \right]
= \alpha\, \exp\left[ -\frac{E(C)}{k_B T} + \frac{\mu N(C)}{k_B T} \right].
\]
In the above, the constant α can be determined by the normalization condition, \( \sum_C P(C) = 1 \), where the sum runs over all the micro states of the open system.
We have,
\[
P(C) = \frac{1}{\mathcal{Q}} \exp\big( -\beta \left[ E(C) - \mu N(C) \right] \big), \tag{6.11}
\]
where the grand canonical partition function is given by
\[
\mathcal{Q}(T, V, \mu) = \sum_C \exp\big( -\beta \left[ E(C) - \mu N(C) \right] \big). \tag{6.12}
\]
The fugacity is given by λ = exp(βμ); then we can write the grand canonical partition function as,
\[
\mathcal{Q}(T, V, \lambda) = \sum_C \lambda^{N(C)} \exp[ -\beta E(C) ]. \tag{6.13}
\]
• Collect all the micro states of a grand canonical ensemble with a fixed energy and a fixed number of particles. These micro states constitute a micro canonical ensemble.
\[
Q(T, V, N) = Q_N(T, V) = \frac{Q_1^N}{N!}. \tag{6.15}
\]
Therefore,
\[
\mathcal{Q}(T, V, \mu) = \sum_{N=0}^{\infty} \frac{\lambda^N Q_1^N}{N!}
= \exp( \lambda Q_1 ). \tag{6.16}
\]
G = −P V = −kB T ln Q.
6.2.1 G = −kB T ln Q
Method-1
We follow the same method we employed for establishing the connection
between Helmholtz free energy and canonical partition function. We
have,
\[
\mathcal{Q}(T, V, \lambda) = \sum_i \exp\left[ -\beta \left( E_i - \mu N_i \right) \right], \tag{6.19}
\]
where the sum runs over all the micro states i of the open system; E_i is the energy of the system when in micro state i, and N_i is the number of particles in the system when in its micro state i.
We replace the sum over micro states by a sum over energy and number of particles. Let g(E, N) denote the density of states. We have then,
\[
\mathcal{Q}(T, V, \mu) = \int dE \int dN\, g(E, N)\, \exp[ -\beta (E - \mu N) ]. \tag{6.20}
\]
Replacing the integrand by its value at the maximum, E = ⟨E⟩ and N = ⟨N⟩, we get
\[
-k_B T \ln \mathcal{Q} = E - TS - \mu N = G. \tag{6.23}
\]
Method-2
Alternately, we start with \( \mathcal{Q} = \sum_{N'} \lambda^{N'} Q(T, V, N') \). In principle, the number of particles in an equilibrium open system is not a fixed number; it fluctuates from one micro state to another. However the fluctuations are very small; it can be shown that the relative fluctuations are inversely proportional to the size of the system. In the above expression for 𝒬, only one term contributes overwhelmingly to the sum over N′. Let the value of N′ for which λ^{N′} Q(T, V, N′) is maximum be N. Hence the sum over N′ can be replaced by a single entry with N′ = N:
\[
\mathcal{Q}(T, V, \mu) = \lambda^N Q(T, V, N), \tag{6.24}
\]
\[
\ln \mathcal{Q} = \frac{\mu N}{k_B T} + \ln Q(T, V, N), \tag{6.25}
\]
\[
k_B T \ln \mathcal{Q} = \mu N + k_B T \ln Q(T, V, N). \tag{6.26}
\]
While studying canonical ensembles we have shown that F(T, V, N) = −k_B T ln Q(T, V, N). Therefore we can write the above equation as,
\[
k_B T \ln \mathcal{Q} = \mu N - F(T, V, N), \tag{6.27}
\]
\[
-k_B T \ln \mathcal{Q} = F - \mu N = U - TS - \mu N = G. \tag{6.28}
\]
Recall the discussions on Legendre Transform : We start with U ≡ U(S, V, N).
Transform S in favour of the "slope" T (partial derivative of U with respect
to S). We get the "intercept" F (T, V, N) as U − T S. Let us carry out one
more transform : N → µ. i.e. transform N in favour of the slope µ (partial
derivative of U with respect to N); µ is the chemical potential. We get the
"intercept" G(T, V, µ) - the grand potential. We have G(T, V, µ) = U −T S−µN.
Thus we have, G(T, V, µ) = −kB T ln Q(T, V, µ)
102 6. OPEN SYSTEM : GRAND CANONICAL ENSEMBLE
\[
S\, \frac{\partial U}{\partial (\lambda S)}
+ V\, \frac{\partial U}{\partial (\lambda V)}
+ N\, \frac{\partial U}{\partial (\lambda N)} = U(S, V, N) \tag{6.30}
\]
Setting λ = 1,
\[
S\, \frac{\partial U}{\partial S}
+ V\, \frac{\partial U}{\partial V}
+ N\, \frac{\partial U}{\partial N} = U(S, V, N), \tag{6.31}
\]
\[
TS - PV + \mu N = U. \tag{6.32}
\]
⁵ Donald McQuarrie, Statistical Mechanics, Harper and Row (1976).
⁶ Do not confuse λ here with the fugacity introduced earlier. Unfortunately I have used the same symbol; you should understand the meaning of λ from the context in which it is used.
6.3. STATISTICS OF NUMBER OF PARTICLES 103
\[
PV = k_B T \ln \mathcal{Q}(T, V, \mu) \tag{6.33}
\]
\[
U = TS - PV + \mu N, \tag{6.34}
\]
\[
dU = T\, dS - P\, dV + \mu\, dN + S\, dT - V\, dP + N\, d\mu. \tag{6.35}
\]
Thus the average number of particles in an open system can be formally expressed as,
\[
\langle N \rangle = \lambda\, \frac{1}{\mathcal{Q}}\, \frac{\partial \mathcal{Q}}{\partial \lambda}
= \lambda\, \frac{\partial \ln \mathcal{Q}}{\partial \lambda}. \tag{6.38}
\]
$P(N)$ thus is a Poisson distribution:

$$ P(N) = \exp(-\zeta)\, \frac{\zeta^N}{N!} $$

For a Poisson distribution the mean equals the variance. Therefore

$$ \sigma_N^2 = \langle N^2 \rangle - \langle N \rangle^2 = \langle N \rangle = \zeta = \lambda Q_1 . \tag{6.43} $$
In the above,
$E(C)$ denotes the energy of the open system when in micro state $C$;
$N(C)$ denotes the number of particles of the open system when in micro state $C$.
Let us now take the partial derivative of all the terms in the above equation,
with respect to the variable µ, keeping the temperature and volume constant.
We have,
$$ \left(\frac{\partial \mathcal{Q}}{\partial \mu}\right)_{T,V} = \sum_{C} \beta N(C) \exp\left[-\beta\{E(C) - \mu N(C)\}\right] = \beta\, \langle N \rangle\, \mathcal{Q}(T,V,\mu) \tag{6.45} $$
$$ \left(\frac{\partial^2 \mathcal{Q}}{\partial \mu^2}\right)_{T,V} = \beta \left[ \langle N \rangle \left(\frac{\partial \mathcal{Q}}{\partial \mu}\right)_{T,V} + \mathcal{Q} \left(\frac{\partial \langle N \rangle}{\partial \mu}\right)_{T,V} \right] \tag{6.46} $$
7. The left hand side of the above equation equals $\beta^2 \langle N^2 \rangle \mathcal{Q}$ :
$$ \left(\frac{\partial \mathcal{Q}}{\partial \mu}\right)_{T,V} = \sum_{C} \beta N(C) \exp\left[-\beta\{E(C) - \mu N(C)\}\right] $$
$$ \left(\frac{\partial^2 \mathcal{Q}}{\partial \mu^2}\right)_{T,V} = \sum_{C} \beta^2 [N(C)]^2 \exp\left[-\beta\{E(C) - \mu N(C)\}\right] = \beta^2 \langle N^2 \rangle\, \mathcal{Q} . $$
$$ \sigma^2 = \langle N^2 \rangle - \langle N \rangle^2 = k_B T \left(\frac{\partial \langle N \rangle}{\partial \mu}\right)_{T,V} \tag{6.48} $$

Substituting in the above $\dfrac{\partial \langle N \rangle}{\partial \mu} = \lambda \beta Q_1$, we get,

$$ \sigma_N^2 = k_B T\, Q_1\, \beta \lambda \tag{6.51} $$
$$ \quad\;\, = \lambda Q_1 = \langle N \rangle \tag{6.52} $$
Writing $\langle N \rangle = V/v$, where $v$ is the volume per particle,

$$ \frac{\partial \langle N \rangle}{\partial \mu} = -\frac{V}{v^2} \left(\frac{\partial v}{\partial \mu}\right)_{T,V} \tag{6.56} $$
$$ \qquad\quad\;\; = -\frac{\langle N \rangle^2}{V} \left(\frac{\partial v}{\partial \mu}\right)_{T,V} \tag{6.57} $$

Therefore,

$$ \left(\frac{\partial v}{\partial \mu}\right)_{T,V} = \frac{\langle N \rangle}{V} \left(\frac{\partial v}{\partial P}\right)_{T,V} = \frac{1}{v} \left(\frac{\partial v}{\partial P}\right)_{T,V} = -k_T \tag{6.61} $$
9. The Gibbs-Duhem relation reads $\langle N \rangle\, d\mu = V\, dP - S\, dT$. At constant temperature, we have $\langle N \rangle\, d\mu = V\, dP$,
$$ \left(\frac{\partial P}{\partial \mu}\right)_{T} = \frac{\langle N \rangle}{V} \tag{6.59} $$
Finally we get,

$$ \left(\frac{\partial \langle N \rangle}{\partial \mu}\right)_{T,V} = k_T\, \frac{\langle N \rangle^2}{V} \tag{6.62} $$
$$ \sigma^2 = k_B T \left(\frac{\partial \langle N \rangle}{\partial \mu}\right)_{T,V} \tag{6.63} $$
$$ \quad\, = k_B T\, k_T\, \frac{\langle N \rangle^2}{V} \tag{6.64} $$
$$ \frac{\sigma^2}{\langle N \rangle^2} = \frac{k_B T}{V}\, k_T \tag{6.65} $$
10. The relative fluctuations of energy in a canonical ensemble are proportional to the heat capacity at constant volume. Hence we expect the heat capacity to be positive.
Alternate Derivation of the Relation $\sigma_N^2 / \langle N \rangle^2 = k_B T\, k_T / V$

In an earlier lecture I had derived a relation between the number fluctuation and the isothermal compressibility. I had followed closely R. K. Pathria, Statistical Mechanics, Second Edition, Butterworth-Heinemann (1996). I am giving below an alternate derivation.
Start with a fluctuation-dissipation relation 11 derived earlier,
$$ \sigma^2 = k_B T \left(\frac{\partial \langle N \rangle}{\partial \mu}\right)_{V,T} \tag{6.66} $$

From the Gibbs-Duhem relation,

$$ \left(\frac{\partial \mu}{\partial P}\right)_{T} = \frac{V}{\langle N \rangle} \tag{6.71} $$
Let us now define $\rho = \dfrac{\langle N \rangle}{V}$, which denotes the particle density: the number of particles per unit volume. We have,

$$ \left(\frac{\partial P}{\partial \langle N \rangle}\right)_{V,T} = \left(\frac{\partial P}{\partial (\rho V)}\right)_{V,T} = \frac{1}{V} \left(\frac{\partial P}{\partial \rho}\right)_{V,T} \tag{6.72} $$
11. relating number fluctuations to the response to small changes in the chemical potential.
12. Gibbs and Duhem told us that the three intensive properties $T$, $P$, and $\mu$ are not all independent. Only two of them are independent; the third is automatically fixed by the other two.
The density can be changed either by changing hNi and/or V . Here we change
ρ infinitesimally, keeping V constant. As a result the pressure changes in-
finitesimally. Let me repeat : both these changes happen at constant V and
T.
As far as P is concerned, it does not care whether ρ has changed by change
of hNi or of V . Hence it is legitimate to write
$$ \left(\frac{\partial P}{\partial \rho}\right)_{V,T} = \left(\frac{\partial P}{\partial (\langle N \rangle / V)}\right)_{\langle N \rangle,T} \tag{6.73} $$
$$ \qquad\quad = -\frac{V^2}{\langle N \rangle} \left(\frac{\partial P}{\partial V}\right)_{\langle N \rangle,T} \tag{6.74} $$
Thus we get,

$$ \left(\frac{\partial \mu}{\partial \langle N \rangle}\right)_{V,T} = \frac{V}{\langle N \rangle} \left(\frac{\partial P}{\partial \langle N \rangle}\right)_{V,T} = \frac{1}{\langle N \rangle} \left(\frac{\partial P}{\partial \rho}\right)_{V,T} = -\frac{V^2}{\langle N \rangle^2} \left(\frac{\partial P}{\partial V}\right)_{\langle N \rangle,T} \tag{6.75} $$

Then we get,
" # !
2 hNi2 ∂V
σN = kB T − 2
V ∂P T,hN i
hNi2
= kB T kT
V
2
σN kB T
2
= kT (6.77)
hNi V
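For the classical ideal gas, $PV = \langle N \rangle k_B T$ gives $k_T = 1/P$, and the relative fluctuation formula collapses to $\sigma_N^2/\langle N \rangle^2 = 1/\langle N \rangle$. A minimal numerical illustration; the values of $T$, $V$, and $N$ below are assumptions chosen only for the sketch:

```python
kB = 1.380649e-23   # J/K
T = 300.0           # K (illustrative)
V = 1.0e-3          # m^3 (illustrative)
N = 2.5e22          # particle number (illustrative)

P = N * kB * T / V            # ideal-gas equation of state
kT = 1.0 / P                  # isothermal compressibility of the ideal gas
rel_fluct = kB * T * kT / V   # sigma^2/<N>^2 from Eq. (6.77)

# For the ideal gas this is exactly 1/<N>: relative fluctuations vanish
# as the system grows, as stated at the start of this section.
assert abs(rel_fluct * N - 1.0) < 1e-12
```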
7. Quantum Statistics
7.1 Occupation Number Representation
Let {1, 2, · · · } label the single-particle quantum states of a system.
Let {ǫi : i = 1, 2, · · · } denote the corresponding energies.
Notice that there can be several quantum states with the same energy.
We have $N$ non-interacting quantum particles occupying the single-particle quantum states. A micro state of the macroscopic system of $N$ (quantum) particles in a volume $V$ is uniquely specified by a string of numbers $\{n_1, n_2, \cdots, n_i, \cdots\}$, where $n_i$ is the number of particles in the single-particle quantum state $i$. Thus, a string of occupation numbers uniquely describes a micro state of a quantum system¹. Note that such a string should obey the constraint $n_1 + n_2 + \cdots = N$. The energy of a micro state is $n_1\epsilon_1 + n_2\epsilon_2 + \cdots$.
The canonical partition function can now be written as
$$ Q(T,V,N) = \sum^{\star}_{\{n_1, n_2, \cdots\}} \exp\left[-\beta\left(n_1\epsilon_1 + n_2\epsilon_2 + \cdots\right)\right] \tag{7.1} $$

where the sum runs over all possible micro states, i.e. all possible strings of occupation numbers obeying the constraint $\sum_i n_i = N$. To remind us of this constraint I have put a star as a superscript to the summation sign.
1. For classical particles, the same string of occupation numbers represents a collection of $N!/\prod_i n_i!$ micro states.
112 7. QUANTUM STATISTICS
2. We have already seen that a partition function is a transform. We start with the density of states $\widehat{\Omega}(E)$ and transform the variable $E$ in favour of $\beta = 1/[k_B T]$ to get the canonical partition function:
$$ Q(\beta, V, N) = \int_0^{\infty} dE\; \widehat{\Omega}(E, V, N) \exp(-\beta E) . $$
We can consider the grand canonical partition function as a transform of the canonical partition function, with the variable $N$ transformed to the fugacity $\lambda$ (or the chemical potential $\mu = k_B T \ln \lambda$).
7.2. OPEN SYSTEM AND Q(T, V, µ) 113
Let $n_1$ and $n_2$ denote two non-negative integers; each of them can take values $0, 1, 2, \cdots$. Consider first the restricted sum:

$$ I_1 = \sum_{N=0}^{\infty}\; \sum^{\star}_{\{n_1, n_2\}} x_1^{n_1} x_2^{n_2} \tag{7.4} $$

where the star over the summation sign reminds us of the restriction $n_1 + n_2 = N$. Thus we get Table (1).
N    n1   n2   x1^{n1} x2^{n2}
0    0    0    1
1    1    0    x1
     0    1    x2
2    2    0    x1^2
     0    2    x2^2
     1    1    x1 x2
3    3    0    x1^3
     2    1    x1^2 x2
     1    2    x1 x2^2
     0    3    x2^3
4    ...  ...  ...
     2    2    x1^2 x2^2
     ...  ...  ...
5    ...  ...  ...

Table 1. Terms in the restricted sum
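Accumulating the restricted sums over all $N$ rebuilds the unrestricted double sum, which factorizes into two geometric series. A short Python check of this bookkeeping; the values of $x_1$, $x_2$ and the truncation length are illustrative assumptions:

```python
x1, x2 = 0.3, 0.5   # illustrative values with |x| < 1
nmax = 200          # truncation; the neglected terms are geometrically small

total = 0.0
for N in range(2 * nmax + 1):           # organise terms by N = n1 + n2
    for n1 in range(N + 1):
        n2 = N - n1
        if n1 <= nmax and n2 <= nmax:   # stay within the truncation
            total += x1 ** n1 * x2 ** n2

# I1 = sum over all (n1, n2) = 1/((1 - x1)(1 - x2))
exact = 1.0 / ((1.0 - x1) * (1.0 - x2))
assert abs(total - exact) < 1e-9
```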
$$ \widehat{\Omega}(n_1, n_2, \cdots) = \frac{N!}{n_1!\, n_2! \cdots} \tag{7.10} $$
We then have

$$ \mathcal{Q}_{CS}(T,V,\mu) = \sum_{N=0}^{\infty} \lambda^N \sum^{\star}_{\{n_1,n_2,\cdots\}} \frac{N!}{n_1!\, n_2! \cdots}\, [\exp(-\beta\epsilon_1)]^{n_1} \times [\exp(-\beta\epsilon_2)]^{n_2} \times \cdots $$
$$ \qquad\;\; = \sum_{N=0}^{\infty} \lambda^N \left[\sum_i \exp(-\beta\epsilon_i)\right]^N = \sum_{N=0}^{\infty} \lambda^N\, [Q_1(T,V)]^N \tag{7.11} $$
We have already seen that the partition function for classical distinguish-
able non-interacting point particles leads to an entropy which is not extensive.
This is called Gibbs’ paradox. To take care of this, Boltzmann asked us to
divide the number of micro states by N!, saying that the particles are indis-
tinguishable.
Formally we have,

$$ \mathcal{Q}_{MB} = \sum_{N=0}^{\infty} \lambda^N \sum^{\star}_{\{n_1,n_2,\cdots\}} \frac{1}{n_1!\, n_2! \cdots}\, \exp[-\beta(n_1\epsilon_1 + n_2\epsilon_2 + \cdots)] $$
$$ \quad\; = \sum_{N=0}^{\infty} \sum^{\star}_{\{n_1,n_2,\cdots\}} \frac{[\lambda \exp(-\beta\epsilon_1)]^{n_1}}{n_1!} \times \frac{[\lambda \exp(-\beta\epsilon_2)]^{n_2}}{n_2!} \times \cdots $$
$$ \quad\; = \sum_{N=0}^{\infty} \sum^{\star}_{\{n_1,n_2,\cdots\}} \frac{[\exp\{-\beta(\epsilon_1 - \mu)\}]^{n_1}}{n_1!} \times \frac{[\exp\{-\beta(\epsilon_2 - \mu)\}]^{n_2}}{n_2!} \times \cdots $$
$$ \quad\; = \sum_{N=0}^{\infty} \sum^{\star}_{\{n_1,n_2,\cdots\}} \frac{x_1^{n_1}}{n_1!}\, \frac{x_2^{n_2}}{n_2!} \cdots $$
$$ \quad\; = \left(\sum_{n=0}^{\infty} \frac{x_1^n}{n!}\right) \left(\sum_{n=0}^{\infty} \frac{x_2^n}{n!}\right) \cdots = \prod_i \exp(x_i) $$
$$ \quad\; = \prod_i \exp[\lambda \exp(-\beta\epsilon_i)] = \prod_i \exp[\exp\{-\beta(\epsilon_i - \mu)\}] \tag{7.12} $$
We can also express the grand canonical partition function for the classical indistinguishable ideal gas as,

$$ \mathcal{Q}_{MB} = \exp(x_1)\exp(x_2)\cdots = \exp\left(\sum_i x_i\right) = \exp\left(\sum_i \exp[-\beta(\epsilon_i - \mu)]\right) = \exp\left(\sum_i \lambda \exp(-\beta\epsilon_i)\right) $$

$Q_{MB}(T,V,N) \to \mathcal{Q}_{MB}(T,V,\mu)$
We could have obtained the above in a simpler manner, by recognising that $Q_{MB}(T,V,N) = [Q_{MB}(T,V,N=1)]^N / N!$. Then we have,

$$ \mathcal{Q}_{MB}(T,V,\lambda) = \sum_{N=0}^{\infty} \lambda^N\, \frac{[Q_{MB}(T,V,N=1)]^N}{N!} = \exp[\lambda\, Q_{MB}(T,V,N=1)] $$
$\mathcal{Q}_{MB}(T,V,\mu) \to Q_{MB}(T,V,N)$

We start with $\mathcal{Q}_{MB}(T,V,\mu) = \exp[\lambda\, Q_{MB}(T,V,N=1)]$. Taylor expanding the exponential,

$$ \mathcal{Q}_{MB}(T,V,\mu) = \sum_{N=0}^{\infty} \lambda^N\, \frac{Q_{MB}^N(T,V,N=1)}{N!} \tag{7.16} $$
$$ G(T,V,\mu) = U - TS - \mu N \tag{7.19} $$
$$ T = \left(\frac{\partial U}{\partial S}\right)_{V,N} ; \qquad \mu = \left(\frac{\partial U}{\partial N}\right)_{S,V} \tag{7.20} $$
$$ P = -\left(\frac{\partial G}{\partial V}\right)_{T,\mu} ; \qquad S = -\left(\frac{\partial G}{\partial T}\right)_{V,\mu} ; \qquad N = -\left(\frac{\partial G}{\partial \mu}\right)_{T,V} \tag{7.21} $$

$$ G(T,V,\mu) = -k_B T \ln \mathcal{Q} = \begin{cases} -k_B T \displaystyle\sum_i \exp[-\beta(\epsilon_i - \mu)] & \text{Maxwell-Boltzmann} \\[2mm] \;\;\; k_B T \displaystyle\sum_i \ln\left[1 - \exp\{-\beta(\epsilon_i - \mu)\}\right] & \text{Bose-Einstein} \\[2mm] -k_B T \displaystyle\sum_i \ln\left[1 + \exp\{-\beta(\epsilon_i - \mu)\}\right] & \text{Fermi-Dirac} \end{cases} \tag{7.22} $$
7.4. AVERAGE NUMBER OF PARTICLES, hNi 119
$$ \mathcal{Q}(T,V,\mu) = \prod_i \exp\left[\exp\{-\beta(\epsilon_i - \mu)\}\right] \tag{7.23} $$
$$ G(T,V,\mu) = -k_B T \ln \mathcal{Q} = -k_B T \sum_i \exp[-\beta(\epsilon_i - \mu)] \tag{7.24} $$
$$ \left(\frac{\partial G}{\partial \mu}\right)_{T,V} = -\sum_i \exp[-\beta(\epsilon_i - \mu)] \tag{7.25} $$
$$ \langle N \rangle = -\left(\frac{\partial G}{\partial \mu}\right)_{T,V} = \sum_i \exp[-\beta(\epsilon_i - \mu)] = \lambda \sum_i \exp(-\beta\epsilon_i) = \lambda\, Q_1(T,V) \tag{7.26} $$
$$ \mathcal{Q}(T,V,\mu) = \sum_{N=0}^{\infty} \lambda^N Q(T,V,N) = \sum_{N=0}^{\infty} \lambda^N\, \frac{Q_1^N}{N!} = \sum_{N=0}^{\infty} \exp(\beta\mu N)\, \frac{Q_1^N}{N!} = \exp(\lambda Q_1) \tag{7.27} $$
$$ \mathcal{Q}(T,V,\mu) = \prod_i \frac{1}{1 - \exp[-\beta(\epsilon_i - \mu)]} \tag{7.30} $$
$$ \ln \mathcal{Q} = -\sum_i \ln\left[1 - \exp\{-\beta(\epsilon_i - \mu)\}\right] \tag{7.31} $$
$$ \langle N \rangle = -\left(\frac{\partial G}{\partial \mu}\right)_{T,V} = \sum_i \frac{\exp[-\beta(\epsilon_i - \mu)]}{1 - \exp[-\beta(\epsilon_i - \mu)]} \tag{7.32} $$

$$ \mathcal{Q} = \prod_i \left(1 + \exp[-\beta(\epsilon_i - \mu)]\right) \tag{7.33} $$
$$ G = -k_B T \ln \mathcal{Q} = -k_B T \sum_i \ln\left[1 + \exp\{-\beta(\epsilon_i - \mu)\}\right] \tag{7.34} $$
$$ \langle N \rangle = -\left(\frac{\partial G}{\partial \mu}\right)_{T,V} = \sum_i \frac{\exp[-\beta(\epsilon_i - \mu)]}{1 + \exp[-\beta(\epsilon_i - \mu)]} \tag{7.35} $$
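Equations (7.32) and (7.35) can be evaluated for any assumed spectrum. The sketch below uses a handful of illustrative levels with $\beta = 1$ and $\mu = -1$ (all numbers are assumptions) and checks that Fermi-Dirac occupancies are suppressed relative to Bose-Einstein ones:

```python
import math

eps = [0.0, 0.5, 1.0, 1.5, 2.0]   # illustrative single-particle energies
beta, mu = 1.0, -1.0              # illustrative temperature and chemical potential

def total_number(a):
    """<N> = sum_i 1/(exp[beta(eps_i - mu)] + a); a = +1 FD, a = -1 BE."""
    return sum(1.0 / (math.exp(beta * (e - mu)) + a) for e in eps)

N_FD = total_number(+1.0)   # Fermi-Dirac, Eq. (7.35)
N_BE = total_number(-1.0)   # Bose-Einstein, Eq. (7.32)

assert N_FD < N_BE          # the -1 in the BE denominator enhances occupancy
assert N_FD < len(eps)      # each FD occupancy is bounded by 1
```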
$$ Q(T,V,N) = \frac{V^N}{N!}\, \frac{1}{\Lambda^{3N}} \tag{7.38} $$

where the thermal wave length is given by

$$ \Lambda = \frac{h}{\sqrt{2\pi m k_B T}} \tag{7.39} $$
The free energy is given by
When you take the partial derivative of the free energy with respect to $N$, keeping temperature and volume constant, you get the chemical potential⁵:

$$ \mu = \left(\frac{\partial F}{\partial N}\right)_{T,V} = k_B T\, [3 \ln \Lambda - \ln V + 1 + \ln N - 1] $$
$$ \quad = k_B T \left[\ln \Lambda^3 + \ln(N/V)\right] = k_B T \ln(\rho \Lambda^3) $$
$$ \frac{\mu}{k_B T} = \ln(\rho \Lambda^3) \tag{7.46} $$

5. In thermodynamics we have
$$ F = U - TS \tag{7.42} $$
$$ dF = dU - T\,dS - S\,dT \tag{7.43} $$
where ρ = N/V is the number density, i.e. number of particles per unit
volume. Thus we get,
$$ \eta_i = \frac{\epsilon_i}{k_B T} - \ln(\rho \Lambda^3) \tag{7.47} $$
We have earlier shown that classical limit obtains when ρΛ3 → 0. Thus in
this limit, ηi → ∞ or exp(−ηi ) → 0. Let xi = exp(−ηi ). Let us express the
grand canonical partition function for the three statistics in terms of the small
parameter xi .
$$ \mathcal{Q} = \begin{cases} \displaystyle\prod_i \exp(x_i) & \text{Maxwell-Boltzmann} \\[2mm] \displaystyle\prod_i \frac{1}{1 - x_i} & \text{Bose-Einstein} \\[2mm] \displaystyle\prod_i \left[1 + x_i\right] & \text{Fermi-Dirac} \end{cases} \tag{7.48} $$
Therefore,
$$ \mu = \left(\frac{\partial F}{\partial N}\right)_{T,V} \tag{7.45} $$
7.5. ALL STATISTICS ARE SAME AT HIGH T AND/OR LOW ρ 123
In the limit $x_i \to 0$, the grand canonical partition function takes the same expression for all the three statistics: bosons, fermions, and classical indistinguishable particles behave alike.

When do we get $\rho\Lambda^3 \to 0$ ?

Note that $\Lambda$ is inversely proportional to the square root of the temperature:

$$ \Lambda = \frac{h}{\sqrt{2\pi m k_B T}} $$
7.5.2 Easier : λ → 0
Another simple way to show that the three statistics are identical in the limit
of high temperatures and low densities is to recognise (see below) that ρΛ3 → 0
implies λ → 0. Here λ = exp(βµ), is the fugacity. Let us show this first.
We have shown, see Eq. (7.46), that

$$ \frac{\mu}{k_B T} = \ln(\rho \Lambda^3) \tag{7.50} $$

Therefore the fugacity is given by

$$ \lambda = \exp\left(\frac{\mu}{k_B T}\right) = \exp(\ln[\rho \Lambda^3]) = \rho \Lambda^3 \tag{7.51} $$
$$ \mathcal{Q} \underset{\lambda \to 0}{\sim} \prod_i \left(1 + \lambda \exp(-\beta\epsilon_i)\right) \tag{7.53} $$
Thus in the limit of high temperatures and low densities Maxwell Boltzmann
statistics and Bose Einstein statistics go over to Fermi - Dirac statistics.
7.5.3 Easiest Method : $\widehat{\Omega} = 1$

We could have shown easily that in the limit of high temperature and low density the three statistics are identical by considering the degeneracy factor

$$ \widehat{\Omega} = \begin{cases} \dfrac{1}{n_1!\, n_2! \cdots} & \text{Maxwell-Boltzmann statistics} \\[2mm] 1 & \text{Bose-Einstein and Fermi-Dirac statistics} \end{cases} \tag{7.57} $$
When the temperature is high, the number of quantum states that become available for occupation is very large; when the density is low, the number of particles in the system is small. Thus we have very few particles occupying a very large number of quantum states. In other words, the number of quantum states is very large compared to the number of particles. Hence the micro states with $n_i = 0$ or $1\ \forall\, i$ are overwhelmingly more numerous than those with $n_i \geq 2$. (In any case, in Fermi-Dirac statistics $n_i$ is always $0$ or $1$.) When the particles are few in number (the number density is low) and the accessible quantum levels are large in number (the temperature is high), micro states with two or more particles in one or more quantum states are very rare. In other words, bunching of particles is very rare at low densities and high temperatures. Almost always, every quantum state is either unoccupied or occupied by one particle; very rarely will you find two or more particles in a single quantum state. Hence the degeneracy factor is unity at high temperatures and/or low densities, and all the three statistics are identical in that limit.

7.6. MEAN OCCUPATION NUMBER : $\langle n_k \rangle$ 125
$$ \langle n_k \rangle = \frac{\displaystyle\sum_{n=0}^{\text{all}} n\, x_k^n}{\displaystyle\sum_{n=0}^{\text{all}} x_k^n} \tag{7.58} $$

$$ \sum_{n=0}^{\infty} x_k^n = 1 + x_k + x_k^2 + x_k^3 + \cdots = \frac{1}{1 - x_k} \tag{7.64} $$
Therefore,

$$ \langle n_k \rangle_{BE} = \frac{x_k}{(1 - x_k)^2}\,(1 - x_k) = \frac{x_k}{1 - x_k} = \frac{1}{x_k^{-1} - 1} = \frac{1}{\exp[\beta(\epsilon_k - \mu)] - 1} \tag{7.65} $$
$$ \langle n_k \rangle_{MB} = \frac{\left(\displaystyle\prod_{i \neq k} \sum_{n=0}^{\infty} \frac{x_i^n}{n!}\right) \displaystyle\sum_{n=0}^{\infty} \frac{n\, x_k^n}{n!}}{\left(\displaystyle\prod_{i \neq k} \sum_{n=0}^{\infty} \frac{x_i^n}{n!}\right) \displaystyle\sum_{n=0}^{\infty} \frac{x_k^n}{n!}} = \frac{\displaystyle\sum_{n=0}^{\infty} \frac{n\, x_k^n}{n!}}{\displaystyle\sum_{n=0}^{\infty} \frac{x_k^n}{n!}} \tag{7.66} $$
6. Consider
$$ S(x) = 1 + x + x^2 + \cdots = \frac{1}{1-x} \tag{7.60} $$
$$ \frac{dS}{dx} = 1 + 2x + 3x^2 + 4x^3 + \cdots = \frac{1}{(1-x)^2} \tag{7.61} $$
$$ x\,\frac{dS}{dx} = x + 2x^2 + 3x^3 + \cdots = \frac{x}{(1-x)^2} \tag{7.62} $$
In the above the summation in the numerator and the denominator are eval-
uated analytically as follows. We start with the definition,
$$ \exp(x) = \sum_{n=0}^{\infty} \frac{x^n}{n!} \tag{7.67} $$

Differentiate both sides of the above equation with respect to $x$. You get

$$ \exp(x) = \sum_{n=0}^{\infty} \frac{n\, x^{n-1}}{n!} = \frac{1}{x} \sum_{n=0}^{\infty} \frac{n\, x^n}{n!} \tag{7.68} $$

Therefore,

$$ \langle n_k \rangle_{MB} = \frac{x_k \exp(x_k)}{\exp(x_k)} = x_k = \exp[-\beta(\epsilon_k - \mu)] = \frac{1}{\exp[\beta(\epsilon_k - \mu)]} \tag{7.69} $$
The results for the three statistics can be written in a single formula:

$$ \langle n_k \rangle = \frac{1}{\exp[\beta(\epsilon_k - \mu)] + a}\,, \qquad \text{where } a = \begin{cases} \;\;\;0 & \text{Maxwell-Boltzmann} \\ -1 & \text{Bose-Einstein} \\ +1 & \text{Fermi-Dirac} \end{cases} $$

Variation of $\langle n_k \rangle$ with energy is shown in the figure, see next page. Note that the $x$ axis is $\dfrac{\epsilon_k - \mu}{k_B T}$ and the $y$ axis is $\langle n_k \rangle$.
• Behaviour of $\langle n_k \rangle$ in Fermi-Dirac Statistics
We see that for Fermi-Dirac statistics $0 \leq \langle n_k \rangle \leq 1$ for all $k$ and at all temperatures. For $T > 0$, we find that $\langle n_k \rangle \to 0$ when $\epsilon_k - \mu \to \infty$, and $\langle n_k \rangle \to 1$ when $\epsilon_k - \mu \to -\infty$.
Figure 7.1: Average occupation number of a quantum state under Bose-
Einstein, Fermi-Dirac, and Maxwell-Boltzmann statistics
At T = 0, we find that hnk i is zero for positive values of ǫk − µ and unity for
negative values.
• Behaviour of hnk i under Bose-Einstein Statistics
For bosons we must have
ǫk > µ ∀ k.
For the indistinguishable classical particles hnk i takes the familiar expo-
nential decay form,
hnk i = exp[−β(ǫk − µ)].
• At high T and/or Low ρ all the three statistics give the same hnk i
When
ǫk − µ
→ ∞,
kB T
all the three statistics coincide. We have already seen that at high tempera-
tures classical behaviour obtains. Then, the only way
ǫk − µ
kB T
can become large at high temperature (note in the expression T is in the
denominator) is when µ is negative and its magnitude also should increase
with increase of temperature.
Thus for all the three statistics, at high temperature, the chemical potential
µ is negative and its magnitude must be large.
But then we know
µ
= ln(ρΛ3 ).
kB T
This means that ρΛ3 << 1 for classical behaviour to emerge7 . This is in
complete agreement with our earlier surmise that classical behaviour obtains
at low ρ and/or high T . Hence all the approaches are consistent with each
other and all the issues fall in place.
In the above P (n) ≡ P (nk = n) is the probability that the random variable
nk takes a value n. Comparing the above with the first equation, we find
$$ P(n) \equiv P(n_k = n) = \frac{x_k^n}{\displaystyle\sum_{m=0}^{\text{all}} x_k^m} \tag{7.73} $$
In what follows, I shall work out explicitly P (n) for Fermi-Dirac, Bose-Einstein,
and Maxwell-Boltzmann statistics.
We thus have,

$$ P(n) = \begin{cases} 1 - \zeta & \text{for } n = 0 \\ \zeta & \text{for } n = 1 \end{cases} \tag{7.75} $$
7.7. DISTRIBUTION OF hNK i 131
$$ \langle n_k^2 \rangle = \sum_{n=0}^{1} n^2\, P(n) = \zeta \tag{7.77} $$
consistent with the result obtained earlier. Inverting the above, we get, xk =
ζ/(1 + ζ). Then the probability distribution of the random variable nk can be
written in a convenient form
$$ P(n) = \frac{\zeta^n}{(1+\zeta)^{n+1}} \tag{7.81} $$
The generating function of this distribution, $\widetilde{P}(z) = \sum_n P(n)\, z^n$, is a geometric series that sums to

$$ \widetilde{P}(z) = \frac{1}{1+\zeta}\; \frac{1}{1 - \dfrac{\zeta z}{1+\zeta}} \tag{7.83} $$
$$ \qquad = \frac{1}{1 + \zeta(1-z)} \tag{7.84} $$

Let us now differentiate $\widetilde{P}(z)$ with respect to $z$ and in the resulting expression set $z = 1$. We shall get $\langle n_k \rangle$, see below.

$$ \frac{\partial \widetilde{P}}{\partial z} = \frac{\zeta}{(1 + \zeta(1-z))^2} \tag{7.85} $$
$$ \left.\frac{\partial \widetilde{P}}{\partial z}\right|_{z=1} = \zeta \tag{7.86} $$
$$ \frac{\partial^2 \widetilde{P}}{\partial z^2} = \frac{2\zeta^2}{[1 + \zeta(1-z)]^3} \tag{7.87} $$
$$ \left.\frac{\partial^2 \widetilde{P}}{\partial z^2}\right|_{z=1} = \langle n_k(n_k - 1) \rangle = 2\zeta^2 \tag{7.88} $$
$$ \langle n_k^2 \rangle = 2\zeta^2 + \zeta \tag{7.89} $$
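The moments just derived from the generating function can be cross-checked by summing the distribution $P(n) = \zeta^n/(1+\zeta)^{n+1}$ directly; $\zeta = 1.5$ is an illustrative assumption:

```python
zeta = 1.5                      # illustrative mean occupation
r = zeta / (1.0 + zeta)         # common ratio of the geometric distribution

# P(n) = zeta^n / (1 + zeta)^(n+1) = r^n / (1 + zeta)
p = [r ** n / (1.0 + zeta) for n in range(2000)]

mean = sum(n * pn for n, pn in enumerate(p))
second = sum(n * n * pn for n, pn in enumerate(p))

assert abs(sum(p) - 1.0) < 1e-9
assert abs(mean - zeta) < 1e-9                        # <n_k> = zeta
assert abs(second - (2 * zeta ** 2 + zeta)) < 1e-8    # Eq. (7.89)
```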
$$ \frac{dS}{dx} = \sum_{n=1}^{\infty} n\, x^{n-1} = 1 + 2x + 3x^2 + 4x^3 + \cdots = \frac{1}{(1-x)^2} $$
$$ x\,\frac{dS}{dx} = \sum_{n=1}^{\infty} n\, x^n = x + 2x^2 + 3x^3 + 4x^4 + \cdots = \frac{x}{(1-x)^2} $$
$$ \frac{d}{dx}\left(x\,\frac{dS}{dx}\right) = \sum_{n=1}^{\infty} n^2 x^{n-1} = 1 + 2^2 x + 3^2 x^2 + 4^2 x^3 + \cdots = \frac{2x}{(1-x)^3} + \frac{1}{(1-x)^2} $$
$$ x\,\frac{d}{dx}\left(x\,\frac{dS}{dx}\right) = \sum_{n=1}^{\infty} n^2 x^n = x + 2^2 x^2 + 3^2 x^3 + 4^2 x^4 + \cdots = \frac{2x^2}{(1-x)^3} + \frac{x}{(1-x)^2} $$

You can employ the above trick to derive power series for $\ln(1 \pm x)$, see below.

$$ \int \frac{dx}{1-x} = -\ln(1-x) = x + \frac{x^2}{2} + \frac{x^3}{3} + \frac{x^4}{4} + \cdots $$
$$ P(n_k = n) \equiv P(n) = \frac{x_k^n}{n!}\, \frac{1}{\exp(x_k)} \tag{7.93} $$
$$ \qquad = \exp(-\zeta)\, \frac{\zeta^n}{n!} \tag{7.94} $$
The random variable nk has Poisson distribution. The variance equals the
mean. Thus the relative standard deviation is given by
$$ \eta_{MB} = \frac{\sigma}{\zeta} = \frac{1}{\sqrt{\zeta}} \tag{7.95} $$
We can now write the relative fluctuations for the three statistics in one
single formula as,
$$ \eta = \sqrt{\frac{1}{\zeta} - a}\,, \qquad \text{with } a = \begin{cases} +1 & \text{for Fermi-Dirac statistics} \\ \;\;\;0 & \text{for Maxwell-Boltzmann statistics} \\ -1 & \text{for Bose-Einstein statistics} \end{cases} \tag{7.96} $$
Consider the ratio

$$ r = \frac{P(n)}{P(n-1)} . $$

For the Bose-Einstein distribution, Eq. (7.81),

$$ r = \frac{P(n)}{P(n-1)} = \frac{\zeta}{\zeta + 1} $$
r is independent of n. This means, a new particle will get into any of the
quantum states, with equal probability irrespective of how abundantly or how
sparsely that particular quantum state is already populated. An empty quan-
tum state has the same probability of acquiring an extra particle as an abun-
dantly populated quantum state.
Thus, compared to classical particles obeying Maxwell-Boltzmann statis-
tics, bosons exhibit a tendency to bunch together. By nature, bosons like to be
together. Note that this "bunching-tendency" is not due to interaction between
bosons. We are considering ideal bosons. This bunching is purely a quantum
mechanical effect; it arises due to symmetry property of the wave function.
For fermions, the situation is quite the opposite. There is what we may call
an aversion to bunching; call it anti-bunching if you like. No fermion would
like to have another fermion in its quantum state.
7.8. GRAND CANONICAL FORMALISM WITH CONSTANT N 135
From the formulae above, it is clear that we can keep hNi the same for all T
by introducing a suitable dependence of µ on T . Of course such dependence
of µ on T shall be different for different statistics.
I must say there is nothing unphysical about this strategy. We are studying
a physical system enclosed by a non-permeable wall - a wall that does not
permit particle exchange. The chemical potential µ, is a well defined property
of the system. It is just that µ is not any more under our control10 . The
system automatically selects the value of µ depending on the temperature.
9. This is permitted in the grand canonical formalism since $T$ and $\mu$ are independent properties of the open system: $T$ is determined by the heat bath and $\mu$ by the particle bath.
10. $\mu$ is not determined by us externally by adjusting the particle bath.
8. Bose-Einstein Condensation
8.1 Introduction
For bosons we found that the grand canonical partition function is given by,
$$ \mathcal{Q}(T,V,\mu) = \prod_i \frac{1}{1 - \exp[-\beta(\epsilon_i - \mu)]} \tag{8.1} $$

The correspondence with thermodynamics is established by the expression for the grand potential, denoted by the symbol $G(T,V,\mu)$. We have,

$$ G(T,V,\mu) = -k_B T \ln \mathcal{Q}(T,V,\mu) = k_B T \sum_i \ln\left[1 - \exp\{-\beta(\epsilon_i - \mu)\}\right] \tag{8.2} $$
138 8. BOSE-EINSTEIN CONDENSATION
8.2 $\langle N \rangle = \sum_k \langle n_k \rangle$
For bosons, we found that the average occupancy of a (single-particle) quantum state $k$ is given by,

$$ \langle n_k \rangle = \frac{\lambda \exp(-\beta\epsilon_k)}{1 - \lambda \exp(-\beta\epsilon_k)} \tag{8.5} $$

where $\lambda$ is the fugacity,

$$ \lambda = \exp(\beta\mu) . \tag{8.6} $$

In the above $\mu$ is the chemical potential; it equals the energy change due to the addition of a single particle under constant entropy and volume:

$$ \mu = \left(\frac{\partial U}{\partial N}\right)_{S,V} . \tag{8.7} $$

Summing over all the quantum states,

$$ \langle N \rangle = \sum_k \frac{\lambda \exp(-\beta\epsilon_k)}{1 - \lambda \exp(-\beta\epsilon_k)} \tag{8.8} $$
In a closed system, the chemical potential must change with temperature in such a way that the average number of particles remains at the value chosen by us. In other words, in a closed system $\mu$ is a function of temperature.

In what follows we shall study a closed system of ideal bosons employing the grand canonical ensemble formalism, in which the chemical potential depends on temperature; the dependence is such that the average number of bosons in the system remains the same at all temperatures.
We need an expression for the density of states. We have done this exercise
earlier. In fact we have carried out classical counting and quantum counting
and found both lead to the same result. The density of states is given by,
$$ g(\epsilon) = 2\pi V \left(\frac{2m}{h^2}\right)^{3/2} \epsilon^{1/2} . \tag{8.10} $$

We then have,

$$ N = 2\pi V \left(\frac{2m}{h^2}\right)^{3/2} \int_0^{\infty} \frac{\lambda \exp(-\beta\epsilon)}{1 - \lambda \exp(-\beta\epsilon)}\, \epsilon^{1/2}\, d\epsilon . \tag{8.11} $$
We note that 0 ≤ λ < 1. This suggests that the integrand in the above can
be expanded in powers of λ. To this end we write
$$ \frac{1}{1 - \lambda \exp(-\beta\epsilon)} = \sum_{k=0}^{\infty} \lambda^k \exp(-k\beta\epsilon) . \tag{8.12} $$

This gives us

$$ \frac{\lambda \exp(-\beta\epsilon)}{1 - \lambda \exp(-\beta\epsilon)} = \sum_{k=0}^{\infty} \lambda^{k+1} \exp[-\beta(k+1)\epsilon] = \sum_{k=1}^{\infty} \lambda^k \exp(-\beta k \epsilon) . \tag{8.13} $$
$$ N = 2\pi V \left(\frac{2m}{h^2}\right)^{3/2} \sum_{k=1}^{\infty} \lambda^k \int_0^{\infty} \frac{\exp(-k\beta\epsilon)\,(k\beta\epsilon)^{1/2}\, d(k\beta\epsilon)}{\beta^{3/2}\, k^{3/2}}\,, $$
$$ \;\; = 2\pi V \left(\frac{2m k_B T}{h^2}\right)^{3/2} \sum_{k=1}^{\infty} \frac{\lambda^k}{k^{3/2}} \int_0^{\infty} \exp(-x)\, x^{1/2}\, dx, $$
$$ \;\; = 2\pi V \left(\frac{2m k_B T}{h^2}\right)^{3/2} \Gamma(3/2) \sum_{k=1}^{\infty} \frac{\lambda^k}{k^{3/2}}\,, $$
$$ \;\; = 2\pi V \left(\frac{2m k_B T}{h^2}\right)^{3/2} \frac{1}{2}\,\Gamma(1/2) \sum_{k=1}^{\infty} \frac{\lambda^k}{k^{3/2}}\,, $$
$$ \;\; = \pi V \left(\frac{2m k_B T}{h^2}\right)^{3/2} \sqrt{\pi} \sum_{k=1}^{\infty} \frac{\lambda^k}{k^{3/2}}\,, $$
$$ \;\; = V \left(\frac{2\pi m k_B T}{h^2}\right)^{3/2} \sum_{k=1}^{\infty} \frac{\lambda^k}{k^{3/2}} . \tag{8.14} $$
We have earlier defined a thermal wave length denoted by the symbol Λ. This
is the de Broglie wave length associated with a particle having thermal energy
of the order of kB T . It is also called quantum wavelength. It is given by, see
earlier notes,
h
Λ= √ . (8.15)
2πmkB T
The sum over k, in the expression for N given by Eq. (8.14) is usually denoted
by the symbol g3/2 (λ):
$$ g_{3/2}(\lambda) = \sum_{k=1}^{\infty} \frac{\lambda^k}{k^{3/2}} = \lambda + \frac{\lambda^2}{2\sqrt{2}} + \frac{\lambda^3}{3\sqrt{3}} + \cdots \tag{8.16} $$
8.4. GRAPHICAL INVERSION AND FUGACITY 141
Thus we get $N = \dfrac{V}{\Lambda^3}\, g_{3/2}(\lambda)$. We can write it as,

$$ \frac{N \Lambda^3}{V} = \rho \Lambda^3 = g_{3/2}(\lambda) . \tag{8.17} $$
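The Bose function is easy to evaluate by direct summation, and its value at $\lambda = 1$ is the threshold $\zeta(3/2) \approx 2.612$ that plays a central role below. A sketch; the truncation lengths are assumptions, and many terms are kept at $\lambda = 1$ because convergence there is slow:

```python
def g32(lam, kmax):
    """g_{3/2}(lambda) = sum_{k>=1} lambda^k / k^{3/2}, truncated at kmax."""
    return sum(lam ** k / k ** 1.5 for k in range(1, kmax))

# Compare with the first four terms of Eq. (8.16); the neglected tail is small.
partial = 0.5 + 0.5 ** 2 / 2 ** 1.5 + 0.5 ** 3 / 3 ** 1.5 + 0.5 ** 4 / 4 ** 1.5
assert abs(g32(0.5, 2000) - partial) < 1e-2

# At lambda = 1 the series is the Riemann zeta function zeta(3/2) ~ 2.612.
assert abs(g32(1.0, 2_000_000) - 2.612) < 1e-2
```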
It is easily verified, see below, that at high temperature we get results consis-
tent with Maxwell Boltzmann statistics :
The fugacity λ is small at high temperature. For small λ we can replace g3/2 (λ)
by λ. We get N = λV /Λ3 . This result is consistent with Maxwell-Boltzmann
statistics, as shown below.
For Maxwell-Boltzmann statistics, $\langle n_k \rangle = \lambda \exp(-\beta\epsilon_k)$. Therefore,

$$ N = \sum_k \langle n_k \rangle = \lambda \sum_k \exp(-\beta\epsilon_k) = \lambda\, 2\pi V \left(\frac{2m}{h^2}\right)^{3/2} \int_0^{\infty} d\epsilon\; \epsilon^{1/2} \exp(-\beta\epsilon) \tag{8.18} $$
$$ \;\; = \lambda\, 2\pi V \left(\frac{2m}{h^2}\right)^{3/2} \int_0^{\infty} \frac{(\beta\epsilon)^{1/2} \exp(-\beta\epsilon)\, d(\beta\epsilon)}{\beta^{3/2}} \tag{8.19} $$
$$ \;\; = \lambda\, 2\pi V \left(\frac{2m k_B T}{h^2}\right)^{3/2} \Gamma(3/2) \tag{8.20} $$
$$ \;\; = \lambda\, 2\pi V \left(\frac{2m k_B T}{h^2}\right)^{3/2} \frac{1}{2}\,\Gamma(1/2) \tag{8.21} $$
$$ \;\; = \lambda\, \pi V \left(\frac{2m k_B T}{h^2}\right)^{3/2} \sqrt{\pi} \tag{8.22} $$
$$ \;\; = \lambda\, V \left(\frac{2\pi m k_B T}{h^2}\right)^{3/2} = \lambda\, \frac{V}{\Lambda^3} \tag{8.23} $$
Figure 8.1: g3/2 (λ) versus λ. Graphical inversion to determine fugacity
8.5. TREATMENT OF THE SINGULAR BEHAVIOUR 143
First we plot the so-called Bose function, $g_{3/2}(\lambda)$ versus $\lambda$, see Fig. 8.1. For given values of $N$, $V$, and $T$ we can find the value of the fugacity by graphical inversion:

• draw a horizontal line at the height $\rho\Lambda^3$ and read off the value of $\lambda$ at which this line cuts the curve $g_{3/2}(\lambda)$.

Once we get the fugacity, we can determine all other thermodynamic properties of the open system employing the formalism of the grand canonical ensemble.
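The graphical inversion has an obvious numerical counterpart: since $g_{3/2}(\lambda)$ increases monotonically on $(0,1)$, the equation $g_{3/2}(\lambda) = \rho\Lambda^3$ can be solved by bisection whenever $\rho\Lambda^3 < 2.612$. A sketch in Python; the target value and the series truncation are illustrative assumptions:

```python
def g32(lam, kmax=10000):
    # Bose function g_{3/2}; the truncation kmax is an assumption.
    return sum(lam ** k / k ** 1.5 for k in range(1, kmax))

def invert_g32(target, tol=1e-10):
    """Solve g_{3/2}(lambda) = target by bisection; needs target < g_{3/2}(1)."""
    lo, hi = 0.0, 1.0 - 1e-6
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g32(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

lam = invert_g32(1.0)       # rho*Lambda^3 = 1.0 is an illustrative target
assert abs(g32(lam) - 1.0) < 1e-6
assert 0.0 < lam < 1.0
```

For $\rho\Lambda^3 > 2.612$ the bisection bracket fails, which is exactly the breakdown discussed next.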
So far so good.
But then we realise that the above graphical inversion scheme does not
permit evaluation of the fugacity of a system with NΛ3 /V greater than 2.612.
This is absurd.
There must be something wrong with what we have done.
In the above, we have separated the ground state occupancy from the occupancy of all the excited states. Let $N_0$ denote the ground state occupancy. It is given by the first term,

$$ N_0 = \frac{\lambda}{1 - \lambda} \tag{8.25} $$

The occupancy of all the excited states is given by the second term, where the sum is taken only over the indices $k$ representing the excited states. Let $N_e$ denote the occupancy of the excited states.

1. The chemical potential approaches the energy of the ground state. Without loss of generality, we can set the ground state at zero energy, i.e. $\epsilon_0 = 0$.
$$ N = \frac{\lambda}{1-\lambda} + \frac{V}{\Lambda^3}\, g_{3/2}(\lambda) \tag{8.28} $$

We thus have,

$$ \frac{N \Lambda^3}{V} = \frac{\Lambda^3}{V}\, \frac{\lambda}{1-\lambda} + g_{3/2}(\lambda) \tag{8.29} $$
Let us define the number density, i.e. the number of particles per unit volume, denoted by the symbol $\rho$. It is given by

$$ \rho = \frac{N}{V} \tag{8.30} $$
The function $\lambda/(1-\lambda)$ diverges at $\lambda = 1$, as you can see from Figure (8.2). Hence the relevant curve for carrying out the graphical inversion is the one that depicts the sum of the singular part (which takes care of the occupancy of the ground state) and the regular part (which takes care of the occupancy of the excited states). For the value $\Lambda^3/V = 0.05$ we have plotted both the curves and their sum in Figure (8.3). Thus for any value of $\rho\Lambda^3$ we can now determine the fugacity by graphical inversion.

We carry out such an exercise and obtain the values of $\lambda$ for various values of $\rho\Lambda^3$; Fig. (8.4) depicts the results. It is clear from Fig. (8.4) that when $\rho\Lambda^3 > 2.612$, the fugacity $\lambda$ is close to unity.
Let us postulate²

$$ \lambda = 1 - \frac{a}{N} . $$

2. We have reasons to postulate $\lambda = 1 - \dfrac{a}{N}$. This is related to the mechanism underlying Bose-Einstein condensation; we shall discuss the details later. In fact, following Donald A. McQuarrie, Statistical Mechanics, Harper and Row (1976), p. 173, we can make the postulate $\lambda = 1 - \dfrac{a}{V}$. This should also lead to the same conclusions.
Figure 8.3: ρΛ3 versus λ. The singular part [Λ3 /V ][λ/(1 − λ)] (the bottom
most curve), the regular part g3/2 (λ) (the middle curve) , and the total (ρΛ3 )
are plotted. For this plot we have taken Λ3 /V as 0.05
$$ N_0 = \frac{\lambda}{1-\lambda} = \frac{N}{a} - 1 \approx \frac{N}{a} \quad \text{if } N \gg a \tag{8.32} $$
We start with,

$$ \rho \Lambda^3 = \frac{\Lambda^3}{V}\, \frac{\lambda}{1-\lambda} + g_{3/2}(\lambda) \tag{8.33} $$

Substitute $\lambda = 1 - a/N$ in the above and get³,

$$ \rho \Lambda^3 = \frac{\rho \Lambda^3}{a} + g_{3/2}(1) \tag{8.34} $$

Thus we get,

$$ a = \frac{\rho \Lambda^3}{\rho \Lambda^3 - g_{3/2}(1)} \tag{8.35} $$
Thus λ is less than unity and can be very close to unity; the value of 1 − λ
can be as small as the inverse of the total number of particles in the system.
Precisely 1 − λ can be as small as a/N.
The point ρΛ3 = g3/2 (1) = 2.612 is a special point indeed. What is the
physical significance of this point ? To answer this question, consider the
quantity ρΛ3 as a function of temperature with ρ kept at a constant value.
The temperature dependence of this quantity is shown below.
$$ \rho \Lambda^3 = \rho \left(\frac{h}{\sqrt{2\pi m k_B T}}\right)^3 \tag{8.36} $$

At high temperatures, for which $\rho\Lambda^3 < g_{3/2}(1) = 2.612$, we can determine the value of $\lambda$ from the equation $g_{3/2}(\lambda) = \rho\Lambda^3$ by graphical or numerical inversion. At low temperatures, for which $\rho\Lambda^3 > 2.612$, we have $\lambda = 1 - a/N$, where

$$ a = \frac{\rho \Lambda^3}{\rho \Lambda^3 - g_{3/2}(1)} \tag{8.37} $$

3. $g_{3/2}(1 - a/N) \approx g_{3/2}(1)$.
Figure 8.4: $\lambda$ versus $\rho\Lambda^3$, obtained by graphical inversion of $\rho\Lambda^3 = \dfrac{\Lambda^3}{V}\dfrac{\lambda}{1-\lambda} + g_{3/2}(\lambda)$ with $\Lambda^3/V = 0.05$. The fugacity approaches unity beyond $\rho\Lambda^3 = 2.612$.
$$ N_0 = \frac{\lambda}{1-\lambda} \tag{8.38} $$
$$ \quad\, = \frac{N}{a} \tag{8.39} $$
$$ \frac{N_0}{N} = \frac{1}{a} \tag{8.40} $$
$$ \quad\;\; = 1 - \frac{1}{\rho\Lambda^3}\, g_{3/2}(1) \tag{8.41} $$

Therefore,

$$ \frac{N_0}{N} = \frac{1}{a} = 1 - \frac{\rho \Lambda_{BEC}^3}{\rho \Lambda^3} = 1 - \left(\frac{\Lambda_{BEC}}{\Lambda}\right)^3 = 1 - \left(\frac{\sqrt{T}}{\sqrt{T_{BEC}}}\right)^3 \tag{8.43} $$
$$ \quad\;\; = 1 - \left(\frac{T}{T_{BEC}}\right)^{3/2} \quad \text{for } T < T_{BEC} \tag{8.44} $$
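The condensate fraction of Eq. (8.44) is a one-liner to evaluate; it vanishes at $T = T_{BEC}$ and saturates to unity at $T = 0$:

```python
def condensate_fraction(t):
    """N0/N = 1 - t^(3/2), with t = T/T_BEC, valid for 0 <= t <= 1."""
    return 1.0 - t ** 1.5

assert condensate_fraction(0.0) == 1.0   # all particles in the ground state
assert condensate_fraction(1.0) == 0.0   # condensate disappears at T_BEC
# The fraction decreases monotonically with temperature.
assert condensate_fraction(0.25) > condensate_fraction(0.75)
```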
We have depicted the behaviour of the fractional number of particles in the
ground state as a function of temperature in Figure (8.5).
Figure 8.5: $N_0/N = 1 - (T/T_{BEC})^{3/2}$ plotted against $T/T_{BEC}$.
$$ G(T,V,\mu) = -k_B T \ln \mathcal{Q}(T,V,\mu) = k_B T \sum_k \ln[1 - \lambda \exp(-\beta\epsilon_k)] \tag{8.47} $$

Now we shall be careful and separate the singular part and the regular part to get,

$$ G = k_B T \ln(1-\lambda) + k_B T \sum_{k \neq 0} \ln[1 - \lambda \exp(-\beta\epsilon_k)] \tag{8.48} $$
In the above, convert the sum over $k$ into an integral over $d\epsilon$ by the prescription

$$ \sum_k (\cdot) \longrightarrow 2\pi V \left(\frac{2m}{h^2}\right)^{3/2} \int_0^{\infty} (\cdot)\; \epsilon^{1/2}\, d\epsilon . \tag{8.49} $$

We get,

$$ G = k_B T \ln(1-\lambda) + k_B T\, 2\pi V \left(\frac{2m}{h^2}\right)^{3/2} \int_0^{\infty} d\epsilon\; \epsilon^{1/2}\, \ln\left(1 - \lambda \exp(-\beta\epsilon)\right) \tag{8.50} $$
We have

$$ \ln[1 - \lambda \exp(-\beta\epsilon)] = -\sum_{k=1}^{\infty} \frac{\lambda^k}{k} \exp(-k\beta\epsilon) \tag{8.51} $$

Then we have,

$$ G = k_B T \ln(1-\lambda) - k_B T\, 2\pi V \left(\frac{2m}{h^2}\right)^{3/2} \sum_{k=1}^{\infty} \frac{\lambda^k}{k} \int_0^{\infty} d\epsilon\; \epsilon^{1/2} \exp(-k\beta\epsilon) $$
$$ \;\; = k_B T \ln(1-\lambda) - k_B T\, 2\pi V \left(\frac{2m}{h^2}\right)^{3/2} \sum_{k=1}^{\infty} \frac{\lambda^k}{k} \int_0^{\infty} \frac{(k\beta\epsilon)^{1/2} \exp(-k\beta\epsilon)}{k^{3/2}\, \beta^{3/2}}\, d(k\beta\epsilon) $$
$$ \;\; = k_B T \ln(1-\lambda) - k_B T\, 2\pi V \left(\frac{2m k_B T}{h^2}\right)^{3/2} \Gamma(3/2) \sum_{k=1}^{\infty} \frac{\lambda^k}{k^{5/2}} $$
$$ \;\; = k_B T \ln(1-\lambda) - k_B T\, V \left(\frac{2\pi m k_B T}{h^2}\right)^{3/2} \sum_{k=1}^{\infty} \frac{\lambda^k}{k^{5/2}} $$
$$ \;\; = k_B T \ln(1-\lambda) - k_B T\, \frac{V}{\Lambda^3}\, g_{5/2}(\lambda) \tag{8.52} $$

Thus we have,

$$ G(T,V,\lambda) = k_B T \ln(1-\lambda) - k_B T\, \frac{V}{\Lambda^3}\, g_{5/2}(\lambda) \tag{8.53} $$

where $g_{5/2}(\lambda) = \sum_{k=1}^{\infty} \lambda^k / k^{5/2}$.
$$ \langle n_i \rangle = \frac{\lambda \exp(-\beta\epsilon_i)}{1 - \lambda \exp(-\beta\epsilon_i)} \tag{8.54} $$

This immediately suggests that the (average) energy of the system is given by

$$ U = \langle E \rangle = \sum_i \langle n_i \rangle\, \epsilon_i = \sum_i \frac{\epsilon_i\, \lambda \exp(-\beta\epsilon_i)}{1 - \lambda \exp(-\beta\epsilon_i)} \tag{8.55} $$
8.8. ENERGY OF BOSONIC SYSTEM 151
We can also derive4 the above relation employing Grand canonical formalism.
Let us now go to the continuum limit by converting the sum over micro states into an integral over energy, to get

$$ U = \frac{3}{2}\, k_B T\, \frac{V}{\Lambda^3}\, g_{5/2}(\lambda) \tag{8.61} $$
4. An open system is described by a grand canonical partition function. It is formally given by,
$$ \mathcal{Q}(\beta, V, \mu) = \sum_i \exp[-\beta(E_i - \mu N_i)] \tag{8.56} $$
In the above $E_i$ is the energy of the open system when in micro state $i$; $N_i$ is the number of particles in the open system when in micro state $i$. Let $\gamma = \beta\mu$. Then we get,
$$ \mathcal{Q}(\beta, V, \mu) = \sum_i \exp(-\beta E_i) \exp(+\gamma N_i) \tag{8.57} $$
We differentiate $\mathcal{Q}$ with respect to the variable $\beta$, keeping $\gamma$ constant. (Note that $T$ and $\mu$ are independent in an open system described by the grand canonical ensemble.) We get
$$ \frac{\partial \mathcal{Q}}{\partial \beta} = -\sum_i E_i \exp[-\beta E_i + \gamma N_i] \tag{8.58} $$
$$ -\frac{1}{\mathcal{Q}}\, \frac{\partial \mathcal{Q}}{\partial \beta} = -\frac{\partial \ln \mathcal{Q}}{\partial \beta} = \langle E \rangle = U \tag{8.59} $$
$$ \mathcal{Q} = \prod_i \frac{1}{1 - \lambda \exp(-\beta\epsilon_i)}\,; \quad \ln \mathcal{Q} = -\sum_i \ln[1 - \lambda \exp(-\beta\epsilon_i)]\,; \quad U = \sum_i \frac{\epsilon_i\, \lambda \exp(-\beta\epsilon_i)}{1 - \lambda \exp(-\beta\epsilon_i)} . \tag{8.60} $$
We can write Eq. (8.61) in a more suggestive form, see below. We have,
8.8.2 $T \leq T_{BEC}$

For temperatures less than $T_{BEC}$, the ground state gets populated anomalously. The bosons in the ground state do not contribute to the energy. For $T \leq T_{BEC}$, we have $\mu = 0$, which means $\lambda = 1$⁵. Substituting $\lambda = 1$ in Eq. (8.61) we get,

$$ U = \frac{3}{2}\, k_B T\, \frac{V}{\Lambda^3}\, \zeta(5/2) \tag{8.64} $$

We also have,

$$ V = \frac{N\, (\Lambda_{BEC})^3}{\zeta(3/2)} \tag{8.65} $$

Hence for $T < T_{BEC}$, we have

$$ U = \frac{3}{2}\, N k_B T\, \frac{(\Lambda_{BEC})^3}{\Lambda^3}\, \frac{\zeta(5/2)}{\zeta(3/2)} = \frac{3}{2}\, N k_B T\, \frac{\zeta(5/2)}{\zeta(3/2)} \left(\frac{T}{T_{BEC}}\right)^{3/2} \tag{8.66} $$

Thus we have,

$$ U = \begin{cases} \dfrac{3}{2}\, N k_B T\, \dfrac{g_{5/2}(\lambda)}{g_{3/2}(\lambda)} & \text{for } T > T_{BEC} \\[3mm] \dfrac{3}{2}\, N k_B T\, \dfrac{g_{5/2}(1)}{g_{3/2}(1)} \left(\dfrac{T}{T_{BEC}}\right)^{3/2} & \text{for } T < T_{BEC} \end{cases} \tag{8.67} $$

5. We have postulated that $\lambda = 1 - O(1/N)$ for $T \leq T_{BEC}$.
8.9. SPECIFIC HEAT CAPACITY OF BOSONS 153
First Relation : $\dfrac{\partial}{\partial T}\left[g_{3/2}(\lambda)\right] = -\dfrac{3}{2T}\, g_{3/2}(\lambda)$

Proof : We start with $\rho\Lambda^3 = g_{3/2}(\lambda)$. Therefore,

$$ \frac{\partial}{\partial T}\left[g_{3/2}(\lambda)\right] = 3\rho\Lambda^2\, \frac{\partial \Lambda}{\partial T} = 3\rho\Lambda^2\, \frac{\partial}{\partial T}\left(\frac{h}{\sqrt{2\pi m k_B T}}\right) \tag{8.70} $$
$$ \quad = -\frac{3}{2}\, \rho\Lambda^2\, \frac{h}{\sqrt{2\pi m k_B}}\, \frac{1}{T^{3/2}} = -\frac{3}{2T}\, \rho\Lambda^2\, \frac{h}{\sqrt{2\pi m k_B T}} = -\frac{3}{2T}\, \rho\Lambda^3 = -\frac{3}{2T}\, g_{3/2}(\lambda) \tag{8.71} $$

Q.E.D.

Second Relation : $\dfrac{\partial}{\partial \lambda}\left[g_{n/2}(\lambda)\right] = \dfrac{1}{\lambda}\, g_{(n/2)-1}(\lambda)$

Proof : We have, by definition, $g_{n/2}(\lambda) = \displaystyle\sum_{k=1}^{\infty} \frac{\lambda^k}{k^{n/2}}$. Therefore,

$$ \frac{\partial}{\partial \lambda}\left[g_{n/2}(\lambda)\right] = \frac{\partial}{\partial \lambda} \sum_{k=1}^{\infty} \frac{\lambda^k}{k^{n/2}} = \sum_{k=1}^{\infty} \frac{k\,\lambda^{k-1}}{k^{n/2}} = \frac{1}{\lambda} \sum_{k=1}^{\infty} \frac{\lambda^k}{k^{(n/2)-1}} \tag{8.72} $$
$$ \quad = \frac{1}{\lambda}\, g_{(n/2)-1}(\lambda) \tag{8.73} $$

Q.E.D.
8.9.2 Third Relation : $\dfrac{1}{\lambda}\dfrac{d\lambda}{dT} = -\dfrac{3}{2T}\,\dfrac{g_{3/2}(\lambda)}{g_{1/2}(\lambda)}$

Proof : We proceed as follows:

$$ \frac{\partial}{\partial T}\left[g_{3/2}(\lambda)\right] = \frac{\partial}{\partial \lambda}\left[g_{3/2}(\lambda)\right] \frac{d\lambda}{dT} \;:\quad -\frac{3}{2T}\, g_{3/2}(\lambda) = \frac{1}{\lambda}\, g_{1/2}(\lambda)\, \frac{d\lambda}{dT} \tag{8.74} $$

From the above we get $\dfrac{1}{\lambda}\dfrac{d\lambda}{dT} = -\dfrac{3}{2T}\,\dfrac{g_{3/2}(\lambda)}{g_{1/2}(\lambda)}$. Q.E.D.
We have,

$$ \frac{C_V}{N k_B} = \frac{\partial}{\partial T}\left[\frac{3T}{2}\, \frac{g_{5/2}(\lambda)}{g_{3/2}(\lambda)}\right] $$
$$ \quad = \frac{3}{2}\, \frac{g_{5/2}(\lambda)}{g_{3/2}(\lambda)} + \frac{3T}{2}\, \frac{\partial}{\partial T}\left[\frac{g_{5/2}(\lambda)}{g_{3/2}(\lambda)}\right] $$
$$ \quad = \frac{3}{2}\, \frac{g_{5/2}(\lambda)}{g_{3/2}(\lambda)} - \frac{3T}{2}\, \frac{g_{5/2}(\lambda)}{g_{3/2}^2(\lambda)}\, \frac{\partial g_{3/2}(\lambda)}{\partial T} + \frac{3T}{2}\, \frac{1}{g_{3/2}(\lambda)}\, \frac{\partial g_{5/2}(\lambda)}{\partial \lambda}\, \frac{d\lambda}{dT} $$

Using the first relation for $\partial g_{3/2}/\partial T$, the second relation ($\partial g_{5/2}/\partial \lambda = g_{3/2}/\lambda$), and the third relation for $(1/\lambda)\, d\lambda/dT$,

$$ \quad = \frac{3}{2}\, \frac{g_{5/2}(\lambda)}{g_{3/2}(\lambda)} + \frac{9}{4}\, \frac{g_{5/2}(\lambda)}{g_{3/2}(\lambda)} - \frac{9}{4}\, \frac{g_{3/2}(\lambda)}{g_{1/2}(\lambda)} $$
$$ \quad = \frac{15}{4}\, \frac{g_{5/2}(\lambda)}{g_{3/2}(\lambda)} - \frac{9}{4}\, \frac{g_{3/2}(\lambda)}{g_{1/2}(\lambda)} \tag{8.75} $$
8.9. HEAT CAPACITY BELOW TC 155
Figure 8.6: $C_V/(N k_B)$ versus $T/T_{BEC}$. The heat capacity rises to a cusp at $T = T_{BEC}$ and tends asymptotically to the classical value $3 N k_B/2$ at high temperature.
8.9.3 $\dfrac{C_V}{N k_B}$ for $T < T_{BEC}$

Now let us consider the case with $T < T_{BEC}$. We have,

$$ \frac{U}{N k_B} = \frac{3}{2}\, \frac{g_{5/2}(1)}{g_{3/2}(1)}\, T \left(\frac{T}{T_{BEC}}\right)^{3/2} \tag{8.76} $$
$$ \frac{C_V}{N k_B} = \frac{3}{2}\, \frac{g_{5/2}(1)}{g_{3/2}(1)}\, \frac{5}{2} \left(\frac{T}{T_{BEC}}\right)^{3/2} \tag{8.77} $$

Thus we have,

$$ \frac{C_V}{N k_B} = \begin{cases} \dfrac{15}{4}\, \dfrac{g_{5/2}(\lambda)}{g_{3/2}(\lambda)} - \dfrac{9}{4}\, \dfrac{g_{3/2}(\lambda)}{g_{1/2}(\lambda)} & \text{for } T > T_{BEC} \\[3mm] \dfrac{15}{4}\, \dfrac{g_{5/2}(1)}{g_{3/2}(1)} \left(\dfrac{T}{T_{BEC}}\right)^{3/2} & \text{for } T < T_{BEC} \end{cases} \tag{8.78} $$
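Just below the transition, Eq. (8.78) gives $C_V/(N k_B) = (15/4)\,\zeta(5/2)/\zeta(3/2) \approx 1.925$ at $T = T_{BEC}$, larger than the classical $3/2$; this is the height of the cusp. A quick numerical evaluation, with the series truncations chosen as assumptions:

```python
def zeta_series(s, kmax):
    # Riemann zeta function by direct summation; kmax is an assumption.
    return sum(1.0 / k ** s for k in range(1, kmax))

z52 = zeta_series(2.5, 100_000)     # zeta(5/2) ~ 1.3415, converges quickly
z32 = zeta_series(1.5, 1_000_000)   # zeta(3/2) ~ 2.6124, converges slowly

cv_at_tc = (15.0 / 4.0) * z52 / z32   # CV/(N kB) at T = T_BEC from below
assert abs(cv_at_tc - 1.925) < 0.01
assert cv_at_tc > 1.5                 # exceeds the classical equipartition value
```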
The specific heat is plotted against temperature in Figure (8.6). The cusp in the heat capacity at $T = T_{BEC}$ is the signature of Bose-Einstein condensation. Asymptotically, as $T \to \infty$, the heat capacity tends to the classical value consistent with equipartition.
The largest value that $N_0$ can take is $N$, i.e. when all the particles condense into the ground state. In other words, the smallest value that $1/N_0$ can take is $1/N$. Hence $(\epsilon_0 - \mu)/[k_B T]$ cannot be smaller than $1/N$, the inverse of the number of particles in the entire system:

$$ \frac{\epsilon_0 - \mu}{k_B T} \geq \frac{1}{N} \tag{8.83} $$

Thus the chemical potential shall always be less than the ground state energy at any non-zero temperature. At best, the quantity $(\epsilon_0 - \mu)$, expressed in units of the thermal energy $k_B T$, can be of the order of $1/N$. But remember: for a physicist, small is zero and large is infinity.
Therefore the chemical potential can never take a value close to any of
the excited states, since all of them invariably lie above the ground state. In
a sense, the ground state forbids the chemical potential to come close to any
energy level other than the ground state energy. It sort of guards all the excited
states from a close visit of µ. As T → 0, the number of bosons in the ground
state increases.
This precisely is the subtle mechanism underlying Bose-Einstein
condensation.
9 Elements of Phase Transition
I shall provide an elementary introduction to phase transition. The topics covered include the phase diagram of a normal substance; coexistence curves : (i) sublimation curve, (ii) melting curve, and (iii) vapour pressure curve; triple point; critical point; first order phase transition; latent heat; second order phase transition; critical phenomena; derivation of the Gibbs-Duhem relation starting from Gibbs free energy; the Clausius-Clapeyron equation; anomalous expansion of water upon freezing; and an exotic behaviour of Helium-3 at low temperatures.
10 Statistical Mechanics of Harmonic Oscillators
The energy of a harmonic oscillator is given by
\[
E = \frac{p^2}{2m} + \frac{1}{2} m\omega^2 q^2 \tag{10.1}
\]
where q and p are the position and momentum respectively, of the one dimen-
sional harmonic oscillator. ω is the characteristic frequency of the oscillator
and m its mass. A simple pendulum executing small oscillations is a neat
example of an harmonic oscillator.
We have,
\[
Q_1(T) = \frac{1}{h} \int_{-\infty}^{+\infty} dq \int_{-\infty}^{+\infty} dp\, \exp\left[-\beta\left(\frac{p^2}{2m} + \frac{1}{2}m\omega^2 q^2\right)\right] \tag{10.2}
\]
\[
F(T, V, N) = -k_B T \sum_{i=1}^{3N} \ln\left(\frac{k_B T}{\hbar\omega_i}\right) \tag{10.12}
\]
F (T, V, N) = U − T S (10.15)
dF = dU − T dS − SdT
\[
\mu = \left(\frac{\partial F}{\partial N}\right)_{T,V} = k_B T \ln\left(\frac{\hbar\omega}{k_B T}\right) \tag{10.18}
\]
\[
S = -\left(\frac{\partial F}{\partial T}\right)_{V,N} = Nk_B\left[\ln\left(\frac{k_B T}{\hbar\omega}\right) + 1\right] \tag{10.19}
\]
We also have,
\[
U = -\left(\frac{\partial \ln Q}{\partial \beta}\right)_{V,N} = 3Nk_B T, \tag{10.20}
\]
consistent with equipartition theorem which says each quadratic term in the
Hamiltonian carries kB T /2 of energy. The Hamiltonian of a single harmonic
oscillator has two quadratic terms - one in position q and the other in momen-
tum p.
We also find that the results are consistent with the Dulong and Petit’s
law which says that the heat capacity at constant volume is independent of
temperature:
\[
C_V = \left(\frac{\partial U}{\partial T}\right)_V = 3Nk_B = 3nR \tag{10.21}
\]
\[
\frac{C_V}{n} = 3R \approx 6\ \text{calories (mole)}^{-1}\ \text{(kelvin)}^{-1} \tag{10.22}
\]
More importantly, the heat capacity is the same for all the materials; it de-
pends only on the number of molecules or the number of moles of the substance
and not on what the substance is. The heat capacity per mole is approximately
6 calories per Kelvin.
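The quoted figure of about 6 calories is easy to check; the one-liner below (an illustrative aside) converts 3R to calories using R = 8.314 J mol⁻¹ K⁻¹ and 1 calorie = 4.184 J.

```python
R = 8.314     # gas constant, J mol^-1 K^-1
CAL = 4.184   # joules per calorie
c = 3.0 * R / CAL
print(c)      # ~ 5.96 calories mol^-1 K^-1, i.e. about 6
```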
\[
Q_1(T) = \frac{\exp(-\beta\hbar\omega/2)}{1 - \exp(-\beta\hbar\omega)} \tag{10.24}
\]
\[
Q_N(T) = \frac{\exp(-3N\beta\hbar\omega/2)}{\left[1 - \exp(-\beta\hbar\omega)\right]^{3N}} \tag{10.25}
\]
If the harmonic oscillators are all of different frequencies, the partition function
is given by
\[
Q(T) = \prod_{i=1}^{3N} \frac{\exp(-\beta\hbar\omega_i/2)}{1 - \exp(-\beta\hbar\omega_i)} \tag{10.26}
\]
\[
F = \int_0^{\infty} d\omega \left[\frac{\hbar\omega}{2} + k_B T \ln\{1 - \exp(-\beta\hbar\omega)\}\right] g(\omega) \tag{10.29}
\]
\[
\int_0^{\infty} d\omega\, g(\omega) = 3N \tag{10.30}
\]
We can obtain the thermodynamic properties of the system from the free
energy. We get,
\[
\mu = \left(\frac{\partial F}{\partial N}\right)_{T,V} = \frac{1}{2}\hbar\omega + k_B T \ln\left[1 - \exp(-\beta\hbar\omega)\right] \tag{10.31}
\]
\[
P = -\left(\frac{\partial F}{\partial V}\right)_{T,N} = 0 \tag{10.32}
\]
\[
S = -\left(\frac{\partial F}{\partial T}\right)_{V,N} = 3Nk_B\left[\frac{\beta\hbar\omega}{\exp(\beta\hbar\omega) - 1} - \ln\{1 - \exp(-\beta\hbar\omega)\}\right] \tag{10.33}
\]
\[
U = -\frac{\partial \ln Q}{\partial \beta} = 3N\left[\frac{\hbar\omega}{2} + \frac{\hbar\omega}{\exp(\beta\hbar\omega) - 1}\right] \tag{10.34}
\]
The expression for U tells us that the equipartition theorem is the first victim of quantum mechanics : quantum harmonic oscillators do not obey the equipartition theorem. The average energy per oscillator is higher than the classical value of k_B T. Only for T → ∞, when k_B T >> ħω, do the "quantum" results coincide with the "classical" results. Let x = ħω/(k_B T). Then Eq. (10.34) can be written as
\[
\frac{U}{3Nk_B T} = \frac{x}{2} + \frac{x}{\exp(x) - 1} \tag{10.35}
\]
We have plotted U measured in units of 3NkB T as a function of oscillator
energy ~ω measured in units of kB T . It is seen that when x → 0 (T → ∞),
this quantity tends to unity, see also below, implying that classical result of
equipartition of energy obtains at high temperature. Quantum effects start
showing up only at low temperatures.
In the limit x → 0 (T → ∞), we have,
\[
\frac{U}{3Nk_BT} \underset{x \to 0}{\sim} \frac{x}{2} + \frac{x}{x + \frac{x^2}{2} + \frac{x^3}{6} + O(x^4)} \tag{10.36}
\]
\[
= \frac{x}{2} + \frac{1}{1 + \frac{x}{2} + \frac{x^2}{6} + O(x^3)} \tag{10.37}
\]
\[
= \frac{x}{2} + 1 - \frac{x}{2} - \frac{x^2}{6} + \frac{x^2}{4} + O(x^3) \tag{10.38}
\]
\[
= 1 - \frac{x^2}{6} + \frac{x^2}{4} + O(x^3) \tag{10.39}
\]
\[
= 1 + \frac{x^2}{12} + O(x^3) \tag{10.40}
\]
\[
\to 1 \tag{10.41}
\]
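The expansion can be checked numerically; the snippet below (an aside, not from the text) compares Eq. (10.35) with the small-x approximation 1 + x²/12 of Eq. (10.40).

```python
from math import exp

def u(x):
    # U/(3 N k_B T) = x/2 + x/(e^x - 1)   (Eq. 10.35)
    return x / 2.0 + x / (exp(x) - 1.0)

for x in (0.5, 0.1, 0.01):
    print(x, u(x), 1.0 + x * x / 12.0)   # Eq. (10.40): 1 + x^2/12
```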
\[
C_V = 3Nk_B \left(\frac{\hbar\omega}{k_B T}\right)^2 \frac{\exp(\beta\hbar\omega)}{\left(\exp(\beta\hbar\omega) - 1\right)^2} \tag{10.43}
\]
The second victim of quantum mechanics is the law of Dulong and Petit. The
heat capacity depends on temperature and on the oscillator frequency. The
heat capacity per mole will change from substance to substance because of its
dependence on the oscillator frequency. Only in the limit of T → ∞ (the same
as β → 0), do we get the classical results.
The temperature dependence of heat capacity is an important "quantum"
outcome. We find that the heat capacity goes to zero exponentially as T → 0.
However experiments suggest that the fall is algebraic and not exponential.
The heat capacity is found to go to zero as T³. This is called the T³ law.
\[
V(x_1, \cdots, x_{3N}) = V(x_{1,0}, \cdots, x_{3N,0}) + \sum_{i=1}^{3N}\left(\frac{\partial V}{\partial x_i}\right)_{x_{1,0},x_{2,0},\cdots,x_{3N,0}} (x_i - x_{i,0}) + \sum_{i=1}^{3N}\sum_{j=1}^{3N} \frac{1}{2}\left(\frac{\partial^2 V}{\partial x_i \partial x_j}\right)_{x_{1,0},x_{2,0},\cdots,x_{3N,0}} (x_i - x_{i,0})(x_j - x_{j,0}) + \cdots \tag{10.44}
\]
The first term gives the minimum energy of the solid when all its atoms are
in their equilibrium positions. We can denote this energy by V0 .
The second set of terms involving the first order partial derivatives of the
potential are all identically zero by definition : V has a minimum at {xi =
xi,0 ∀i = 1, 3N}
The third set of terms involving second order partial derivatives describes harmonic vibrations. We neglect the terms involving higher order derivatives; this is justified if only small oscillations are present in the crystalline solid.
Thus under the harmonic approximation we can write the Hamiltonian as,
\[
H = V_0 + \sum_{i=1}^{3N} \frac{1}{2}\left(\frac{d\xi_i}{dt}\right)^2 + \sum_{i=1}^{3N}\sum_{j=1}^{3N} \alpha_{i,j}\, \xi_i \xi_j \tag{10.45}
\]
keep an arm’s length from his four neighbours. If the distance between two neighbouring
students is less, they are pushed outward; if more, they are pulled inward. Such mutual
interactions lead to the student organizing themselves in a two dimensional regular array
I shall leave it to you to visualize how such mutual nearest neighbour interactions can
give rise to three dimensional arrays.
where
\[
\xi_i = x_i - \bar{x}_i \tag{10.46}
\]
\[
\alpha_{i,j} = \frac{1}{2}\left(\frac{\partial^2 V}{\partial x_i \partial x_j}\right)_{\bar{x}_1, \bar{x}_2, \cdots, \bar{x}_{3N}} \tag{10.47}
\]
where {ωi : i = 1, 3N} are the characteristic frequencies of the normal modes
of the system. These frequencies are determined by the nature of the potential
energy function V (x1 , x2 , · · · x3N ). Thus the energy of the solid can be consid-
ered as arising out of a set of 3N one dimensional, non interacting, harmonic
oscillators whose characteristic frequencies are determined by the nature of
the atoms of the crystalline solid, the nature of their mutual interaction, the
nature of the lattice structure etc..
Thus we can describe the system in terms of independent harmonic oscillators by defining a normal coordinate system, in which the equations of motion are decoupled. If there are N atoms in the crystal there are 3N degrees of freedom. Three of the degrees of freedom are associated with the translation of the whole crystal, and three with its rotation. Thus, there are strictly 3N − 6 normal mode oscillations. If N is of the order of 10²⁵ or so, it doesn't matter if the number of normal modes is 3N − 6 and not 3N.
We can write the canonical partition function as
\[
Q = \prod_{i=1}^{3N} \frac{\exp(-\beta\hbar\omega_i/2)}{1 - \exp(-\beta\hbar\omega_i)} \tag{10.49}
\]
We have,
\[
-\ln Q = \sum_{i=1}^{3N}\left[\frac{\beta\hbar\omega_i}{2} + \ln\{1 - \exp(-\beta\hbar\omega_i)\}\right] \tag{10.51}
\]
\[
= \int_0^{\infty}\left[\frac{\beta\hbar\omega}{2} + \ln\{1 - \exp(-\beta\hbar\omega)\}\right] g(\omega)\, d\omega \tag{10.52}
\]
The problem reduces to finding the function g(ω). Once we know g(ω), we
can calculate the thermodynamic properties of the crystal. In particular we
can calculate the internal energy U and heat capacity, see below.
\[
U = \int_0^{\infty}\left[\frac{\hbar\omega}{2} + \frac{\hbar\omega \exp(-\beta\hbar\omega)}{1 - \exp(-\beta\hbar\omega)}\right] g(\omega)\, d\omega = \int_0^{\infty}\left[\frac{\hbar\omega}{2} + \frac{\hbar\omega}{\exp(\beta\hbar\omega) - 1}\right] g(\omega)\, d\omega \tag{10.53}
\]
\[
C_V = k_B \int_0^{\infty} \frac{(\beta\hbar\omega)^2 \exp(\beta\hbar\omega)}{[\exp(\beta\hbar\omega) - 1]^2}\, g(\omega)\, d\omega \tag{10.54}
\]
The problem of determining the function g(ω) is a non-trivial task. It is
precisely here that the difficulties lie. However, there are two well known
approximations to g(ω). One of them is due to Einstein and the other due to
Debye.
Let us define
\[
\Theta_E = \frac{\hbar\omega_E}{k_B} \tag{10.57}
\]
and call Θ_E the Einstein temperature. Verify that this quantity has the unit of temperature. In terms of the Einstein temperature we have,
\[
C_V = 3Nk_B \left(\frac{\Theta_E}{T}\right)^2 \frac{\exp(\Theta_E/T)}{[\exp(\Theta_E/T) - 1]^2} \tag{10.58}
\]
² transverse or longitudinal
³ the transverse mode is doubly degenerate and the longitudinal mode is non-degenerate, etc.
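As a numerical aside (not from the text), Eq. (10.58) for the Einstein heat capacity can be evaluated directly; with t = T/Θ_E, the sketch below shows the approach to the Dulong-Petit value at high temperature and the exponential fall at low temperature.

```python
from math import exp

def cv_einstein(t):
    # C_V/(3 N k_B) as a function of t = T / Theta_E  (Eq. 10.58)
    y = 1.0 / t
    return y * y * exp(y) / (exp(y) - 1.0) ** 2

print(cv_einstein(10.0))   # high temperature: close to 1 (Dulong-Petit)
print(cv_einstein(0.05))   # low temperature: exponentially small
```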
we get α = 9N/ω_D³. Thus we have
\[
g(\omega) =
\begin{cases}
\dfrac{9N}{\omega_D^3}\, \omega^2 & \text{for } \omega \le \omega_D\\[1.5ex]
0 & \text{for } \omega > \omega_D
\end{cases} \tag{10.61}
\]
Let
\[
x = \beta\hbar\omega \tag{10.63}
\]
\[
\Theta_D = \frac{\hbar\omega_D}{k_B} \tag{10.64}
\]
Θ_D is called the Debye temperature. Then we have,
\[
C_V = (3Nk_B) \times 3\left(\frac{T}{\Theta_D}\right)^3 \int_0^{\Theta_D/T} \frac{x^4 \exp(x)}{[\exp(x) - 1]^2}\, dx \tag{10.65}
\]
In the high temperature limit the integrand tends to x² and the integral to (Θ_D/T)³/3, so that we recover the classical result,
\[
C_V = 3Nk_B \tag{10.71}
\]
Here ζ(·) is the Riemann zeta function, see below. Note ζ(2) = π²/6 and ζ(4) = π⁴/90, etc.
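The Debye integral of Eq. (10.65) is easy to evaluate numerically. The sketch below (an aside; the midpoint rule and grid size are arbitrary choices) verifies both limits: C_V → 3Nk_B at high temperature, and the T³ law, C_V/(3Nk_B) → (4π⁴/5)(T/Θ_D)³, at low temperature.

```python
from math import exp, pi

def cv_debye(t, n=4000):
    # C_V/(3 N k_B) = 3 t^3 * Integral_0^{1/t} x^4 e^x / (e^x - 1)^2 dx,
    # with t = T / Theta_D  (Eq. 10.65), evaluated by the midpoint rule
    upper = 1.0 / t
    h = upper / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        ex = exp(x)
        total += x ** 4 * ex / (ex - 1.0) ** 2
    return 3.0 * t ** 3 * total * h

high = cv_debye(10.0)                     # -> 1, the Dulong-Petit value
low = cv_debye(0.02)                      # deep in the T^3 regime
t3 = (4.0 * pi ** 4 / 5.0) * 0.02 ** 3    # Debye T^3 law prediction
print(high, low, t3)
```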
The Riemann zeta function is defined as
\[
\zeta(p) = \sum_{n=1}^{\infty} n^{-p} \tag{10.78}
\]
By the integral test, the series is divergent for p ≤ 1 and convergent for p > 1.
\[
B_{2n} = (-1)^{n-1}\, \frac{2(2n)!}{(2\pi)^{2n}}\, \zeta(2n), \quad n = 1, 2, 3, \cdots \tag{10.83}
\]
\[
\zeta(2) = \frac{\pi^2}{6} \qquad \zeta(4) = \frac{\pi^4}{90} \tag{10.84}
\]
\[
\zeta(6) = \frac{\pi^6}{945} \qquad \zeta(8) = \frac{\pi^8}{9450} \tag{10.85}
\]
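Partial sums of Eq. (10.78) converge to the closed forms quoted above; a quick check (illustrative only):

```python
from math import pi

def zeta(p, terms=100000):
    # partial sum of the Riemann zeta series, Eq. (10.78)
    return sum(n ** -p for n in range(1, terms + 1))

print(zeta(2.0), pi ** 2 / 6.0)    # ~ 1.64493
print(zeta(4.0), pi ** 4 / 90.0)   # ~ 1.08232
print(zeta(6.0), pi ** 6 / 945.0)  # ~ 1.01734
```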
11 Statistical Mechanics Formulary
12 Assignments
12.1 Assignment-1
1.1 Starting from the first law of thermodynamics, show that a quasi-static
reversible adiabatic process in an ideal gas is given by
• P V γ = Θ1
• T V γ−1 = Θ2
• P 1−γ T γ = Θ3
1.2 Consider one mole of an ideal gas. A molecule of the gas is mono atomic
and is spherically symmetric. The system of ideal gas is in equilibrium.
Its temperature T1 = 300 K and its volume V1 = 1 litre. Since the system
is in equilibrium, it can be represented by a point A = (V1 , T1 ), in the
temperature-volume thermodynamic phase plane. We are taking Volume
along the X axis and Temperature along the Y axis.
Now consider a process by which the system expands to a volume V2 = 2
litres. The process is adiabatic.
Case-1 : The process is quasi static and reversible. Let T2 be the temperature
of the system at the end of the process. Let B = (V2 , T2 ) denote
the system at the end of the process. Find T2 . The process A → B
Hint : Consider a constant-volume, quasi static reversible process that takes the
system from B ′ to B. Then take the system by a reversible adiabat from B to A and
complete the cycle. The cyclic process,
A → B ′ → B → A,
thus has three segments, one of them irreversible and the other two reversible.
\[
\eta = \frac{W}{q_1} = 1 - \frac{T_2}{T_1}
\]
\[
\int_{-\infty}^{+\infty} g(x)\, \delta(x - x_0)\, dx = g(x_0)
\]
12.2 Assignment-2
PROBLEM SET - 2 16 January 2017
slowly through a valve into the atmosphere. The leaking process is quasi
static, reversible and adiabatic2 .
(i) How many moles of gas shall be left in the chamber eventually?
(ii) What shall be the temperature of the gas left in the chamber ?
\[
\widehat{\Omega}(E, A, N) = \frac{1}{h^{2N}}\, \frac{A^N (2\pi m E)^N}{N!\, \Gamma(N + 1)}
\]
(ii) Derive an expression for the density of states of a single particle.
(iii) Show that the resulting expression is the same as the one obtained
by classical Boltzmann counting.
\[
\widehat{\Omega}(E, L, N) = \frac{1}{h^{N}}\, \frac{L^N (2\pi m E)^{N/2}}{N!\, \Gamma\left(\frac{N}{2} + 1\right)}
\]
(iii) Show that the resulting expression is the same as the one obtained
by classical Boltzmann counting.
2.5 A macroscopic system can be in any one of the two energy levels labelled
1 and 2, with probabilities p1 and p2 respectively. The degeneracy of the
energy level labelled 1 is g1 and that of the energy level labelled 2 is g2 .
Show that entropy of the system is given by,
\[
S = -k_B \sum_{i=1}^{2} p_i \ln p_i + \sum_{i=1}^{2} p_i\, [k_B \ln g_i]
\]
12.3 Assignment-3
3.1 Consider an isolated system of N non-interacting particles occupying
two states of energies −ǫ and +ǫ. The energy of the system is E. Let
\[
x = \frac{E}{N\epsilon}.
\]
⁵ Hint : The micro state of a single particle in one dimension is specified by two numbers, one for position and one for momentum. The micro state of N particles in one dimension requires an ordered string of 2N numbers.
\[
\phi(k) = \sum_{n=0}^{\infty} \frac{(-ik)^n}{n!}\, M_n
\]
where Mn denotes the n-th moment. The coefficient of (−ik)n /n! in
the power series expansion of the characteristic function gives the n-th
moment.
⁶ HINT : Let n1 and n2 denote the number of particles in the two states of energy −ǫ and +ǫ respectively. We have Ω̃ = N!/(n1! n2!); S = k_B ln Ω̃. Calculate n1 and n2 by solving : n1 + n2 = N and n2 ǫ − n1 ǫ = E.
⁷ HINT : Take partial derivatives of U with respect to S and V and get T and P respectively. Find a relation between P and T when entropy does not change.
3.5 Take a p-coin8 . Toss the coin independently until "Heads" appears for
the first time whence the game stops.
• What are the elements of the sample space ?
Let n denote the number of "Tails" in a game.
• Derive an expression for P (n) - the probability distribution function
of the integer random variable n.
The moment generating function is defined as
\[
\widetilde{P}(z) = \sum_{n=0}^{\infty} z^n\, P(n).
\]
• Show that the moment generating function of the random variable n is given by
\[
\widetilde{P}(z) = \frac{p}{1 - qz}.
\]
• From the moment generating function calculate the mean, ζ, and variance σ² of the random variable n.
• Show that P(n) and P̃(z) can be expressed as,
\[
P(n) = \frac{\zeta^n}{(1 + \zeta)^{n+1}}
\]
\[
\widetilde{P}(z) = \frac{1}{1 + \zeta(1 - z)}
\]
⁸ A p-coin is one for which the probability of "Heads" is p and that of "Tails" is q = 1 − p.
12.4 Assignment-4
PROBLEM SET - 4 30 January 2017
(ii) Show that the heat capacity of the system is given by9
\[
C = \frac{\partial \langle E \rangle}{\partial T} = 3Nk_B\, \frac{\nu + 1}{2\nu}.
\]
4.2 Consider a system of N distinguishable non-interacting particles each
of which can be in states designated as 1 and 2. Energy of state 1 is
ǫ1 = −ǫ and that of state 2 is ǫ2 = +ǫ. Let the number of particles in
states 1 and 2 be N1 and N2 respectively. We have
N = N1 + N2
and
E = N1 ǫ1 + N2 ǫ2 = (2N2 − N)ǫ.
(i) Evaluate canonical partition function Q(T, V, N). Do not forget
the degeneracy factor, Ωb which gives the number of ways we can
organize N1 particles in state 1 and N2 particles in state 2.
(ii) Let q(T, V) be the single-particle partition function. How are Q(T, V, N) and q(T, V) related ?
(iii) Calculate and sketch heat capacity CV of the system.
⁹ When ν = 1 and b = (1/2)mω², we recover the results for simple harmonic oscillators : C = 3N k_B. This corresponds to the heat capacity of a crystalline solid having N atoms/molecules organized in a lattice. We have C = 3N k_B = 3nR, or the molar specific heat is c = 3R = 5.958 ≈ 6 Calories ⇒ Dulong-Petit's law.
4.6 A rectangle is inscribed in an ellipse whose major and minor axes are of lengths a and b respectively. The major axis is along the X axis and the minor axis is along the Y axis. The centre of the ellipse is at the origin. The centre of the inscribed rectangle is also at the origin. Find the length and breadth of the rectangle with the largest possible area. Employ the method of Lagrange multipliers. If you have a circle of radius R with centre at the origin instead of the ellipse, what would be the length and breadth of the inscribed rectangle with the largest possible area ?
\[
\widehat{\Omega}(n_1, n_2, n_3, n_4, n_5, n_6) = \frac{N!}{\prod_{i=1}^{6} n_i!}
\]
\[
\sum_{i=1}^{6} n_i = N.
\]
12.5 Assignment-5
PROBLEM SET - 5 6 February 2017
For each of the above two cases calculate average energy of the system.
5.2 A zipper has N links. Each link can be in any one of two states : open or closed. The zipper can be unzipped from top to bottom. A link can be open if and only if all the links above it are also open. In other words, if we number the links as 1, 2, · · · , N from top to bottom, then link k can be open if and only if all the links from 1 to k − 1 are also open.
5.3 The expression for the average energy in statistical mechanics which
corresponds to thermodynamic internal energy is given by
\[
\langle E \rangle = U = -\frac{\partial \ln Q}{\partial \beta}
\]
\[
\langle E \rangle = U = \frac{3N k_B T}{2}
\]
consistent with equi-partition theorem which says that each degree of
freedom (each quadratic term in the Hamiltonian) carries an energy of
kB T /2.
• Case - 1 :
(iii) Obtain U(T ) in the classical limit : kB T >> ~ω. Show that we
recover the results of classical harmonic oscillator : U = NkB T ,
CV = NkB .
(iv) Derive an expression for entropy; obtain entropy in the classical
limit
• Case - 2 : "Even-harmonic oscillators" :
12.6 Assignment-6
6.1 We saw that the parameter ρΛ3 helps us to find when quantum effects
come into play toward determining the properties of an ideal gas. We
have,
\[
\rho = \frac{N}{V}; \qquad \Lambda = \frac{h}{\sqrt{2\pi m k_B T}}
\]
¹² Check that Λ has the dimension of length.
(i) Let βǫ0 = 2. The probability of finding the system in the level of
energy 0 is
(a) (1/2) cosh 2  (b) 1/cosh 2  (c) 1/(2 cosh 2)  (d) 1/(1 + 2 cosh 2)
¹³ Note that it is meaningful to call U thermodynamic energy only when the average of energy is calculated for N → ∞; only in this limit is the average energy unchanging with time. Fluctuations around the average value, defined as the standard deviation (i.e. square-root of the variance) of energy divided by the mean energy, will be of the order of 1/√N; this goes to zero only in the limit of N → ∞.
(a) −Nk_B T x²  (b) −Nk_B T ln[2 + x²/2]  (c) −Nk_B T ln[3 + x²/3]  (d) −Nk_B T ln 3
12.7 Assignment-8
8.1 We have,
\[
-\frac{S}{k_B} = \sum_i p_i \ln(p_i).
\]
\[
p_i = \frac{1}{Q} \exp(-\beta\epsilon_i),
\]
\[
-\frac{S}{k_B} = -\beta U - \ln Q,
\]
where U = ⟨E⟩ = Σ_i p_i ǫ_i. From the above deduce that F = −k_B T ln Q.
\[
\frac{\langle (E - \langle E \rangle)^3 \rangle}{\langle E \rangle^3} = \frac{8}{9N^2}
\]
8.3 Let hNi denote the average number of particles in an open system. Show
that
\[
\langle N \rangle = \lambda\, \frac{1}{Q(T, V, \mu)}\, \frac{\partial Q}{\partial \lambda},
\]
In the above λ is the fugacity. λ = exp(βµ), where µ is the chemical
potential14 : energy change per addition of a particle at constant entropy
and volume. Q(T, V, µ) is the grand canonical partition function.
8.4 Let P (N) denote the probability that there are N particles in an open
system at temperature T and chemical potential µ. Show that
\[
P(N) = \frac{\zeta^N \exp(-\zeta)}{N!},
\]
where
\[
\zeta = \langle N \rangle = \sum_{N=0}^{\infty} N\, P(N).
\]
8.7 Starting with the Gibbs free energy G(T, P, N) and the fact that G is a first order homogeneous function of N, derive the Gibbs-Duhem relation.
8.8 Employing the grand canonical ensemble formalism, show that the fluctuations in the number of particles, σ_N² = ⟨N²⟩ − ⟨N⟩², in an open system are related to the isothermal compressibility κ_T, as given below.
\[
\sigma_N^2 = \frac{\langle N \rangle^2 k_B T}{V}\, \kappa_T,
\]
where,
\[
\kappa_T = -\frac{1}{V}\left(\frac{\partial V}{\partial P}\right)_T.
\]
12.8 Assignment-9
9.1 Consider a closed system of THREE non-interacting particles at tem-
perature T occupying
Derive expressions for the canonical partition function when the par-
ticles obey,
{n1 , n2 , n3 , n4 } with energies {ǫ1 = 0, ǫ2 = ǫ, ǫ3 = ǫ, ǫ4 = ǫ}, with the constraint n1 + n2 + n3 + n4 = 3. Keep track of the degeneracy factor Ω̂ - the 'number' of micro states for each string.
HINT : Let p_{i,n} denote the probability that the i-th quantum state holds n particles. The entropy is given by
\[
S = -k_B \sum_i \sum_n p_{i,n} \ln(p_{i,n}).
\]
These are single parameter distributions with the parameter given by ζ_i = ⟨n_i⟩ = Σ_n n p_{i,n}.
HINT : Consider the distribution of n under the three statistics. It is Poisson for
Maxwell-Boltzmann statistics; geometric for bosons; and binomial for fermions. Each
is a single parameter distribution. You can conveniently take that parameter as
ζ = hnk i. Evaluate ζ employing the data given namely P (n = 0) = 0.001
9.4 Show that the grand canonical partition function for Fermions is given
by
\[
Q(T, V, \mu) = \prod_i \left[1 + \exp\{-\beta(\epsilon_i - \mu)\}\right]
\]
f (ǫ = µ + ξ) = 1 − f (ǫ = µ − ξ).
13 Selected Problems and Solutions
PROBLEM - 1
The (canonical ensemble) average of energy is 10²⁵ ǫ Joules. The particles are
identical and distinguishable. What is the value of N ?
SOLUTION
ln Q = N ln [1 + exp(−βǫ) + exp(−2βǫ)]
\[
U = \langle E \rangle = -\left(\frac{\partial \ln Q}{\partial \beta}\right)_{V,N} = N\,\frac{\epsilon \exp(-\beta\epsilon) + 2\epsilon \exp(-2\beta\epsilon)}{1 + \exp(-\beta\epsilon) + \exp(-2\beta\epsilon)}
\]
Setting βǫ = 1,
\[
U = N\,\frac{\epsilon e^{-1} + 2\epsilon e^{-2}}{1 + e^{-1} + e^{-2}} = N\epsilon\,\frac{e + 2}{e^2 + e + 1}
\]
It is given that U = 10²⁵ ǫ. Therefore,
\[
N = 10^{25} \times \frac{e^2 + e + 1}{e + 2} = 2.354 \times 10^{25}.
\]
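The last step can be checked with one line (an aside; multiplying numerator and denominator of (e⁻¹ + 2e⁻²)/(1 + e⁻¹ + e⁻²) by e² gives (e + 2)/(e² + e + 1)):

```python
from math import e

# N = 10^25 * (e^2 + e + 1) / (e + 2)
N = 1e25 * (e ** 2 + e + 1.0) / (e + 2.0)
print(N)   # ~ 2.354e+25
```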
PROBLEM - 2
A system has
(a) a non-degenerate ground state of energy zero,
(b) a non-degenerate excited state of energy 100kB Joules and
(c) a doubly-degenerate excited state of energy 300kB Joules.
The Boltzmann constant kB = 1.3807 × 10⁻²³ Joules (Kelvin)⁻¹.
(i) Calculate the relative fluctuations of energy given by η = σE /hEi at
T = 200 Kelvin
Note σE2 = hE 2 i − hEi2 . The angular brackets denote averaging over a canon-
ical ensemble.
SOLUTION
The canonical partition function is given by,
\[
Q(T) = \sum_{i=1}^{3} g_i \exp\left(-\frac{\epsilon_i}{k_B T}\right)
\]
where ǫ1 = 0; ǫ2 = 100 k_B; ǫ3 = 300 k_B; g1 = 1; g2 = 1; g3 = 2; and at T = 200 K, ǫ2/(k_B T) = 1/2 and ǫ3/(k_B T) = 3/2.
Therefore,
Q = 1 + e−1/2 + 2e−3/2 = 2.0528
The average energy is given by
\[
\langle E \rangle = \frac{1}{Q} \sum_{i=1}^{3} \epsilon_i\, g_i \exp\left(-\frac{\epsilon_i}{k_B T}\right) = \frac{k_B}{Q}\left(100\, e^{-1/2} + 2 \times 300\, e^{-3/2}\right) = 79.8897\, k_B
\]
The second moment of energy is given by
\[
\langle E^2 \rangle = \frac{1}{Q} \sum_{i=1}^{3} \epsilon_i^2\, g_i \exp\left(-\frac{\epsilon_i}{k_B T}\right) = 2.2520 \times 10^4\, k_B^2
\]
\[
\sigma = 127.0340\, k_B \qquad \frac{\sigma}{\langle E \rangle} = 1.5901
\]
PROBLEM - 3
SOLUTION
The gas in the chamber leaks out because of the pressure difference. The
pressure in the chamber decreases as the gas leaks out. The leaking continues
until the chamber is at the same pressure as the atmosphere. Since the chamber
is thermally insulated there is no transaction of energy by heat between the
chamber and the surroundings. We have,
Initial pressure P_i = 10 atm
Final pressure P_f = 1 atm
Initial amount of gas n_i = 1000 moles
Final amount of gas n_f = ?
Initial temperature T_i = 300 K
Final temperature T_f = ?
• Apply ideal gas law to the nf moles of gas left in the chamber. We have
\[
P_i^{1-\gamma}\, T_i^{\gamma} = P_f^{1-\gamma}\, T_f^{\gamma}
\]
From the above we get,
\[
\frac{T_f}{T_i} = \left(\frac{P_f}{P_i}\right)^{(\gamma - 1)/\gamma}
\]
\[
T_f = 119.4\ \text{K}
\]
¹ P V = nRT
² P V^γ = Θ.
\[
P_i V = n_i R T_i \qquad P_f V = n_f R T_f
\]
\[
\frac{P_f}{P_i} = \frac{n_f T_f}{n_i T_i}
\]
\[
\frac{n_f}{n_i} = \left(\frac{T_f}{T_i}\right)^{-1} \frac{P_f}{P_i} = \left(\frac{P_f}{P_i}\right)^{1/\gamma}
\]
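Putting in the numbers (assuming a monatomic ideal gas with γ = 5/3, consistent with Assignment 1):

```python
gamma = 5.0 / 3.0                 # assumed: monatomic ideal gas
Ti, ni = 300.0, 1000.0            # initial temperature (K) and moles
Pi, Pf = 10.0, 1.0                # pressures in atmospheres

Tf = Ti * (Pf / Pi) ** ((gamma - 1.0) / gamma)   # P^{1-gamma} T^gamma = const
nf = ni * (Pf / Pi) ** (1.0 / gamma)             # n_f/n_i = (P_f/P_i)^{1/gamma}
print(Tf, nf)   # ~ 119.4 K and ~ 251.2 moles remain in the chamber
```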
PROBLEM - 4
where,
\[
\Lambda = \frac{h}{\sqrt{2\pi m k_B T}},
\]
is the thermal wavelength or quantum wavelength. a, and b are constants;
other symbols have their usual meaning.
(iv) Does the expression for S, provide a valid fundamental relation3 ? If not,
what is wrong with S ? How can Q be corrected ?
SOLUTION
(i)
\[
\ln Q = N \ln\left(\frac{V - Nb}{\Lambda^3}\right) + \frac{\beta a N^2}{V}
\]
\[
\frac{\partial \ln Q}{\partial \beta} = -\frac{3N}{\Lambda}\frac{\partial \Lambda}{\partial \beta} + \frac{aN^2}{V} = -3N\,\frac{\partial \ln \Lambda}{\partial \beta} + \frac{aN^2}{V}
\]
We have,
\[
\Lambda = \frac{h}{\sqrt{2\pi m k_B T}} = \frac{h}{\sqrt{2\pi m}}\, \beta^{1/2}
\]
\[
\ln \Lambda = \frac{1}{2} \ln \beta + \ln\left(\frac{h}{\sqrt{2\pi m}}\right)
\]
\[
\frac{\partial \ln \Lambda}{\partial \beta} = \frac{1}{2\beta} = \frac{k_B T}{2}
\]
Therefore,
\[
\frac{\partial \ln Q}{\partial \beta} = -\frac{3N}{2}\, k_B T + \frac{aN^2}{V}
\]
\[
E = -\frac{\partial \ln Q}{\partial \beta} = \frac{3N k_B T}{2} - \frac{aN^2}{V}
\]
\[
F = -k_B T \ln Q = -N k_B T \ln\left(\frac{V - Nb}{\Lambda^3}\right) - \frac{aN^2}{V}
\]
³ except perhaps at T = 0.
\[
P = \frac{N k_B T}{V - Nb} - \frac{aN^2}{V^2}
\]
\[
S = \frac{U - F}{T} = \frac{3N k_B}{2} + N k_B \ln\left(\frac{V - Nb}{\Lambda^3}\right)
\]
\[
F = U - TS
\]
\[
dF = dU - T\,dS - S\,dT = T\,dS - P\,dV + \mu\,dN - T\,dS - S\,dT = -P\,dV + \mu\,dN - S\,dT
\]
Therefore,
\[
P = -\left(\frac{\partial F}{\partial V}\right)_{N,T} \qquad \mu = \left(\frac{\partial F}{\partial N}\right)_{V,T} \qquad S = -\left(\frac{\partial F}{\partial T}\right)_{N,V}
\]
We have,
\[
\ln Q = N \ln\left(\frac{V - Nb}{N\Lambda^3}\right) + \frac{\beta a N^2}{V} + N
\]
It is clear from the above that the expressions for energy⁵ and pressure⁶ will be the same as earlier. The free energy is given by,
\[
F = -N k_B T \ln\left(\frac{V - Nb}{N\Lambda^3}\right) - \frac{aN^2}{V} - N k_B T
\]
Entropy is given by
\[
S = \frac{5N k_B}{2} + N k_B \ln\left(\frac{V - Nb}{N\Lambda^3}\right)
\]
The above expression for entropy is extensive and hence consistent with
thermodynamics.
PROBLEM - 5
A molecule has
SOLUTION
The canonical partition function is given by,
\[
Q(T) = \sum_{i=1}^{3} g_i \exp\left(-\frac{\epsilon_i}{k_B T}\right)
\]
where ǫ1 = 0; ǫ2 = 100 k_B; ǫ3 = 300 k_B; g1 = 1; g2 = 1; g3 = 2; and at T = 200 K, ǫ2/(k_B T) = 1/2 and ǫ3/(k_B T) = 3/2.
Therefore,
Q = 1 + e−1/2 + 2e−3/2 = 2.0528
The average energy is given by
\[
\langle E \rangle = \frac{1}{Q} \sum_{i=1}^{3} \epsilon_i\, g_i \exp\left(-\frac{\epsilon_i}{k_B T}\right) = \frac{k_B}{Q}\left(100\, e^{-1/2} + 2 \times 300\, e^{-3/2}\right) = 79.8897\, k_B
\]
\[
\langle E^2 \rangle = \frac{1}{Q} \sum_{i=1}^{3} \epsilon_i^2\, g_i \exp\left(-\frac{\epsilon_i}{k_B T}\right) = 2.2520 \times 10^4\, k_B^2
\]
\[
\sigma = 127.0340\, k_B \qquad \frac{\sigma}{\langle E \rangle} = 1.5901
\]
PROBLEM - 6
\[
U = A\, \frac{n^3}{V^2}\, \exp\left(\frac{S}{nR}\right),
\]
where A and R are constants. The number of moles is n. Initially the system
is at temperature 317.48 Kelvin and pressure 2 × 105 Pascals. The system
expands reversibly until the pressure drops to 105 Pascals, by a process in
which the entropy does not change. What is the final temperature ?
SOLUTION
The (micro canonical) temperature is given by
\[
T \equiv T(S, V, N) = \left(\frac{\partial U}{\partial S}\right)_{V,n} = \frac{1}{V^2}\, \frac{An^2}{R}\, \exp(S/[nR]) = \frac{1}{V^2}\, \phi(S, n)
\]
The (micro canonical) pressure is given by
\[
P \equiv P(S, V, N) = -\left(\frac{\partial U}{\partial V}\right)_{S,n} = \frac{1}{V^3}\, 2nR\, \phi(S, n)
\]
It is given that during the process of expansion, entropy and n remain the same. Therefore,
\[
T = \frac{1}{V^2}\, \phi(S, n) \qquad P = \frac{1}{V^3}\, 2nR\, \phi(S, n)
\]
\[
\frac{T^{1/2}}{P^{1/3}} = \frac{\phi^{1/6}}{(2nR)^{1/3}} = \zeta(S, n) = \Theta
\]
where Θ is a constant during the process. Therefore,
\[
\left(\frac{T_1}{T_2}\right)^{1/2} = \left(\frac{P_1}{P_2}\right)^{1/3}
\]
It is given that T1 = 317.48 K, P1 = 2 × 10⁵ Pa, and P2 = 10⁵ Pa. Therefore,
\[
T_2 = \frac{317.48}{2^{2/3}} = 200\ \text{K}.
\]
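A quick numerical check of the final step (illustrative aside):

```python
T1, P1, P2 = 317.48, 2.0e5, 1.0e5   # kelvin and pascals

# (T1/T2)^{1/2} = (P1/P2)^{1/3}  =>  T2 = T1 * (P2/P1)^{2/3}
T2 = T1 * (P2 / P1) ** (2.0 / 3.0)
print(T2)   # ~ 200.0 K
```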
PROBLEM - 7
(ii) Show that
\[
\beta = \frac{1}{k_B T} = \frac{1}{2\epsilon} \ln\left(\frac{1 - x}{1 + x}\right)
\]
SOLUTION
(i) Let n1 and n2 be the number of particles in energy levels −ǫ and +ǫ respectively. The number of micro states associated with n1 and n2 is
\[
\widehat{\Omega}(n_1, n_2) = \frac{N!}{n_1!\, n_2!}.
\]
There are two constraints : 1. n1 + n2 = N and 2. n1 × (−ǫ) + n2 × (+ǫ) = E. Therefore n1 and n2 are fixed by these two constraints. We get, n1 + n2 = N, and −n1 + n2 = E/ǫ. Solving the two linear equations, and setting x = E/(Nǫ), we get,
\[
n_1 = \frac{N}{2}(1 - x), \qquad n_2 = \frac{N}{2}(1 + x).
\]
Entropy is given by, S = k_B ln[N!/(n1! n2!)].
\[
\begin{aligned}
\frac{S}{k_B} &= N \ln N - N - n_1 \ln n_1 + n_1 - n_2 \ln n_2 + n_2\\
&= N \ln N - n_1 \ln n_1 - n_2 \ln n_2\\
&= (n_1 + n_2) \ln N - n_1 \ln n_1 - n_2 \ln n_2\\
&= -\left[n_1 \ln \frac{n_1}{N} + n_2 \ln \frac{n_2}{N}\right]\\
&= -N\left[\frac{1 - x}{2} \ln\left(\frac{1 - x}{2}\right) + \frac{1 + x}{2} \ln\left(\frac{1 + x}{2}\right)\right]
\end{aligned}
\]
\[
S(E) = N k_B \left[\frac{1 + x}{2} \ln\left(\frac{2}{1 + x}\right) + \frac{1 - x}{2} \ln\left(\frac{2}{1 - x}\right)\right]
\]
(ii) Start with
\[
S = -N k_B \left[\frac{1 - x}{2} \ln\left(\frac{1 - x}{2}\right) + \frac{1 + x}{2} \ln\left(\frac{1 + x}{2}\right)\right]
\]
\[
\frac{1}{T} = \frac{\partial S}{\partial E} = \frac{\partial S}{\partial x} \times \frac{\partial x}{\partial E}
\]
Let
\[
f_{\pm}(x) = \frac{1 \pm x}{2} \ln\left(\frac{1 \pm x}{2}\right)
\]
\[
\frac{df_{\pm}}{dx} = \pm\frac{1}{2} \ln\left(\frac{1 \pm x}{2}\right) \pm \frac{1}{2} = \pm\frac{1}{2}\left[1 + \ln\left(\frac{1 \pm x}{2}\right)\right]
\]
From the above expression for the derivative of f_{\pm}(x) with respect to x, it is clear,
\[
\frac{\partial S}{\partial x} = \frac{N k_B}{2} \ln\left(\frac{1 - x}{1 + x}\right)
\]
\[
\frac{\partial x}{\partial E} = \frac{1}{N\epsilon}
\]
Therefore,
\[
\beta = \frac{1}{k_B T} = \frac{1}{2\epsilon} \ln\left(\frac{1 - x}{1 + x}\right).
\]
PROBLEM - 8
SOLUTION
Let nk denote the number of particles in the quantum state labelled by the
index k. Let ǫk denote the energy of the quantum state k. We set,
xk = exp [−β (ǫk − µ)]
We have,
\[
\langle n_k \rangle =
\begin{cases}
\dfrac{\displaystyle\sum_{n=0}^{1} n\, x_k^n}{\displaystyle\sum_{n=0}^{1} x_k^n} & \text{Fermi-Dirac}\\[3ex]
\dfrac{\displaystyle\sum_{n=0}^{\infty} n\, \dfrac{x_k^n}{n!}}{\displaystyle\sum_{n=0}^{\infty} \dfrac{x_k^n}{n!}} & \text{Maxwell-Boltzmann}\\[3ex]
\dfrac{\displaystyle\sum_{n=0}^{\infty} n\, x_k^n}{\displaystyle\sum_{n=0}^{\infty} x_k^n} & \text{Bose-Einstein}
\end{cases}
\]
Compare the above with the formal expression
X
hnk i = nP (n).
n
Fermi-Dirac statistics
\[
P(n) = \frac{x_k^n}{\sum_{m=0}^{1} x_k^m} = \frac{x_k^n}{1 + x_k} \quad (n = 0, 1)
\]
It is given that the probability that there are no particles in the quantum state k is 0.001. In other words, P(n = 0) = 1/(1 + x_k) = 1 − ζ = 0.001, so that ζ = ⟨n_k⟩ = 0.999.
Maxwell-Boltzmann statistics
\[
P(n) = \frac{x_k^n/n!}{\sum_{m=0}^{\infty} x_k^m/m!} = \exp(-x_k)\, \frac{x_k^n}{n!}
\]
We have
\[
\zeta = \langle n_k \rangle = x_k
\]
\[
P(n; \zeta) = \frac{\zeta^n \exp(-\zeta)}{n!}
\]
\[
P(n = 0; \zeta) = \exp(-\zeta) = 0.001
\]
Bose-Einstein statistics
\[
P(n) = \frac{x_k^n}{\sum_{m=0}^{\infty} x_k^m}
\]
Therefore
\[
x_k = \frac{\zeta}{1 + \zeta}
\]
\[
P(n; \zeta) = \frac{\zeta^n}{(1 + \zeta)^{n+1}}
\]
It is given that
\[
P(n = 0; \zeta) = \frac{1}{1 + \zeta} = 0.001 \;\Rightarrow\; 1 + \zeta = 1000 \;\Rightarrow\; \zeta = 999
\]
Thus we have
\[
\zeta = \langle n_k \rangle =
\begin{cases}
0.999 & \text{Fermi-Dirac}\\
6.908 & \text{Maxwell-Boltzmann}\\
999 & \text{Bose-Einstein}
\end{cases}
\]
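The three values of ζ follow from inverting P(n = 0) = 0.001 for each statistics; a quick check (illustrative aside):

```python
from math import log

p0 = 0.001   # given probability of zero occupancy of the state

zeta_fd = 1.0 - p0        # Fermi-Dirac      : P(0) = 1 - zeta
zeta_mb = -log(p0)        # Maxwell-Boltzmann: P(0) = exp(-zeta)
zeta_be = 1.0 / p0 - 1.0  # Bose-Einstein    : P(0) = 1/(1 + zeta)
print(zeta_fd, zeta_mb, zeta_be)   # 0.999, ~6.908, 999.0
```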
PROBLEM - 9
(iii) Obtain U(T ) in the classical limit : kB T >> ~ω0 , and compare it
with the corresponding result for a system of regular quantum harmonic
oscillators.
(v) Obtain S(T ) in the classical limit and compare it with with the corre-
sponding result for a system of regular quantum harmonic
oscillators.
SOLUTION
\[
\epsilon_n = \left(n + \frac{1}{2}\right)\hbar\omega_0 \quad : \quad n = 1, 3, 5, \cdots
\]
\[
= \left(2n + 1 + \frac{1}{2}\right)\hbar\omega_0 \quad : \quad n = 0, 1, 2, \cdots
\]
\[
= \left(2n + \frac{3}{2}\right)\hbar\omega_0 \quad : \quad n = 0, 1, 2, \cdots
\]
(i)
\[
Q_1 = \sum_{n=0}^{\infty} \exp\left[-\beta\left(2n + \frac{3}{2}\right)\hbar\omega_0\right] = \exp(-3\beta\hbar\omega_0/2) \sum_{n=0}^{\infty} \left[\exp(-2\beta\hbar\omega_0)\right]^n = \frac{\exp(-3\beta\hbar\omega_0/2)}{1 - \exp(-2\beta\hbar\omega_0)}
\]
(ii)
\[
\ln Q_N = -\frac{3N\beta\hbar\omega_0}{2} - N \ln\left[1 - \exp(-2\beta\hbar\omega_0)\right]
\]
\[
U = -\frac{\partial \ln Q_N}{\partial \beta} = \frac{3N\hbar\omega_0}{2} + \frac{2N\hbar\omega_0 \exp(-2\beta\hbar\omega_0)}{1 - \exp(-2\beta\hbar\omega_0)} = \frac{3N\hbar\omega_0}{2} + \frac{2N\hbar\omega_0}{\exp(2\beta\hbar\omega_0) - 1}
\]
In the classical limit, ħω0/(k_B T) << 1, and we can write
\[
\exp(2\beta\hbar\omega_0) \approx 1 + 2\beta\hbar\omega_0.
\]
Therefore
\[
U \sim \frac{3N\hbar\omega_0}{2} + \frac{2N\hbar\omega_0}{2\beta\hbar\omega_0} = \frac{3N\hbar\omega_0}{2} + N k_B T \sim N k_B T
\]
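The classical limit can be checked numerically; the sketch below (an aside) evaluates U/(N k_B T) for the odd oscillator and the regular oscillator, and shows both tending to unity as x = ħω0/(k_B T) → 0.

```python
from math import exp

def u_odd(x):   # U/(N k_B T) for the odd oscillator, x = hbar*omega0/(k_B T)
    return 1.5 * x + 2.0 * x / (exp(2.0 * x) - 1.0)

def u_reg(x):   # U/(N k_B T) for the regular oscillator
    return 0.5 * x + x / (exp(x) - 1.0)

for x in (1.0, 0.1, 0.001):
    print(x, u_odd(x), u_reg(x))   # both approach 1 as x -> 0
```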
For the regular quantum harmonic oscillator,
\[
Q_1 = \sum_{n=0}^{\infty} \exp\left[-\beta\left(n + \frac{1}{2}\right)\hbar\omega_0\right]
\]
\[
\ln Q_N = -\frac{\beta N\hbar\omega_0}{2} - N \ln\left[1 - \exp(-\beta\hbar\omega_0)\right]
\]
\[
U = -\frac{\partial \ln Q_N}{\partial \beta} = \frac{N\hbar\omega_0}{2} + \frac{N\hbar\omega_0}{\exp(\beta\hbar\omega_0) - 1}
\]
\[
U \underset{\hbar\omega_0 \ll k_B T}{\sim} N k_B T
\]
In the classical limit the internal energy is the same for both the Odd-
Harmonic-Oscillator and the regular quantum harmonic oscillator.
\[
F = -k_B T \ln Q_N = \frac{3N\hbar\omega_0}{2} + N k_B T \ln\left[1 - \exp(-2\beta\hbar\omega_0)\right]
\]
\[
S = \frac{U - F}{T} = \frac{2N\hbar\omega_0/T}{\exp(2\beta\hbar\omega_0) - 1} - N k_B \ln\left[1 - \exp(-2\beta\hbar\omega_0)\right]
\]
Let x = ħω0/(k_B T). Then
\[
\frac{S}{N k_B} = \frac{2x}{\exp(2x) - 1} - \ln\left[1 - \exp(-2x)\right] \underset{x \to 0}{\sim} 1 - \ln(2x) = 1 - \ln 2 - \ln x \sim -\ln x
\]
\[
S \sim -N k_B \ln\left(\frac{\hbar\omega_0}{k_B T}\right) = N k_B \ln\left(\frac{k_B T}{\hbar\omega_0}\right)
\]
N~ω0 N~ω0
U = +
2 exp(β~ω0 ) − 1
TS = U − F
N~ω0
= − NkB T ln[1 − exp(−β~ω0 )]
exp(β~ω0) − 1
227
~ω0
Let x = . Then,
kB T
S x
= − ln[1 − exp(−x)]
NkB exp(x) − 1
∼
x→0 1 − ln(x) ∼ − ln x = ln(1/x)
!
kB T
S ∼ NkB ln
~ω0
In the classical limit the entropy is the same for both the Odd-Harmonic-
Oscillator and the regular harmonic oscillator.
PROBLEM - 10
Show that the entropy of ideal quantum particles can be expressed as
(i) \( S_B = -k_B \sum_i \left[\langle n_i \rangle \ln\langle n_i \rangle - \langle n_i + 1 \rangle \ln\langle n_i + 1 \rangle\right] \) for Bosons
(ii) \( S_F = -k_B \sum_i \left[\langle n_i \rangle \ln\langle n_i \rangle + \langle 1 - n_i \rangle \ln\langle 1 - n_i \rangle\right] \) for Fermions
SOLUTION
Let p_{i,n} denote the probability that the i-th quantum state contains n particles. Formally the entropy is given by
\[
S = -k_B \sum_i \sum_n p_{i,n} \ln(p_{i,n}).
\]
(i) For Bosons, we have n = 0, 1, · · · and
\[
p_{i,n} = \frac{\zeta_i^n}{(1 + \zeta_i)^{n+1}},
\]
where ζ_i = ⟨n_i⟩. Therefore,
\[
\begin{aligned}
S &= -k_B \sum_i \sum_{n=0}^{\infty} p_{i,n}\left[n \ln \zeta_i - (n + 1) \ln(\zeta_i + 1)\right]\\
&= -k_B \sum_i \ln \zeta_i \sum_n n\, p_{i,n} + k_B \sum_i \ln(\zeta_i + 1) \sum_n (n + 1)\, p_{i,n}\\
&= -k_B \sum_i \left[\zeta_i \ln \zeta_i - (\zeta_i + 1) \ln(\zeta_i + 1)\right]\\
&= -k_B \sum_i \left[\langle n_i \rangle \ln\langle n_i \rangle - \langle n_i + 1 \rangle \ln\langle n_i + 1 \rangle\right]
\end{aligned}
\]
(ii) For Fermions we have n = 0 or 1 for all i. Also p_{i,0} = 1 − ζ_i and p_{i,1} = ζ_i, where ζ_i = ⟨n_i⟩. Therefore,
\[
S = -k_B \sum_i \left[\zeta_i \ln \zeta_i + (1 - \zeta_i) \ln(1 - \zeta_i)\right]
\]
PROBLEM - 11
Consider a closed system of non interacting particles occupying ground state
(of energy zero) and an excited state (of energy ǫ = kB ln(2) Joules). The two
energy levels are non-degenerate. What is the temperature below which more than two-thirds of the total number of particles shall be in the ground state⁷ ?
(k_B = 1.381 × 10⁻²³ Joules (Kelvin)⁻¹)
SOLUTION
The single-particle canonical partition function is given by Q = 1+exp(−βǫ).
Let P0 be the probability of occupation of the ground state and Pǫ be that of
the excited state. We have, for a canonical ensemble,
\[
P_0 = \frac{1}{1 + \exp(-\beta\epsilon)}; \qquad P_\epsilon = \frac{\exp(-\beta\epsilon)}{1 + \exp(-\beta\epsilon)}
\]
⁷ Hint : Employ canonical ensemble formalism : the occupation probability of an energy level is proportional to its Boltzmann weight.
We need the temperature below which more than two-thirds of the particles
occupy the ground state.
\[
P_0 > \frac{2}{3} \;\Rightarrow\; \frac{1}{1 + \exp(-\beta\epsilon)} > \frac{2}{3} \;\Rightarrow\; 1 + \exp(-\beta\epsilon) < \frac{3}{2}
\]
\[
\exp(-\beta\epsilon) < \frac{1}{2} \;\Rightarrow\; -\beta\epsilon < -\ln 2 \;\Rightarrow\; \beta\epsilon > \ln 2
\]
\[
\beta > \frac{\ln 2}{\epsilon} \;\Rightarrow\; k_B T < \frac{\epsilon}{\ln 2} = \frac{k_B \ln 2}{\ln 2} = k_B
\]
\[
T < 1\ \text{Kelvin}
\]
For T < 1 Kelvin, more than two-thirds of the particles shall be in the
ground state.
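A quick numerical confirmation of the threshold (illustrative aside; ǫ = k_B ln 2, so that βǫ = ln 2/T with T in kelvin):

```python
from math import exp, log

def p0(T):
    # ground state occupation probability; epsilon = k_B ln 2 Joules
    beta_eps = log(2.0) / T   # T in kelvin
    return 1.0 / (1.0 + exp(-beta_eps))

print(p0(1.0))           # exactly 2/3 at the threshold T = 1 K
print(p0(0.5), p0(2.0))  # above 2/3 below 1 K, below 2/3 above 1 K
```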
PROBLEM - 12
Consider the distribution
\[
\widehat{\Omega}(n_1, n_2, n_3, n_4, n_5, n_6) = \frac{N!}{\prod_{i=1}^{6} n_i!}
\]
SOLUTION
\[
\ln n_i = -\lambda \;\Rightarrow\; n_i = \exp(-\lambda)
\]
\[
\sum_{i=1}^{6} n_i = N \;\Rightarrow\; \exp(-\lambda) = \frac{N}{6}
\]
Thus we find n_i^* = N/6 ∀ i = 1, 6.
(ii) Let us now determine the average value of ni . The probability that the
throw of a die will result in the number i is p = 1/6. The probability
it will not result in i is q = 1 − p = 5/6. Let n denote the number of
times i occurs in a throw of N independent dice. The probability of the
random variable n is given by
\[
P(n; N) = \frac{N!}{n!\,(N - n)!}\, p^n q^{N - n}
\]
PROBLEM - 13
The internal energy U of a single component thermodynamic system expressed
as a function of entropy S, volume V , and number of particles N is of the form
U(S, V, N) = a S 4/3 V α where a and α are constants.
(i) What is the value of α ?
(ii) What is the temperature of the system ?
(iii) What is the pressure of the system ?
(iv) The pressure of the system obeys a relation given by P = ωU/V , where
ω is a constant. Find the value of ω.
(v) if the energy of the system is held constant, the pressure and volume are
related by P V γ = constant. Find γ.
SOLUTION
(i)
U(S, V ) = a S 4/3 V α
U, S, and V are extensive properties of a thermodynamic system. U
is an extensive function of S and V . In other words U is a first order
homogeneous function of S and V .
U(λS, λV ) = λ U(S, V )
232 13. SELECTED PROBLEMS AND SOLUTIONS
We have,
λα + (4/3) = λ
4
α+ = 1
3
1
α = −
3
\[
T(S, V) = \left(\frac{\partial U}{\partial S}\right)_V = \frac{4a}{3}\left(\frac{S}{V}\right)^{1/3}
\]
\[
P(S, V) = -\left(\frac{\partial U}{\partial V}\right)_S = \frac{a}{3}\left(\frac{S}{V}\right)^{4/3}
\]
(iv) We have,
\[
U = \frac{aS^{4/3}}{V^{1/3}} \;\Rightarrow\; aS^{4/3} = U V^{1/3}
\]
\[
P = \frac{aS^{4/3}}{3V^{4/3}} = \frac{U V^{1/3}}{3V^{4/3}} = \frac{1}{3}\frac{U}{V}
\]
Therefore ω = 1/3.
(v) We have
\[
U = \frac{aS^{4/3}}{V^{1/3}}; \qquad P = \frac{aS^{4/3}}{3V^{4/3}}; \qquad \frac{U}{P} = 3V
\]
With U held constant, P V = U/3 is a constant. Therefore γ = 1.
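The thermodynamic derivatives can be confirmed by finite differences; in the sketch below (illustrative; the constant a and the point (S, V) are arbitrary choices) the ratio PV/U indeed comes out 1/3.

```python
a = 2.0   # arbitrary positive constant, for illustration only

def U(S, V):
    return a * S ** (4.0 / 3.0) * V ** (-1.0 / 3.0)

S, V, h = 3.0, 5.0, 1.0e-6
T = (U(S + h, V) - U(S - h, V)) / (2.0 * h)    # T = (dU/dS)_V
P = -(U(S, V + h) - U(S, V - h)) / (2.0 * h)   # P = -(dU/dV)_S

omega = P * V / U(S, V)
print(omega)   # ~ 1/3, independent of a, S, V
```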
PROBLEM - 14
Consider a closed system of two non-interacting particles at temperature T ,
occupying the following three energy levels. The ground state is of energy
zero; the first excited state is of energy ǫ and the second excited state is of
energy 2ǫ. The ground state is doubly degenerate. The two excited states are
non-degenerate. Determine
(i) the canonical partition function of the system and
SOLUTION
Let n1 and n2 denote the number of particles in the first and second quantum states of the zero-energy level; let n3 and n4 denote the number of particles in the first and second excited states, of energy ǫ and 2ǫ respectively. Thus we have several possible strings of numbers {n1, n2, n3, n4}, with $\sum_{i=1}^{4} n_i = N$, where N is the total number of particles in the system. For the present problem N = 2. Let $\hat{\Omega}(n_1, n_2, n_3, n_4; N)$ denote the number of microstates of the system of N non-interacting particles occupying the energy levels. The energy of a microstate belonging to the string {n1, n2, n3, n4} is
$$E(n_1, n_2, n_3, n_4) = n_3\,\epsilon + n_4\,(2\epsilon) = (n_3 + 2 n_4)\,\epsilon$$
The canonical partition function of the system is formally given by
$$Q = \sum_{\{n_1, n_2, n_3, n_4\}} \hat{\Omega}\; \exp\left[-\beta\epsilon\,(n_3 + 2 n_4)\right]$$
We also have,
$$\hat{\Omega}(n_1, n_2, n_3, n_4; N) = \begin{cases} \dfrac{N!}{\prod_i n_i!} & \text{classical distinguishable} \\[2mm] \dfrac{1}{\prod_i n_i!} & \text{Maxwell-Boltzmann} \\[2mm] 1^{\star} & \text{Fermi-Dirac} \\[2mm] 1 & \text{Bose-Einstein} \end{cases}$$
$$\star \;\Rightarrow\; n_i \le 1 \;\; \forall\, i \quad \text{(Pauli exclusion)}$$
For the problem with N = 2, we can explicitly enumerate all possible strings,
see the table.
The canonical partition function for the four statistics can be easily written
from the table. For convenience we denote ω = exp(−βǫ). We have,
• Classical Distinguishable Particles :
$$Q_{CL} = 1 + 1 + \omega^2 + \omega^4 + 2\omega^2 + 2\omega + 2 + 2\omega^2 + 2\omega + 2\omega^3 = 4 + 4\omega + 5\omega^2 + 2\omega^3 + \omega^4$$
Alternately, the single-particle partition function is
$$Q_1 = 1 + 1 + \omega + \omega^2 = 2 + \omega + \omega^2$$
$$Q = [Q_1]^2 = (2 + \omega + \omega^2)^2 = 4 + 4\omega + 5\omega^2 + 2\omega^3 + \omega^4$$
Table : Occupation strings for N = 2, with the number of microstates $\hat{\Omega}$ under each statistics and the energy E

n1  n2  n3  n4   Ω̂_C   Ω̂_MB   Ω̂_FD   Ω̂_BE    E
 2   0   0   0    1     1/2     ⋆      1      0
 0   2   0   0    1     1/2     ⋆      1      0
 0   0   2   0    1     1/2     ⋆      1     2ǫ
 0   0   0   2    1     1/2     ⋆      1     4ǫ
 1   0   0   1    2      1      1      1     2ǫ
 1   0   1   0    2      1      1      1      ǫ
 1   1   0   0    2      1      1      1      0
 0   1   0   1    2      1      1      1     2ǫ
 0   1   1   0    2      1      1      1      ǫ
 0   0   1   1    2      1      1      1     3ǫ

(⋆ : string forbidden by the Pauli exclusion principle)
• Maxwell-Boltzmann Statistics
For every string, $\hat{\Omega}_{MB}$ is exactly one-half of $\hat{\Omega}_{C}$. Also, we have
$$Q_{MB} = \frac{[Q_1]^2}{2!}.$$
Hence,
$$Q_{MB} = 2 + 2\omega + \frac{5}{2}\omega^2 + \omega^3 + \frac{1}{2}\omega^4$$
• Fermi-Dirac Statistics
From the table we can read off
$$Q_{FD} = \omega^2 + \omega + 1 + \omega^2 + \omega + \omega^3 = 1 + 2\omega + 2\omega^2 + \omega^3$$
• Bose-Einstein Statistics
From the table we can read off
$$Q_{BE} = 1 + 1 + \omega^2 + \omega^4 + \omega^2 + \omega + 1 + \omega^2 + \omega + \omega^3 = 3 + 2\omega + 3\omega^2 + \omega^3 + \omega^4$$
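The table and the four partition functions can be generated programmatically. A sketch (not from the text) that enumerates all occupation strings for N = 2 and collects Q as coefficient lists in $\omega = \exp(-\beta\epsilon)$:

```python
from itertools import product
from math import factorial

def Q_coeffs(weight):
    """Coefficients [c0..c4] of Q = sum_k c_k w^k, w = exp(-beta*eps),
    summing weight(string) over all strings with n1+n2+n3+n4 = 2."""
    c = [0.0] * 5
    for ns in product(range(3), repeat=4):
        if sum(ns) != 2:
            continue
        W = weight(ns)
        if W is not None:                 # None marks a forbidden string
            c[ns[2] + 2 * ns[3]] += W     # energy = (n3 + 2*n4)*eps
    return c

classical = lambda ns: factorial(2) / (factorial(ns[0]) * factorial(ns[1])
                                       * factorial(ns[2]) * factorial(ns[3]))
mb = lambda ns: classical(ns) / factorial(2)          # divide by N!
fd = lambda ns: 1.0 if max(ns) <= 1 else None         # Pauli exclusion
be = lambda ns: 1.0

assert Q_coeffs(classical) == [4, 4, 5, 2, 1]         # Q_CL
assert Q_coeffs(mb) == [2, 2, 2.5, 1, 0.5]            # Q_MB = Q_CL / 2!
assert Q_coeffs(fd) == [1, 2, 2, 1, 0]                # Q_FD
assert Q_coeffs(be) == [3, 2, 3, 1, 1]                # Q_BE
```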
PROBLEM - 15
A closed system consists of N classical non-interacting an-harmonic oscillators
in equilibrium at temperature T. The energy of a single oscillator is given by
$$E = \frac{p^2}{2m} + b\, q^{2n}$$
where $n \ge 2$ is an integer and b is a constant. Show that the heat capacity is
$$C = \frac{\partial \langle E \rangle}{\partial T} = N k_B\, \frac{n+1}{2n}.$$
SOLUTION
(i)
$$Q_1 = \frac{1}{h} \int_{-\infty}^{+\infty} dp\, \exp\left(-\beta \frac{p^2}{2m}\right) \int_{-\infty}^{+\infty} dq\, \exp\left(-\beta b q^{2n}\right)$$
$$I_p = \int_{-\infty}^{+\infty} dp\, \exp\left(-\beta \frac{p^2}{2m}\right) = \sqrt{2\pi m k_B T}$$
$$I_q = \int_{-\infty}^{+\infty} dq\, \exp\left(-\beta b q^{2n}\right) = 2 \int_{0}^{+\infty} dq\, \exp\left(-\beta b q^{2n}\right)$$
Substituting $\beta b q^{2n} = x$,
$$I_q = \frac{1}{n}\, (\beta b)^{-\frac{1}{2n}} \int_{0}^{\infty} x^{\left(\frac{1}{2n} - 1\right)} \exp(-x)\, dx = \frac{1}{n}\, (\beta b)^{-\frac{1}{2n}}\, \Gamma\!\left(\frac{1}{2n}\right) = \frac{1}{n}\, b^{-\frac{1}{2n}}\, \Gamma\!\left(\frac{1}{2n}\right)\, \beta^{-\frac{1}{2n}}$$
Therefore we have,
$$Q_1 = (\cdot)\, \beta^{-\frac{n+1}{2n}}, \qquad Q_N = [Q_1]^N, \qquad \ln Q_N = N \ln(\cdot) - N\, \frac{n+1}{2n}\, \ln\beta,$$
where $(\cdot)$ collects the temperature-independent factors. Hence
$$U = -\frac{\partial \ln Q_N}{\partial \beta} = N k_B T\, \frac{n+1}{2n}$$
$$C = \frac{\partial U}{\partial T} = N k_B\, \frac{n+1}{2n}$$
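A numerical sketch of this result (with the arbitrary choices m = b = kB = 1 and n = 2, so that Planck's h drops out of the derivative): integrate the Boltzmann weight on a grid and differentiate $\ln Q_1$ with respect to $\beta$; the energy per oscillator should be $(n+1)/(2n)\, k_B T = 3T/4$.

```python
import math

def lnQ1(beta, n, m=1.0, b=1.0, npts=80001, qmax=4.0):
    """ln of the single-oscillator phase-space integral (Planck's h
    omitted; it only shifts lnQ1 by a constant)."""
    Ip = math.sqrt(2 * math.pi * m / beta)
    dq = 2 * qmax / (npts - 1)
    Iq = sum(math.exp(-beta * b * (-qmax + i * dq)**(2 * n))
             for i in range(npts)) * dq
    return math.log(Ip * Iq)

n, beta, h = 2, 1.0, 1e-4
U1 = -(lnQ1(beta + h, n) - lnQ1(beta - h, n)) / (2 * h)  # energy per oscillator

assert abs(U1 - (n + 1) / (2 * n) / beta) < 1e-3          # 3/4 at kB*T = 1
```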
PROBLEM - 16
Let i = 1, 2, · · · denote the states, with corresponding energies ǫi. Consider a system, called the q-system, occupying these states with probabilities qi. The average energy of the q-system is given by
$$U = \sum_i q_i\, \epsilon_i.$$
Consider a canonical system which occupies the same states with probabilities
$$p_i = \frac{1}{Q} \exp(-\beta\epsilon_i), \qquad Q = \sum_i \exp(-\beta\epsilon_i),$$
such that
$$\sum_i p_i\, \epsilon_i = \sum_i q_i\, \epsilon_i = U.$$
In other words, the average energy of the q-system is the same as the average energy of the canonical system. The entropy of the canonical system is given by
$$S = -k_B \sum_i p_i \ln p_i.$$
Show that
$$S_q - S = -k_B \sum_i q_i \ln(q_i/p_i),$$
and hence that $S_q \le S$.
SOLUTION
(i)
$$\begin{aligned}
S_q - S &= -k_B \sum_i q_i \ln q_i - S &&(13.1)\\
&= -k_B \sum_i q_i \ln q_i - \frac{U}{T} + \frac{U}{T} - S &&(13.2)\\
&= -k_B \sum_i q_i \ln q_i - \frac{U}{T} + \frac{U - TS}{T} &&(13.3)\\
&= -k_B \sum_i q_i \ln q_i - \frac{U}{T} + \frac{F}{T} &&(13.4)\\
&= -k_B \sum_i q_i \ln q_i - k_B \beta U - k_B \ln Q &&(13.5)\\
&= -k_B \sum_i q_i \ln q_i - k_B \beta \sum_i q_i \epsilon_i - \sum_i q_i\, k_B \ln Q &&(13.6)\\
&= -k_B \sum_i q_i \ln q_i + k_B \sum_i q_i \left[-\beta \epsilon_i - \ln Q\right] &&(13.7)\\
&= -k_B \sum_i q_i \ln q_i + k_B \sum_i q_i \ln p_i &&(13.8)\\
&= -k_B \sum_i q_i \ln\!\left(\frac{q_i}{p_i}\right) &&(13.9)
\end{aligned}$$
Using $\ln x \le x - 1$ with $x = p_i/q_i$,
$$S_q - S = k_B \sum_i q_i \ln\!\left(\frac{p_i}{q_i}\right) \le k_B \sum_i q_i \left(\frac{p_i}{q_i} - 1\right) = k_B \left(\sum_i p_i - \sum_i q_i\right) = 0.$$
Therefore $S_q \le S$. Q.E.D
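The inequality used in the last two steps is the non-negativity of the relative entropy: $-\sum_i q_i \ln(q_i/p_i) \le 0$ for any two distributions on the same states. A random numerical check (a sketch, not from the text):

```python
import math
import random

def normalised(xs):
    s = sum(xs)
    return [x / s for x in xs]

random.seed(1)
for _ in range(100):
    # two arbitrary normalised distributions on 8 states
    q = normalised([random.random() for _ in range(8)])
    p = normalised([random.random() for _ in range(8)])
    # S_q - S = -kB * sum q_i ln(q_i/p_i) <= 0, since kB > 0
    kl = sum(qi * math.log(qi / pi) for qi, pi in zip(q, p))
    assert kl >= -1e-12      # equality only when q_i = p_i for all i
```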
PROBLEM - 17
A macroscopic system can be in any one of the two energy states 1 and 2, with
probabilities p1 and p2 respectively. The degeneracy of the state 1 is g1 and
that of state 2 is g2 . Show that entropy of the system is given by
$$S = -k_B \sum_{i=1}^{2} p_i \ln p_i + \sum_{i=1}^{2} p_i \left[k_B \ln g_i\right]$$
SOLUTION
We start with the Boltzmann-Gibbs-Shannon entropy
$$S = -k_B \sum_i p_i \ln p_i$$
We have g1 microstates each with probability p1/g1 and g2 microstates each with probability p2/g2. Hence
$$S = -k_B \sum_{i=1}^{g_1} \frac{p_1}{g_1} \ln\!\left(\frac{p_1}{g_1}\right) - k_B \sum_{i=1}^{g_2} \frac{p_2}{g_2} \ln\!\left(\frac{p_2}{g_2}\right) = -k_B\, p_1 \ln\!\left(\frac{p_1}{g_1}\right) - k_B\, p_2 \ln\!\left(\frac{p_2}{g_2}\right)$$
$$= -k_B \sum_{i=1}^{2} p_i \ln p_i + \sum_{i=1}^{2} p_i \left[k_B \ln g_i\right]$$
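A numerical check of this identity (a sketch; the values p1 = 0.3, g1 = 4, g2 = 2 and kB = 1 are arbitrary assumptions):

```python
import math

p1, p2, g1, g2 = 0.3, 0.7, 4, 2     # illustrative values, kB = 1

# entropy over the expanded list of g1 + g2 equally weighted microstates
micro = [p1 / g1] * g1 + [p2 / g2] * g2
S_micro = -sum(p * math.log(p) for p in micro)

# compact two-level form derived above
S_levels = (-(p1 * math.log(p1) + p2 * math.log(p2))
            + p1 * math.log(g1) + p2 * math.log(g2))

assert abs(S_micro - S_levels) < 1e-12
```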
PROBLEM - 18
Consider a closed system of N non-interacting, identical, distinguishable, quan-
tum oscillators. The system is in thermal equilibrium at temperature T . The
energy levels of each oscillator are given by,
$$\epsilon_n = \left(n + \frac{1}{2}\right)\hbar\omega_0, \qquad n = 0, 2, 4, \cdots$$
(iii) Obtain U(T ) in the classical limit : kB T >> ~ω0 , and compare it
with the corresponding result for a system of regular quantum harmonic
oscillators.
(v) Obtain S(T) in the classical limit and compare it with the corresponding result for a system of regular quantum harmonic oscillators.
SOLUTION
$$\epsilon_n = \left(n + \frac{1}{2}\right)\hbar\omega_0 \;:\; n = 0, 2, 4, \cdots \;=\; \left(2n + \frac{1}{2}\right)\hbar\omega_0 \;:\; n = 0, 1, 2, \cdots$$
(i)
$$Q_1 = \sum_{n=0}^{\infty} \exp\left[-\beta\left(2n + \frac{1}{2}\right)\hbar\omega_0\right] = \exp(-\beta\hbar\omega_0/2) \sum_{n=0}^{\infty} \left[\exp(-2\beta\hbar\omega_0)\right]^{n} = \frac{\exp(-\beta\hbar\omega_0/2)}{1 - \exp(-2\beta\hbar\omega_0)}$$
(ii)
$$\ln Q_N = -\frac{N\beta\hbar\omega_0}{2} - N \ln\left[1 - \exp(-2\beta\hbar\omega_0)\right]$$
$$U = -\frac{\partial \ln Q_N}{\partial \beta} = \frac{N\hbar\omega_0}{2} + \frac{2N\hbar\omega_0 \exp(-2\beta\hbar\omega_0)}{1 - \exp(-2\beta\hbar\omega_0)} = \frac{N\hbar\omega_0}{2} + \frac{2N\hbar\omega_0}{\exp(2\beta\hbar\omega_0) - 1}$$
(iii) In the classical limit,
$$\frac{\hbar\omega_0}{k_B T} \ll 1, \qquad \exp(2\beta\hbar\omega_0) \approx 1 + 2\beta\hbar\omega_0.$$
Therefore
$$U \sim \frac{N\hbar\omega_0}{2} + \frac{2N\hbar\omega_0}{2\beta\hbar\omega_0} = \frac{N\hbar\omega_0}{2} + N k_B T \sim N k_B T$$
For the regular quantum harmonic oscillator,
$$\ln Q_N = -\frac{\beta N\hbar\omega_0}{2} - N \ln\left[1 - \exp(-\beta\hbar\omega_0)\right]$$
$$U = -\frac{\partial \ln Q_N}{\partial \beta} = \frac{N\hbar\omega_0}{2} + \frac{N\hbar\omega_0}{\exp(\beta\hbar\omega_0) - 1} \;\underset{\hbar\omega_0 \ll k_B T}{\sim}\; N k_B T$$
In the classical limit the internal energy is the same for both the Even-Harmonic-Oscillator and the regular quantum harmonic oscillator.
(iv) The free energy of the Even-Harmonic-Oscillator is given by
$$F = -k_B T \ln Q_N = \frac{N\hbar\omega_0}{2} + N k_B T \ln\left[1 - \exp(-2\beta\hbar\omega_0)\right]$$
Therefore the entropy is given by
$$S = \frac{U - F}{T} = \frac{2N\hbar\omega_0/T}{\exp(2\beta\hbar\omega_0) - 1} - N k_B \ln\left[1 - \exp(-2\beta\hbar\omega_0)\right]$$
Let $x = \hbar\omega_0/(k_B T)$. Then
$$\frac{S}{N k_B} = \frac{2x}{\exp(2x) - 1} - \ln\left[1 - \exp(-2x)\right] \;\underset{x \to 0}{\sim}\; 1 - \ln(2x) = 1 - \ln 2 - \ln x \approx -\ln x = -\ln\!\left[\frac{\hbar\omega_0}{k_B T}\right]$$
$$S \sim -N k_B \ln\!\left(\frac{\hbar\omega_0}{k_B T}\right) = N k_B \ln\!\left(\frac{k_B T}{\hbar\omega_0}\right)$$
For the regular quantum harmonic oscillator,
$$U = \frac{N\hbar\omega_0}{2} + \frac{N\hbar\omega_0}{\exp(\beta\hbar\omega_0) - 1}$$
$$TS = U - F = \frac{N\hbar\omega_0}{\exp(\beta\hbar\omega_0) - 1} - N k_B T \ln\left[1 - \exp(-\beta\hbar\omega_0)\right]$$
Let $x = \dfrac{\hbar\omega_0}{k_B T}$. Then,
$$\frac{S}{N k_B} = \frac{x}{\exp(x) - 1} - \ln\left[1 - \exp(-x)\right] \;\underset{x \to 0}{\sim}\; 1 - \ln x \approx -\ln x = \ln(1/x)$$
$$S \sim N k_B \ln\!\left(\frac{k_B T}{\hbar\omega_0}\right)$$
In the classical limit the entropy is the same for both the Even-Harmonic-
Oscillator and the regular harmonic oscillator.
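A numerical comparison (kB = ħω0 = 1 assumed): the internal energies per oscillator from the two partition functions both approach kBT at high temperature, and both reduce to the zero-point energy at low temperature:

```python
import math

def U_even(beta):      # levels (2n + 1/2), n = 0, 1, 2, ...
    return 0.5 + 2.0 / (math.exp(2.0 * beta) - 1.0)

def U_regular(beta):   # levels (n + 1/2), n = 0, 1, 2, ...
    return 0.5 + 1.0 / (math.exp(beta) - 1.0)

beta = 0.001           # kB*T = 1000 >> hbar*omega0 : classical limit
assert abs(beta * U_even(beta) - 1.0) < 0.01      # U ~ kB*T
assert abs(beta * U_regular(beta) - 1.0) < 0.01   # U ~ kB*T
# at low temperature both reduce to the zero-point energy 1/2
assert abs(U_even(20.0) - 0.5) < 1e-9
assert abs(U_regular(20.0) - 0.5) < 1e-8
```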
PROBLEM - 19
A zip has N links. Each link can be in any one of the two states : (a) a closed
state with zero energy and (b) an open state with energy ǫ > 0. The zip can
be un-zipped from top to bottom. A link can be open only if all the links above it
are open. In other words, if we number the links as 1, 2, · · · N from top to
bottom, then link k can be open if and only if all the links from 1 to k − 1 are
also open. Let 0 ≤ n ≤ N be a random variable denoting the number of open
links.
Derive an expression for the canonical partition function of the zip system.
Derive an expression for hni, where the angular brackets denote averaging over
a canonical ensemble of zip systems. Show that at low temperatures, hni is
independent of N.
SOLUTION
The number of open links n takes values 0, 1, · · · , N, and a microstate with n open links has energy nǫ. Hence the partition function is a geometric sum:
$$Q = \sum_{n=0}^{N} \exp(-n\beta\epsilon) = \frac{1 - x^{N+1}}{1 - x}, \qquad x = \exp(-\beta\epsilon).$$
Formally we have,
$$\langle E \rangle = \langle n \rangle\, \epsilon = -\frac{\partial \ln Q}{\partial \beta} = -\frac{\partial \ln Q}{\partial x} \times \frac{\partial x}{\partial \beta}$$
$$\langle n \rangle = \frac{1}{x^{-1} - 1} - \frac{N + 1}{x^{-(N+1)} - 1} = \frac{1}{\exp(\beta\epsilon) - 1} - \frac{N + 1}{\exp[\beta(N+1)\epsilon] - 1}$$
For large N, we have
$$\langle n \rangle = \frac{1}{\exp(\beta\epsilon) - 1} - (N + 1)\, \exp[-\beta(N+1)\epsilon].$$
hni depends on N.
However, at low temperatures (β → ∞) the second term goes to zero and, since 1/[exp(βǫ) − 1] ≈ exp(−βǫ), we find ⟨n⟩ ≈ exp(−βǫ), independent of N.
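The closed form can be checked against direct summation over microstates; a sketch (ǫ = 1 and N = 50 are arbitrary choices, not from the text):

```python
import math

def n_avg(beta, eps, N):
    """<n> by direct summation over the N + 1 zip microstates."""
    x = math.exp(-beta * eps)
    Z = sum(x**n for n in range(N + 1))
    return sum(n * x**n for n in range(N + 1)) / Z

def n_formula(beta, eps, N):
    """Closed form derived above."""
    return (1.0 / (math.exp(beta * eps) - 1.0)
            - (N + 1) / (math.exp(beta * (N + 1) * eps) - 1.0))

assert abs(n_avg(2.0, 1.0, 50) - n_formula(2.0, 1.0, 50)) < 1e-12
# low temperature: <n> ~ exp(-beta*eps), independent of N
assert abs(n_avg(5.0, 1.0, 50) - math.exp(-5.0)) < 1e-4
assert abs(n_avg(5.0, 1.0, 500) - math.exp(-5.0)) < 1e-4
```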
PROBLEM - 20
Consider a closed system of THREE non-interacting particles at temperature
T occupying
• a non-degenerate ground state of energy zero, and
• a triply degenerate excited state of energy ǫ (quantum states 1, 2, and 3).
Enumerate the occupation strings and obtain the canonical partition function for classical distinguishable particles, and for particles obeying Maxwell-Boltzmann, Bose-Einstein, and Fermi-Dirac statistics.
Table 1 : Occupation numbers for classical distinguishable particles

n0  n1  n2  n3   Ω̂    E
 3   0   0   0    1    0
 0   3   0   0    1   3ǫ
 0   0   3   0    1   3ǫ
 0   0   0   3    1   3ǫ
 2   1   0   0    3    ǫ
 2   0   1   0    3    ǫ
 2   0   0   1    3    ǫ
 0   2   1   0    3   3ǫ
 0   2   0   1    3   3ǫ
 0   0   2   1    3   3ǫ
 1   2   0   0    3   2ǫ
 1   0   2   0    3   2ǫ
 1   0   0   2    3   2ǫ
 0   1   2   0    3   3ǫ
 0   1   0   2    3   3ǫ
 0   0   1   2    3   3ǫ
 1   1   1   0    6   2ǫ
 1   1   0   1    6   2ǫ
 1   0   1   1    6   2ǫ
 0   1   1   1    6   3ǫ
Table 1 lists the strings and the energy of a microstate associated with each string. The partition function is given by
$$Q = 1 + 9\exp(-\beta\epsilon) + 27\exp(-2\beta\epsilon) + 27\exp(-3\beta\epsilon).$$
Alternately, the single-particle partition function is given by
$$Q_1 = 1 + 3\exp(-\beta\epsilon), \qquad Q = Q_1^3 = \left[1 + 3\exp(-\beta\epsilon)\right]^3.$$
For particles obeying Maxwell-Boltzmann statistics,
$$\hat{\Omega} = \frac{1}{n_0!\, n_1!\, n_2!\, n_3!}.$$
We get,
Table 2 : Occupation numbers for particles obeying Maxwell-Boltzmann statistics
n0  n1  n2  n3   Ω̂     E
 3   0   0   0   1/6    0
 0   3   0   0   1/6   3ǫ
 0   0   3   0   1/6   3ǫ
 0   0   0   3   1/6   3ǫ
 2   1   0   0   1/2    ǫ
 2   0   1   0   1/2    ǫ
 2   0   0   1   1/2    ǫ
 0   2   1   0   1/2   3ǫ
 0   2   0   1   1/2   3ǫ
 0   0   2   1   1/2   3ǫ
 1   2   0   0   1/2   2ǫ
 1   0   2   0   1/2   2ǫ
 1   0   0   2   1/2   2ǫ
 0   1   2   0   1/2   3ǫ
 0   1   0   2   1/2   3ǫ
 0   0   1   2   1/2   3ǫ
 1   1   1   0    1    2ǫ
 1   1   0   1    1    2ǫ
 1   0   1   1    1    2ǫ
 0   1   1   1    1    3ǫ
$$Q_{MB} = \frac{Q_1^3}{3!} = \frac{1}{6} + \frac{3}{2}\exp(-\beta\epsilon) + \frac{9}{2}\exp(-2\beta\epsilon) + \frac{9}{2}\exp(-3\beta\epsilon)$$
For Bosons, $\hat{\Omega}(n_0, n_1, n_2, n_3) = 1$ for all allowed strings. We have
Table 3 : Occupation numbers for Bosons

n0  n1  n2  n3   Ω̂    E
 3   0   0   0    1    0
 0   3   0   0    1   3ǫ
 0   0   3   0    1   3ǫ
 0   0   0   3    1   3ǫ
 2   1   0   0    1    ǫ
 2   0   1   0    1    ǫ
 2   0   0   1    1    ǫ
 0   2   1   0    1   3ǫ
 0   2   0   1    1   3ǫ
 0   0   2   1    1   3ǫ
 1   2   0   0    1   2ǫ
 1   0   2   0    1   2ǫ
 1   0   0   2    1   2ǫ
 0   1   2   0    1   3ǫ
 0   1   0   2    1   3ǫ
 0   0   1   2    1   3ǫ
 1   1   1   0    1   2ǫ
 1   1   0   1    1   2ǫ
 1   0   1   1    1   2ǫ
 0   1   1   1    1   3ǫ

Summing over the strings,
$$Q_{BE} = 1 + 3\exp(-\beta\epsilon) + 6\exp(-2\beta\epsilon) + 10\exp(-3\beta\epsilon).$$
For Fermions, only the strings with $n_i \le 1\ \forall\, i$ are allowed, each with $\hat{\Omega} = 1$ :

Table 4 : Occupation numbers for Fermions

n0  n1  n2  n3   Ω̂    E
 1   1   1   0    1   2ǫ
 1   1   0   1    1   2ǫ
 1   0   1   1    1   2ǫ
 0   1   1   1    1   3ǫ

$$Q_{FD} = 3\exp(-2\beta\epsilon) + \exp(-3\beta\epsilon).$$
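As in Problem 14, the tables and partition functions can be generated by enumeration. A sketch (not from the text), collecting Q as coefficients of $\omega = \exp(-\beta\epsilon)$:

```python
from itertools import product
from math import factorial

def Q_coeffs(weight):
    """Coefficients [c0..c3] of Q in powers of w = exp(-beta*eps), over
    strings (n0, n1, n2, n3) with sum 3; energy = (n1 + n2 + n3)*eps."""
    c = [0.0] * 4
    for ns in product(range(4), repeat=4):
        if sum(ns) != 3:
            continue
        W = weight(ns)
        if W is not None:                 # None marks a forbidden string
            c[ns[1] + ns[2] + ns[3]] += W
    return c

classical = lambda ns: factorial(3) / (factorial(ns[0]) * factorial(ns[1])
                                       * factorial(ns[2]) * factorial(ns[3]))
mb = lambda ns: classical(ns) / factorial(3)
fd = lambda ns: 1.0 if max(ns) <= 1 else None     # Pauli exclusion
be = lambda ns: 1.0

assert Q_coeffs(classical) == [1, 9, 27, 27]      # Q = (1 + 3w)^3
assert Q_coeffs(fd) == [0, 0, 3, 1]               # Q_FD = 3w^2 + w^3
assert Q_coeffs(be) == [1, 3, 6, 10]              # Q_BE
assert all(abs(g - w) < 1e-12                     # Q_MB = (1 + 3w)^3 / 3!
           for g, w in zip(Q_coeffs(mb), [1/6, 3/2, 9/2, 9/2]))
```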
PROBLEM - 21
(i) Derive expressions for the micro canonical temperature⁸ and the micro canonical pressure⁹ of the system.
(ii) Derive an expression for the free energy¹⁰ of the system.
⁸ temperature as a function of U, V, and N.
⁹ pressure as a function of U, V, and N.
¹⁰ free energy F as a function of T, V, and N.
SOLUTION
Temperature
$$S = N k_B \ln\left[\frac{U V}{N^2 k_B T_0 v_0}\right]$$
$$\frac{\partial S}{\partial U} = \frac{1}{T} = \frac{N k_B}{U} \qquad\Rightarrow\qquad T = \frac{U}{N k_B}$$
Pressure
$$\frac{\partial S}{\partial V} = \frac{P}{T} = \frac{N k_B}{V} \qquad\Rightarrow\qquad P = \frac{N k_B T}{V} = \frac{U}{V}$$
Free Energy
From the Legendre transform,
$$F = U - TS, \qquad U = N k_B T, \qquad S = N k_B \ln\left[\frac{T V}{N T_0 v_0}\right]$$
$$F = N k_B T - N k_B T \ln\left[\frac{T V}{N T_0 v_0}\right]$$
Alternately, $F = \mu N - P V$, with
$$\frac{\mu}{T} = -\frac{\partial S}{\partial N} = -k_B \ln\left[\frac{T V}{N T_0 v_0}\right] + 2 k_B, \qquad \mu = 2 k_B T - k_B T \ln\left[\frac{T V}{N T_0 v_0}\right]$$
which again gives
$$F = N k_B T - N k_B T \ln\left[\frac{T V}{N T_0 v_0}\right]$$
Gibbs Free Energy
$$G = U - TS + PV = U - U \ln\left[\frac{U V}{N^2 k_B T_0 v_0}\right] + U = 2U - U \ln\left[\frac{U V}{N^2 k_B T_0 v_0}\right]$$
$$= 2 N k_B T - N k_B T \ln\left[\frac{k_B T^2}{P\, T_0 v_0}\right]$$
and indeed
$$G = \mu N = 2 N k_B T - N k_B T \ln\left[\frac{k_B T^2}{P\, T_0 v_0}\right]$$
Index

adiabatic process, 197, 198
Avijit Lahiri, 10
Balescu, R, 8, 71
Bernoulli numbers, 188
Bernoulli, Daniel, 43
Binomial distribution, 27, 28, 31, 32, 138
Boltzmann entropy, 47, 101
Boltzmann, Ludwig Eduard, 1, 2, 15, 20, 21, 66, 75, 101, 122, 127
Boltzmann-Gibbs-Shannon entropy, 2, 90, 91
Bose-Einstein condensation, 133, 135, 150, 154, 161–163, 228
Bose-Einstein statistics, 124, 125, 127–130, 132, 135, 139, 142, 202, 227, 229
Bosons, 104, 122, 130, 133, 135, 143, 144, 155, 157, 163, 228
Brownian motion, 113
Callen, H B, 8, 11, 69
canonical ensemble, 47, 69, 73, 79, 90, 91, 103, 104, 112, 144
canonical partition function, 71–73, 75–77, 79, 103, 104, 118, 183, 228, 230
Central Limit Theorem, 39
Chandler, D, 9
characteristic function, 38–40
chemical potential, 62, 79, 104, 113, 118, 125, 126, 128, 129, 135, 144, 145, 149, 162, 163, 202, 209, 228
Chowdhury, Debashish, 8
Clausius, 1, 89
closed system, 69, 71–73, 80, 81, 87, 90, 91, 175, 228
cumulant, 38, 40
cumulant generating function, 38
Debye frequency, 185
Debye temperature, 186
Debye's theory of specific heat, 185
delta function, 48, 52
density of states, 56, 73, 77, 104, 118, 145
determinism, 4
Dittman, R. H., 13
Dugdale, J. S., 13
Dulong and Petit's law, 180
Einstein temperature, 184
Einstein's theory of specific heat, 184
Einstein, Albert, 1, 113
ensemble, 21–23
Entropy : Statistical, 2
Entropy : Thermodynamics, 2
equi-partition theorem, 161
ergodicity, 18, 100
Euler theorem, 106
event, 18, 19
factorial moment, 28, 140
Wark, J, 11
Zamansky, M. W., 13