
Statistical Quantum Mechanics

Mukul Agrawal September 10, 2003

Electrical Engineering, Stanford University, Stanford, CA 94305

Contents

I Basic Formulations for Study of Statistical Equilibrium

1 Conventions
2 Fundamental Assumption of Statistical Mechanics
3 Thermodynamic State Variables and State Equations
  3.1 State Variables and Equilibrium
    3.1.1 Exhaustive set of state variables?
    3.1.2 Are state variables defined only in equilibrium?
    3.1.3 Concept of local equilibrium
    3.1.4 Concept of instantaneous equilibrium - reversible processes
    3.1.5 Local state variables and local equilibrium in quantum systems?
  3.2 Pair of Extensive/Intensive Properties
  3.3 Fundamental Relation
  3.4 Thermodynamic Identity
  3.5 Euler's Equation
  3.6 Various Definitions of Intensive Properties
4 Heat
  4.1 Heat Transfer in Reversible Process
  4.2 Heat Transfer in Irreversible Process
5 Thermal Equilibrium
6 Intuitive Understanding of Some of the Intensive Properties
  6.1 Temperature
  6.2 Pressure
  6.3 Chemical Potential
  6.4 Fermi Level

II Special Methods for Calculating State Variables

7 Partition Function/Gibbs Sum/Grand Sum
  7.1 Fermi Gas/Fermi-Dirac Distribution
  7.2 Bose Gas/Boson Distribution
    7.2.1 Photon Gas/Planck's Distribution
  7.3 Ideal Gas/Boltzmann Distribution

III Thermodynamics from Quantum Statistical View Point

8 First Law
9 Second Law
  9.1 Reversible/Irreversible Processes - Stray Comments
  9.2 Work and heat in irreversible process
  9.3 Equivalent Heat Engine Statements
10 Variations of Second Law
  10.1 Entropy Maximization
  10.2 Energy Minimization
  10.3 Helmholtz Free Energy Minimization
  10.4 Enthalpy Minimization
  10.5 Gibbs Free Energy Minimization
11 Third Law
12 Fluctuation Dissipation Theorem

IV Further Resources

13 Reading References

Cite as: Mukul Agrawal, "Statistical Quantum Mechanics", in Fundamental Physics in Nano-Structured Materials and Devices (Stanford University, 2008), URL http://www.stanford.edu/~mukul/tutorials.

Part I

Basic Formulations for Study of Statistical Equilibrium


1 Conventions

In this document, I always refer to the system that we are studying as S and to the rest of the universe with which S is interacting as R. The union of R and S is referred to as R+S. R+S is taken to be a closed system unless specified otherwise.

I take work done on the system and energy flow into the system to be positive. An infinitesimal change of a state variable is written as a total (exact) differential d, whereas an infinitesimal change in a non-state variable (such as heat or work) is written with the inexact symbol δ.

2 Fundamental Assumption of Statistical Mechanics

Consider a reservoir R and a system S. Basically what I want to say is that R and S are subsystems of R+S such that R is much bigger than S. Thermodynamic reasoning is only applicable when R is a 'thermodynamically' macroscopic system, that is, when R consists of a large number of microscopic particles. System S can be microscopically small. We can talk about thermal equilibrium only when there is at least one macroscopic object involved. If there are two microscopic objects which can interact with each other but are closed off from any outside interaction with any macroscopic object, then there is no point applying thermodynamics to such a system, and one can not define or even talk about thermal equilibrium between two such objects.

The first important point that one should appreciate is that, since we are only concerned with probabilities in statistical mechanics, we can treat 'quantum mechanical uncertainties' and 'statistical uncertainties' on an equal footing¹. So, in general, the complete system R+S can either be 1) a closed system in some quantum mechanical linear superposition of energy eigenstates such that the quantum mechanical expectation value of the energy is U0, or 2) it might be an open system, and thus it might be switching from one energy state to another because


of external influence such that it maintains a statistical average energy of U0. Whatever the details might be, one can consider an ensemble of systems in all different possible states, and then one can talk about the ensemble average of the energy and about the probability of finding the system in a state of any particular energy U. Basically what I want to say is that one can treat the quantum mechanical ensemble averaging (I know the exact state of the system, but still I can't tell what its energy is) and the statistical ensemble averaging (I don't even know what the state of each of the systems in the ensemble is) on an equal footing. We can just talk about the averages without worrying about whether the reason is statistical or quantum. When we make some macroscopic measurement we obtain some kind of averaged-out behavior of the system. We do not get microscopic state information. Moreover, we cannot distinguish whether what we got is a statistical average or a quantum mechanical average.

Now consider a closed system R+S having a constant average energy of U0. This system, in general, can have many degenerate states with definite energy U0. We now assert that the probability that this complete system would be found in any of these degenerate states is the same, whereas the probability that the system would be found in any other energy state is zero. This is called the FUNDAMENTAL ASSUMPTION of statistical mechanics. Two important concerns might be:

Why is it so? The simple reason is that no 'ideally closed system' exists in this universe. The so-called 'closed system' R+S would still be interacting with its surroundings, although it would be exchanging an exceedingly small amount of energy. Since the energy exchange is almost zero, we can call it a closed system. But since the interaction with the external world is finite, the system R+S is being continuously perturbed and hence being thrown from one degenerate state of energy U0 to another state of the same energy U0. Moreover, since the perturbation is completely random, we have no reason to expect that it would be thrown into one of the degenerate states with more probability than any other. That is the fundamental assumption.

1 To be theoretically accurate, one should use the concept of density matrices in place of wavefunctions to accurately describe the quantum statistical behavior of a system. I prefer to avoid density matrices in this introductory article. Please refer to a separate article on density matrices and their use in statistical quantum mechanics for more details. I am able to avoid density matrices in this article because we are only interested in the various probabilities of finding the system in various energy eigenstates. Moreover, I use the set of energy eigenstates as the basis set. Hence only the diagonal elements of the density matrix are of interest to us here, so I do not need to carry the matrix formulation and notation all along. This makes the mathematics somewhat simpler and easier to grasp.


What's the importance? Read the following and you will understand. Using the fundamental assumption we can predict all sorts of macroscopic properties of the system starting from the microscopic laws of the world.

When, due to the external perturbations, the system R+S 'swims' among the different degenerate states of total energy U0, the energy of the system S can take all sorts of possible values, say U. (Accordingly, system R would have energy U0 − U.) We can talk about the energies of R and S separately because we are analyzing non-interacting particles². Now let us compute the probability of finding the system S in an energy state of U. In general the energy state of S with energy U might be degenerate, with a degeneracy of GS. If S is found to be in one of the states of definite energy U, then the reservoir R should be in one of the states of definite energy U0 − U, which again might be degenerate, with a degeneracy of GR. So the probability of finding S in any one particular state of definite energy U would be proportional to GR (using the fundamental assumption). And, hence, the total probability of finding the system S in any of the states of energy U would be proportional to GR·GS.

We define thermal equilibrium to exist between microscopic S and macroscopic R when the probability of finding S in any one particular state of definite energy U is proportional to GR. Hence the probabilities of finding S in any one of the degenerate states of energy U are equal. Since R is macroscopic, GR(U0 − U) ≈ GR(U0) = const. Hence the ratio of the probability of finding system S in any state with energy U1 to the probability of finding it in any state with energy U2 is simply GS(U1)/GS(U2). So we are claiming that when system S is in equilibrium, the system should be able to float among all its energy states in a completely random fashion. Such a state of the system is defined to be the thermal equilibrium state. When system S floats among certain states in a certain preferential fashion, then system S is out of thermal equilibrium.
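The counting argument above (probability of S having energy U proportional to GR·GS) can be checked numerically with a toy model. The Python sketch below is an illustration I am adding, not the author's example: S and R are taken to be collections of two-level 'spins', each up-spin carrying one unit of energy, so the degeneracies GS(U) and GR(U0 − U) are binomial coefficients.

```python
from math import comb

# Toy closed system R+S: S has 4 spins, R has 100 spins, and they share
# U0 = 30 units of energy. An up-spin carries one unit of energy (assumption).
N_S, N_R, U0 = 4, 100, 30

# G_S(U) = comb(N_S, U) states of S at energy U; G_R(U0-U) states of R.
# Fundamental assumption: every joint microstate is equally likely, so
# P(U) is proportional to G_S(U) * G_R(U0 - U).
weights = {U: comb(N_S, U) * comb(N_R, U0 - U) for U in range(N_S + 1)}
total = sum(weights.values())
probs = {U: w / total for U, w in weights.items()}

for U, p in sorted(probs.items()):
    print(U, round(p, 4))
```

Because R holds 30 units among 100 spins, the most probable energy of S comes out near 30% of its 4 spins, i.e. around U = 1, even though every joint microstate is equally likely.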
The correct way of visualizing the statistical probabilities that we are talking about is to imagine an ensemble of exactly similar systems. Measure the energy of the system S in each copy of the ensemble and calculate the probabilities of the outcomes. Very often (that is, for ergodic systems) this argument can be converted to the time domain. What actually happens is that R+S randomly swims among all degenerate states. Correspondingly, S randomly floats among all possible energy states U. We can measure the energy of the same copy of the system S many times and calculate its probabilities. We need to make sure that the time gap between each measurement is sufficiently long that system S can float randomly between all

2 For example, if the particles were electrically charged then we would also have an electrostatic interaction energy between the two systems and we would not be able to separate the energies into two parts.


possible states.

As a special case, when the numbers of particles in both R and S are macroscopically huge, GR·GS for some particular U will become huge³. This is the general numerical observation. So it is most probable to find S in some particular state of energy (say Ueq) and accordingly R in some state of energy U0 − Ueq. We can also see that the probabilities of finding S in any of the degenerate states of a given energy U (which is true for Ueq as well) are equal. Also, it is highly unlikely to find the system S in any state of some other energy. It is only because of this mathematical twist that thermodynamics is such an important subject for macroscopic bodies. This number is so huge in comparison to the others that, unless you are very lucky, you are not supposed to experience any system which is not in its most probable state in your entire life. We define thermal equilibrium to exist between macroscopic R and S (for microscopic S, see the above definition) when the ensemble average energy is virtually equal to the energy of the most probable configurations of the ensemble (so averaging basically becomes meaningless). What actually happens is that R+S randomly swims among all degenerate states. Correspondingly, S floats among all possible energy states. But the number of system configurations in which S has energy Ueq and R has U0 − Ueq is so huge that most of the time the system floats at this energy only. The system initially might not be in such a state, and it might take some time to 'swim' into such a state. When this state is reached, the system is said to be in thermal equilibrium. (In this paragraph time averaging and ensemble averaging are being conflated, as many books frequently do, which actually is not that easy to justify. Read about ergodicity in any statistical mechanics book if you are more interested.)

The above discussion shows that the degeneracy is quite an important property of the state, because 'most often' the world is found in those states for which GR·GS is maximum. GR and GS are usually such huge numbers that it is often more convenient to deal with the logarithm σ = ln(G), which is defined as the entropy⁴.

3 Usually, both GR and GS are monotonically increasing functions of the energies of the respective systems. Hence, if we plot both GR and GS versus the energy of system S (i.e. U), then we would see that GR would be a monotonically decreasing function. Hence we can expect a maximum in GR·GS versus U.

4 One side remark. Note that the same assumptions were made in the previous paragraph for the entire system R+S. We assumed that the closed system R+S had equal probabilities to be in any of the degenerate states with energy U0 and that the probability is zero for any other energy state. Now we are claiming that at thermal equilibrium an open system also has equal probability to be in any of the degenerate energy states and has zero probability to be in any other state. Could we have simply said that it is assumed that R+S is in thermal equilibrium with its surroundings? Taking it to the extreme, can we claim that the whole universe is in thermal equilibrium? Crazy!! ... I don't know ... I am not into cosmology stuff!!


3 Thermodynamic State Variables and State Equations

3.1 State Variables and Equilibrium


In all branches of science, we claim that the state of a system is well-specified when an exhaustive set (may or may not be mutually independent) of measurable physical properties of the system is specified, such that any other physically measurable property that can potentially be known⁵ can be directly evaluated from this exhaustive set. Equivalently, if we take two identical systems and all properties within this exhaustive set take exactly the same values for both systems, then we claim that one can not do any measurement whatsoever to differentiate between the two systems⁶. In fact, it would appear that to do any kind of science, we should be able to describe the state of the system (irrespective of whether the system is in equilibrium or not, we should always be able to specify its state). Such an exhaustive set of physical properties of the system is known as the set of state variables, because different values assigned to these variables define different states of the system. In contrast, there can be variables associated with the system whose values actually depend upon the history of the states that the system has taken previously. Such variables are non-state variables. Examples are heat extracted from the system, work performed by the system, etc. At first sight it would seem that these variables are not state variables because of another, trivial argument: they are not associated with one state of the system but with two states of the system. But that is not of fundamental importance. For example, we can take the initial state of the system to be some fixed reference state. Then all these changes can actually be associated (at least it would appear so) with only the final state of the system. But, as we will see later, this last statement is not correct. We will see that non-state variables actually depend upon the trajectory that the system takes to go from the initial state to the final state.

3.1.1 Exhaustive set of state variables?

If we go by the above definition of state variables, then it becomes obvious that in a classical multiparticle system, for example, one would have to specify the position and velocity of every particle to have a completely exhaustive, well-specified set. Statistical mechanics is a

5 Or any other physically measurable property of interest under a somewhat restricted domain of study of the system. Remember that thermodynamics is a macroscopic subject. Hence it is a restrictive subject.

6 Again we need to stay within the domain of the subject. Obviously two systems in the same macroscopic state can have different microscopic properties.


restrictive subject. It assumes that we are only interested in measuring a restrictive set of physical properties. Now, there are two main points that should be stressed here:

If we are concerned with the question of which different microscopic states give the same result for the measurement of a certain physical property, then it is immediately appreciated that a big subset of microscopically different system states can potentially do so.

The second point, which is seldom clearly appreciated, deals with the question of identifying an exhaustive set of measurable physical properties (within our restricted set) so that once these are specified, all other physically measurable properties (again, within our restricted set) get fixed.

One can ask a valid question: is it even possible to have such a self-consistent restricted set? Let us take an example. Suppose we have a chamber of gas of classical non-interacting point particles with a certain fixed mass per particle. The properties that we are interested in measuring are all sorts of macroscopic properties like density, heat capacity, pressure, temperature, volume expansion coefficient, etc. One would appreciate that to calculate such macroscopic properties for any arbitrary microscopic state of the system, even though one does not need the position and velocity of every particle, one would still have to specify the entire distribution function f(r, v) as a function of time to calculate these properties at any instant of time. Here f(r, v) is the distribution function, i.e. it gives the number of particles in the infinitesimally small phase-space volume represented by (r, v). This is still a lot of information; it is a doubly infinite set. So we can say that, yes, in principle, it is possible to identify a doubly infinite set (which is still a lot smaller than a complete microscopic state specification) such that specifying this set specifies all other physically measurable properties⁷.

Now, let us make it slightly more practical. Can I have a more restrictive set, in which I do not have to specify the doubly infinite set? Our intention here is to explore whether this is even theoretically possible.

To further clarify the point I am trying to convey, let us look at the reverse issue. Given the entire set of all sorts of macroscopic properties like density,

7 Please note that the first point is still valid. Explicitly, for a well-specified f(r, v) we still have a big set of microscopic states that can not be distinguished from each other. When we restrict the set of properties that we are going to measure, then we can claim that a unique f(r, v) uniquely identifies a state of the system.


heat capacity, pressure, temperature, volume expansion coefficient, etc., would we ever be able to identify the complete profile of f(r, v)? To completely specify f(r, v) we need all moments of f with respect to v at all physical locations. Moments of a distribution function f(r, v) with respect to v are defined as follows: the nth order moment is f_n(r) = ∫ v^n f(r, v) dv. Hence the zeroth order moment is related to the spatial distribution of average particle density, the first order moment is related to the spatial distribution of average particle velocity, the second order moment is related to the spatial distribution of average kinetic energy, etc. If all infinite orders are specified, then f(r, v) gets completely specified. Usually, we are not interested in measuring such a complete set. Suppose I can only measure the zeroth, first and second moments. In other words, we are saying that our measurements can only probe physical properties that are related to particle density, average speed and average kinetic energy. If I specify only these three properties, would all physically measurable macroscopic properties get fixed? Obviously not. I can definitely conjure up a macroscopic property that depends on the third moment (cube of the velocity), and obviously that property would not get fixed.

On the other hand, if I restrict the set of states that the system can take, then this might be possible. I say that the profile of f(r, v) should remain Boltzmann and should not vary with space, for example. Then I can claim that measuring only three variables defines all other macroscopic measurable properties.
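The moment hierarchy described above can be sketched numerically. Below is a minimal Python illustration (the units, grid, and normalization are my assumptions, not from the text): for a spatially uniform 1D Maxwell-Boltzmann velocity profile, the zeroth, first and second velocity moments recover the density, mean velocity and kinetic energy.

```python
import numpy as np

# Uniform velocity grid; k_B*T/m = 1 and unit particle density are
# illustrative assumptions.
v, dv = np.linspace(-10.0, 10.0, 4001, retstep=True)
f = np.exp(-v**2 / 2) / np.sqrt(2 * np.pi)  # 1D Maxwell-Boltzmann profile

def moment(n):
    """nth velocity moment: integral of v**n * f(v) dv (Riemann sum)."""
    return float(np.sum(v**n * f) * dv)

density = moment(0)        # zeroth moment -> particle density (~1.0)
mean_v = moment(1)         # first moment  -> mean velocity (~0, gas at rest)
kinetic = 0.5 * moment(2)  # second moment -> kinetic energy per unit mass (~0.5)

print(density, mean_v, kinetic)
```

Knowing these three numbers pins down a Maxwell-Boltzmann profile completely, but it says nothing about the third or higher moments of an arbitrary non-equilibrium f(v), which is exactly the point made above.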

This is what (equilibrium) thermodynamics or equilibrium statistical mechanics tries to do. In this very restricted subject we claim that we can come up with a very small set of state variables, known as an exhaustive set of thermodynamic state variables, which, when specified, completely specifies the state, provided the system is known to exist in the restricted sub-set of states known as thermal equilibrium states. For problems in which the system states may not be equilibrium states, but where we are only interested in a subset of physically measurable properties, we can still work with a limited set of state variables (also see the next section; the concept of state variables is perfectly valid even if the system is not in an equilibrium state). What this exhaustive set is for different realistic problems, and how this set can be used to mathematically calculate all other physically measurable macroscopic properties, is studied in the subject of equilibrium thermodynamics ('dynamics' is a misnomer!). To develop such a subject one needs to start with a bunch of thermodynamic laws. In equilibrium statistical


mechanics we start with known microscopic laws and try to prove these thermodynamic laws. Another domain of problems that thermodynamics (or equilibrium statistical mechanics) can solve is that of temporally or spatially varying problems where at every instant and at every location one can assume instantaneous and local equilibrium (see below)⁸.

3.1.2 Are state variables defined only in equilibrium?

This is a somewhat involved question. The correct answer (or a sufficiently correct answer) depends on the context in which we plan to use the concept of state variables. The first thing we should notice is that in statistical mechanics all state variables are defined through the statistics of the microscopic states available to the system. Hence, strictly speaking, one can always define state variables irrespective of whether the system is in thermal equilibrium or not. To make things absolutely clear, let us restrict ourselves to classical systems to begin with. Let us take the example of an ideal classical gas (non-interacting distinguishable particles) in a chamber. I can always calculate the temperature using its statistical definition once any arbitrary f(r, v) is specified for the above mentioned classical system. For the given f(r, v) we can calculate the total energy U of the system, and then one can calculate how many different f(r, v)'s give the same energy to the system. If we repeat the same procedure for energies slightly different from the given energy, we can easily calculate how the degeneracy changes with U, and hence the temperature. So the perception that temperature is defined only if the system is in equilibrium is not correct. But what is correct is that specifying pressure and temperature would no longer fix all other properties. We can think of different configurations of the ideal classical gas in a chamber (in non-equilibrium) having the same pressure and temperature (defined according to the strict statistical definitions) but having completely different volumes, for example. So when we are using the concept of state variables in non-equilibrium situations, one needs to be very careful.
state variables in non-equilibrium situations one need to be very careful. Here is another stream of arguments. Again, let us restrict ourselves to

big systems

macroscopically

to begin with. For macroscopic objects, it is usually allowed to analyze a big

system as consisting of many smaller sub-parts . It is obvious that if dierent sub-parts of a system are not in equilibrium with each other then one can not specify a unique state variable to entire system. For example, if a bar of metal is in contact with two heat baths

8 We still need at least a few simple concepts of transport theory even to solve such simple near-equilibrium problems. So, strictly speaking, thermodynamics can not even do this. Check the article on irreversible thermodynamics for more details.

9 To make things clear, let us explore whether the same thing can be done for microscopic objects. Think of a 3D quantum well. The wavefunction spreads over the entire well, so the energy of a particle is associated with the whole object. We can not say that particles in the left side of the well have more energy than particles in the right side of the well.


at different temperatures on the two ends, then obviously one can not specify a unique temperature for the entire system. This whole setup is not in thermal equilibrium (there might be a steady state) and hence there is no unique state variable for the whole system. But going by the arguments of the previous paragraph, one can still calculate a unique statistical temperature for the whole system? So what is correct? It is really true that one can define a statistical temperature, if one insists, but this temperature does not carry any useful information. For example, when the whole bar is in equilibrium, the system swims among all possible degenerate states, and if we calculate the single particle distribution function over the single particle energy states, then we should get either a Fermi or a Bose distribution. This would not be true when the bar is interacting with two different thermal reservoirs at the two ends. Hence, the definition of statistical temperature can not be used to claim that the particles inside the bar are Fermi-distributed. This can only be done if the entire bar is in thermal equilibrium.

In the above example, instead of calculating a state variable for the entire bar, intuitively it might be more useful to try to define local state variables. Now, the relevant question is: even if there is no thermal equilibrium for the whole system, can we define local state variables in a thermodynamically consistent manner? This can be done, at least for classical systems, as discussed below.
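The bar example can be made concrete with a small numerical sketch (an illustration I am adding, with made-up numbers): if each thin slab of the bar relaxes internally much faster than heat flows between slabs, each slab carries its own local temperature, and the steady state is the familiar linear profile between the two baths.

```python
import numpy as np

# Metal bar sliced into n slabs between baths at T1 and T2 (made-up values).
n, T1, T2 = 11, 300.0, 400.0
T = np.full(n, (T1 + T2) / 2)
T[0], T[-1] = T1, T2

# Relax to the steady state of the heat equation (d^2 T / dx^2 = 0)
# with fixed-temperature ends; each entry T[i] is a local temperature.
for _ in range(20000):
    T[1:-1] = 0.5 * (T[:-2] + T[2:])

print(T)  # converges to a linear 300 ... 400 profile
```

There is a well-defined temperature at every slab but no single temperature for the bar as a whole, which is exactly the local-state-variable picture developed in the next subsection.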

3.1.3 Concept of local equilibrium

A subset of non-equilibrium problems are those in which there exists local equilibrium at every physical location (this is a valid theoretical possibility at least in classical systems). When we think from the statistical mechanics point of view, we realize that such a situation exists when systems are in a near-equilibrium situation (see the article on transport theory for details) rather than in a truly non-equilibrium situation. The reason that a number of common problems do fall under this category is that, depending upon the strength of the macroscopic spatial perturbations that drive the system out of equilibrium, the rate of flow of various average physical properties across any cross-section is usually much slower than the temporal relaxation of the particles (the time they take to swim into the set of most probable energy-degenerate states).


3.1.4 Concept of instantaneous equilibrium - reversible processes

Another subset of non-equilibrium problems are those in which there exists instantaneous equilibrium at every instant (this is a valid theoretical possibility at least in classical systems). Such processes are defined as reversible processes. All practical processes are irreversible; reversibility is just an approximation. Again, when we think from the statistical mechanics point of view, we realize that such a situation exists when systems are in a near-equilibrium situation (see the article on transport theory for details) rather than in a truly non-equilibrium situation. The reason that a number of common problems do fall under this category is that, usually, the temporal relaxation of the particles (the time they take to swim into the set of most probable energy-degenerate states) is much faster than the macroscopic perturbations that drive the system out of equilibrium.

A note of caution. Sometimes we see analyses that try to link instantaneous and local state variables at different locations and different instants of time. This type of analysis does not necessarily assume the existence of local equilibrium. That assumption might be invoked only when we take these local state variables and try to relate them to some other local state variables at the same location and the same instant of time.

3.1.5 Local state variables and local equilibrium in quantum systems?

Now the concept of such local equilibrium, or even of local state variables, becomes a thorn as we start looking into completely quantum systems. Let's take an example. In statistical mechanics, temperature is simply related to how degeneracy changes with energy. Suppose we have a chamber of gas. This whole chamber can be treated as one quantum mechanical system (a 3D infinite-potential quantum well). So we can calculate the energy of the system corresponding to any particular microscopic state, and we can also calculate the entropy. One would then argue that with every microscopic state one can associate a temperature. This temperature is associated with the entire system and not with any physical location. If our chamber of gas is interacting with a reservoir at temperature T1 on one face and with another reservoir at temperature T2 on the opposite face, then how do we explain the commonly known phenomenon of a temperature gradient inside the chamber? Should we still be able to associate a single temperature with the entire system? It seems that one can define the temperature of even a single particle. Similarly, one can also define the temperature of a microscopic quantum state of a macroscopic system (the state itself might be evolving in time, and so might the temperature). So it seems that thermodynamic


state variables can be defined for a microscopic quantum system in a microscopic quantum state. Now, if we think about this paradox, we are actually driven into the much deeper subject of quantum non-equilibrium statistical mechanics, which I want to avoid in this article (check the article on statistical QFT, if interested). Interaction with macroscopic reservoirs destroys quantum coherence (this is like making a measurement), and hence one can no longer treat the whole system as one quantum system. If we want to do the complete quantum analysis, then one has to include the reservoir as part of the system being analyzed quantum mechanically.

3.2 Pair of Extensive/Intensive Properties


Extensive properties are those which are directly proportional to the total amount of matter present. With each extensive property one can define an intensive property such that (int) d(ext) has units of energy. If two systems are in equilibrium, on joining them together the intensive properties of the new combined system would be the same, whereas the extensive properties would be the sum of the two. In this sense intensive properties are independent of the amount of matter in the system.

One can define as many 'new' extensive and intensive properties as one wants. But we do not need all of them; only a few of them would be independent. If the system has n degrees of macroscopic freedom, then one can completely specify the macroscopic state of the system using n independent extensive state variables, or n − 1 independent intensive variables plus one extensive variable (see the last line in this section), or a mixture of the two types of n independent variables, or more than n dependent state variables. (The statement is obviously trivial; the only important point to note here is that it is conventional to distinguish between the two types of variables.)

Some common extensive state variables used are σ, V, N, q, etc. (where q is the total charge on the particles). The corresponding intensive properties are τ, p, μ, φ, etc. (where φ is the electric potential). If the system has n degrees of freedom, we would identify and define n independent extensive variables. After that, any thermodynamic state variable can be written as a function of those n independent extensive variables. Conventionally, the energy of the system is written as a function of all extensive variables, U = U(σ, V, N, q, etc). This is called the fundamental relation (described below, go on reading). The corresponding intensive properties are then defined, for example, by considering the function U = U(σ, V, N, q, etc)


and expanding dU:¹⁰

dU = (∂U/∂σ)_V,N,q dσ + (∂U/∂V)_σ,N,q dV + (∂U/∂N)_σ,V,q dN + (∂U/∂q)_σ,V,N dq + etc

The coefficient of each differential is defined as the corresponding intensive property. For example, the coefficient of dσ in the expansion of dU is (∂U/∂σ) and is defined as an intensive property called temperature, τ, corresponding to the extensive property called entropy.

(We will discuss this in detail below; go on reading.) Note that it is obviously not necessary that U be a function of all possible extensive properties that one might wish to conjure up. If such a situation happens, then we can say that the coefficient of the differential of that particular extensive variable in the expansion of dU (that is, the intensive property corresponding to the variable removed from the list) is zero. For example, if U does not depend on the charge q, then we say that the intensive variable electric potential, φ = (∂U/∂q)_σ,V,N, is zero. On the other hand, it is also not necessary that all of these variables be independent, and hence they need not all be kept in U's variable list. Whenever the number of variables in the functional dependence of U is more than the degrees of freedom of the system, one can, if one wishes, remove a few of these variables. Still, a redundant list of variables does not cause any harm in analyzing equilibrium relations, although one should be a bit careful, since it creates lots of confusion.

There are a few points one should remain careful about:

One should note that mathematically there is no problem in expanding a continuous and smooth (up to the first derivative, at least) function of multiple variables as written above, irrespective of whether the variables are dependent or not. Imagine a continuous smooth function f of two state variables x and y. f would appear as a surface in a 3D diagram. Now if there is a constraint relationship y = g(x) between the state variables x and y, then what this means is that the system can only follow a certain trajectory within the surface of f as the system moves from one state to another. One can convince oneself that if the system moves from a state (x1, y1 = g(x1)) to another state (x2, y2 = g(x2)), then the correct change f2 − f1 can be evaluated mathematically by adding the changes that happen from (x1, y1) to (x2, y1) and from (x2, y1) to (x2, y2). This is true as long as x and y are state variables (and hence f is also a state variable) and we use an exhaustive set of state variables (which may not be independent).

10 Check the third-law footnote as well.


Firstly, for obtaining the definitions of intensive properties (as described above), it is recommended that the redundant list be kept intact. But during these definitions one should not try to 'physically visualize' the expansion of dU. For example, usually the total charge q and the number of particles N are not independent variables (q = Ne). Still, we can mathematically expand dU in these two variables. While we take the partial derivatives, we mathematically keep the other variables constant, fictitiously assuming that each variable can be held constant independently of the others. For example, the electric potential is still given by φ = (∂U/∂q)_N. What this means is that we 'mathematically' see what will happen to U if an infinitesimal amount of charge is added without adding new particles. Similarly, the intrinsic chemical potential would still be defined as μ = (∂U/∂N)_q. Again, we 'mathematically' increase the number of fictitious uncharged particles by an infinitesimal amount and observe the change in U.

Secondly, one should keep this redundancy in mind while analyzing the equilibrium relations (keep on reading; these will be discussed in detail later). For example, consider two systems in which charge and particle count are independent variables: a fictitious system in which the charge carriers have no mass and hence no kinetic energy, but do have electric potential energy in an electric field, whereas the massive particles carry no charge and hence no electric potential energy, only kinetic energy. If two such systems are in thermal equilibrium with a common heat bath with which they can only exchange energy and nothing else, then we will show below that in equilibrium μ1 = μ2 and φ1 = φ2 separately¹¹, where we have used the following definitions of chemical potential and electric potential:

μ = (∂U/∂N)_σ,V,q and φ = (∂U/∂q)_σ,V,N

and we have taken U = U(σ, V, N, q). Now if charge and number of particles are not independent, as is the case in conventional systems, then although one is still mathematically allowed to expand dU in dependent variables, and one might choose to keep sticking to exactly the same definitions of chemical potential and electric potential, one cannot conclude the above equivalence of (intrinsic) chemical potential and electric potential separately. One can show, by eliminating one of the dependent variables from dF = 0, that in equilibrium actually μ1 + eφ1 = μ2 + eφ2. Some people take a different view of such a situation. They write U in terms of only the independent variables σ, V and N, and then 'redefine' the chemical potential as μ̄ = (∂U/∂N)_σ,V = (∂U/∂N)_σ,V,q + eφ. This keeps the above statement of the equivalence of chemical potential in equilibrium unchanged. This is the

11 Because N and q are independent. This will be explained below, keep on reading!


convention that is used in semiconductor theory. So basically, while reading the interrelationships between different variables in thermal equilibrium, one should ask oneself how these variables are defined, because various books follow various conventions; some keep redundant variables, some don't.
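The chain-rule bookkeeping behind this 'semiconductor convention' can be checked symbolically. In the sketch below, U is an arbitrary smooth placeholder (not a real fundamental relation), used only to verify that imposing q = Ne turns the redefined chemical potential μ̄ = (∂U/∂N)_σ,V into μ + eφ:

```python
import sympy as sp

# Chain-rule check of the convention described above: with the constraint
# q = N*e, the redefined chemical potential mu_bar = (dU/dN)_{sigma,V}
# equals mu + e*phi from the redundant-variable definitions. U below is
# an arbitrary smooth placeholder, not a real fundamental relation.
sigma, V, N, q, e = sp.symbols('sigma V N q e', positive=True)
U = sigma**2 * V + N**3 + N*q + q**2 / V   # made-up smooth U(sigma, V, N, q)

mu  = sp.diff(U, N)    # (dU/dN)_{sigma,V,q}: fictitious uncharged particles
phi = sp.diff(U, q)    # (dU/dq)_{sigma,V,N}: fictitious massless charge

# Impose the constraint and differentiate the constrained energy:
mu_bar = sp.diff(U.subs(q, N*e), N)

# mu_bar - (mu + e*phi) vanishes once the constraint is substituted back:
print(sp.simplify(mu_bar - (mu + e*phi).subs(q, N*e)))   # -> 0
```

The result is zero for any smooth U, which is exactly why the two conventions agree once the constraint is accounted for.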

For a system with n degrees of freedom, U being written in terms of n independent extensive properties, the corresponding n intensive properties would not all be independent. Only n − 1 would be independent, and one extensive property is always required to fix the 'size' of the system. (See Euler's Equation below.)

3.3 Fundamental Relation


The functional dependence of the total internal energy U = U(σ, V, N, q, etc) for any thermodynamic system of n degrees of macroscopic freedom on a complete set of extensive variables (independent or dependent) is called the fundamental relation for that system. It is different for different systems. We can define a huge set of generic macroscopic extensive properties. Now if the system has n degrees of macroscopic freedom, then only n of these would remain independent, and all others can be represented as combinations of these. The basic conclusion is that it is possible to disturb the internal energy just by changing the number of particles, without changing the entropy, volume, or electric charge of the body. Similarly, one can change the internal energy just by changing the entropy, or just the volume, or just the charge on some of the particles. All this is possible independently (of course, only if the variables are independent). Entropy, volume, charge, particle number, etc. are called extensive properties. There might be many more. All of these extensive properties, in the most unconstrained case, can be changed independently to disturb the total energy. And each of these disturbances defines a corresponding property called an intensive property. So basically, one can write a functional dependence (which would be different for different systems), which we write symbolically as U = U(σ, V, N, q = charge, etc). This is called the fundamental relation. (This is true in general. But you might put some 'constraints' on your system that make a few of these variables dependent on others. So you might want to remove the dependent variables from your function-parameter list and just keep the independent ones.)


3.4 Thermodynamic Identity


dU = (∂U/∂σ)_V,N,q dσ + (∂U/∂V)_σ,N,q dV + (∂U/∂N)_σ,V,q dN + (∂U/∂q)_σ,V,N dq + etc

The properties in parentheses are defined to be the temperature, the pressure (with a minus sign), the intrinsic chemical potential, the electric potential, etc. These are the strict statistical definitions of these terms. Some of them are already defined in mechanics (like pressure) or in the classical thermodynamic sense (like temperature). It can be shown that these definitions are 'equivalent' in some sense to the older ones, so terms like pressure still carry the same intuitive meaning. Where there is a risk of confusion, these should be taken as the strict definitions. We give a detailed intuitive feeling for the meaning of less common terms, like chemical potential, below. Hence one can write the above equation as:

dU = τ dσ − p dV + μ dN + φ dq + etc

Note that there is actually a different μ for each type of particle if different kinds of particles (say He atoms and H atoms) are present in the system; we would then replace μ dN by Σ_i μ_i dN_i. This equation is referred to as the thermodynamic identity. It is just

a combination of definitions; that's all. We can rearrange the thermodynamic identity to obtain alternative definitions:

τ dσ = dU + p dV − μ dN − φ dq + etc

And considering σ = σ(U, V, N, q, etc) gives us the alternative definitions:

1/τ = (∂σ/∂U)_V,N,q

p/τ = (∂σ/∂V)_U,N,q

etc. This is how one switches between different definitions of the same quantity written in terms of different variables.
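The same switch can be checked symbolically in the entropy representation. The sketch below assumes a Sackur-Tetrode-like σ(U, V, N) for a monatomic ideal gas (k_B = 1; the constant `b` lumps the mass and ℏ factors) and recovers τ and p from derivatives of σ:

```python
import sympy as sp

# 'Flipped' identity: intensive properties from sigma(U, V, N).
# Assumed illustrative Sackur-Tetrode-like form (k_B = 1; 'b' lumps the
# mass and hbar factors) - a sketch, not a general derivation.
U, V, N, b = sp.symbols('U V N b', positive=True)
sigma = N*(sp.log(V/N) + sp.Rational(3, 2)*sp.log(U/(b*N)) + sp.Rational(5, 2))

inv_tau    = sp.diff(sigma, U)   # 1/tau = (d sigma/dU)_{V,N}
p_over_tau = sp.diff(sigma, V)   # p/tau = (d sigma/dV)_{U,N}

tau = 1 / inv_tau
p = p_over_tau * tau
print(sp.simplify(tau - sp.Rational(2, 3)*U/N))  # tau = 2U/(3N) -> 0
print(sp.simplify(p*V - N*tau))                  # p V = N tau   -> 0
```

Both residuals vanish: the entropy-representation derivatives give the same τ = 2U/(3N) and pV = Nτ as the energy representation.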


3.5 Euler's Equation


Considering an infinitesimal change in the amount of matter, one can show that:

U = τσ − pV + μN + φq + etc

This is called Euler's Equation. From it, one can easily obtain:

σ dτ − V dp + N dμ + q dφ + etc = 0

This is called the Gibbs-Duhem relation. It tells us that the intensive properties cannot all be independent: only n − 1 can be independent, and one extensive property is always needed to fix the size of the system.

3.6 Various Definitions of Intensive Properties


One can obtain one set of definitions from the thermodynamic identity by noting that U = U(σ, V, N, q, etc). So τ = (∂U/∂σ), −p = (∂U/∂V), μ = (∂U/∂N), etc. One can flip the thermodynamic identity, consider σ = σ(U, V, N, q, etc), and get 1/τ = (∂σ/∂U), p/τ = (∂σ/∂V), −μ/τ = (∂σ/∂N), etc. Note that it is usually very difficult to enforce constant entropy or constant energy in reality. So for practical systems it is much better to use hybrid quantities, for example the Helmholtz free energy F = F(τ, V, N, q, etc) = U − τσ.

4 Heat

First of all, note that the word 'heat' is used only in the context of energy 'transfer'. It is a type of energy transfer and not a type of energy¹²! But why is heat transfer defined and studied separately from other forms of energy transfer? We can answer this if we try to explore how much energy can be converted into mechanical work. Let us first give a quantitative definition of mechanical work in statistical mechanics. If energy dU is transferred from system S1 to system S2 in such a way that the entropy of system S1 remains constant, then we say that system S1 has performed mechanical work of magnitude W = dU on the system S2. Let us consider a few realistic examples.

12 It is fairly common to talk about 'heat generation' and 'heat dissipation'. What is really meant in this context is entropy generation and perhaps a consequent increase in temperature. The internal energy of the system is increased. Strictly speaking, one should avoid phrases like 'heat generation'.


Suppose we have an adiabatic chamber (no energy transfer is allowed between the chamber and the environment) filled with some liquid. There is a stirrer in the chamber that can be rotated by applying an external force. The entropy of this external agent that applies the force is constant. Hence the energy transferred by the external agent to the system is pure mechanical work. This energy leads to an increase in the temperature and an increase in the entropy of the system. Please note that entropy has not flown into the system from outside; rather, entropy got generated¹³. There is no heat transfer and no entropy transfer in this process. Entropy is not conserved and gets created in system S2. Similar examples can be created by changing other extensive properties of the system S2 using external agents of constant entropy. For example, one can change the pressure of an adiabatic gas chamber, and hence the internal energy of the gas, by using an external constant-entropy piston. If we have an adiabatic chamber of charged particles, we can increase its energy by changing the external background electric field, which also has constant entropy.

When energy is transferred from one system to another, two characteristically different processes might be happening. In the first process, the chaos or randomness (entropy) of the source system might also change as the energy flows from the source system to the receiving system. In the second type of process, the entropy of the source system does not change. The thermodynamic study of the two processes is substantially different, as entropy is a concept of central importance in thermodynamics. The first process, with changing source entropy, is 'defined' as a heat-transfer-plus-work-done process, whereas the second process, with constant source entropy, is 'defined' as a work-done-only process. Both are energy transfer processes. Some authors call the constant-entropy source system involved in the second process a 'purely mechanical' system. The quantitative definition of heat is given below. It tries to answer the following question: if the entropy of the source does change, then what percentage of the energy transfer is heat and what percentage is mechanical work? This is an important question and, for an irreversible process, cannot be answered in any trivial way. Before we go ahead and provide the quantitative definition of heat transfer, a few points are worth stressing:

It is important to note that, although many authors use terms like 'extracting entropy' from the source system and 'depositing entropy' into the receiving system, entropy is NOT a conserved quantity (except in a reversible process), so the entropy changes in the source and in the destination might be different. It is usually a bad idea to consider

13 Some people may call it a heat generation process.


entropy as something 'flowing' in and out. Some authors vaguely say that when entropy flows with the energy, the process is heat flow. I think that is a bad statement.

Consider two systems S1 and S2. Even though the reduction in entropy of system S1 might be smaller than the increase in the entropy of system S2, we can still define a flux of the entropy density vector across any cross-section¹⁴. Consider two cross-sectional planes, infinitesimally far apart. The entropy flux on both planes is well defined. The difference between the two fluxes gives the rate of generation of entropy in the medium in between. The definition of the entropy flux density follows from Euler's equation:

U = τσ + μN − pV + φq + etc

which can be inverted to obtain:

σ = (1/τ)U − (μ/τ)N + (p/τ)V − (φ/τ)q + etc

Using this we can define what we mean by the entropy flux density vector:

j_σ = (1/τ) j_U − (μ/τ) j_N + (p/τ) j_V − (φ/τ) j_q + etc
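The bookkeeping above can be sketched numerically. The example below keeps only the energy and particle terms (volume and charge terms dropped) and uses made-up numbers in arbitrary consistent units with k_B = 1:

```python
# Numeric sketch of the entropy-flux bookkeeping above, keeping only the
# energy and particle terms (volume and charge terms dropped). All numbers
# are made up, in arbitrary consistent units with k_B = 1.
def entropy_flux(j_U, j_N, tau, mu):
    """j_sigma = (1/tau) * j_U - (mu/tau) * j_N."""
    return (j_U - mu * j_N) / tau

def heat_flux(j_U, j_N, tau, mu):
    """Reversible definition (next section): j_q = tau * j_sigma = j_U - mu * j_N."""
    return tau * entropy_flux(j_U, j_N, tau, mu)

j_U, j_N = 5.0, 2.0    # local energy and particle flux densities
tau, mu = 1.5, 0.75    # local temperature and chemical potential
print(heat_flux(j_U, j_N, tau, mu))   # 5.0 - 0.75*2.0 = 3.5
```

Note how τ cancels in j_q = τ j_σ for this truncated form: the heat flux is the energy flux minus the energy carried along with the particles at potential μ.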

4.1 Heat Transfer in Reversible Process

Thermodynamics mostly deals with equilibrium and reversible processes (extremely slow processes that can be thought of as a sequence of thermal equilibrium states), and in such circumstances the entropy of the world remains conserved; hence, one can talk about the 'flow' of entropy. For reversible processes, heat is defined as the flow of chaos. Quantitatively, the heat flux density vector j_q (not to be confused with the electric charge current, also written as j_q) is defined as j_q = τ j_σ, where j_σ is the entropy flow density vector across any given cross-sectional plane perpendicular to the entropy flow density vector¹⁵.

14 This is true as long as we can define local statistical variables. We do not necessarily have to have local equilibrium. As long as we can divide the system into smaller parts, talk about local entropy density, local energy density, etc., and calculate a local statistical temperature, everything following is rigorously correct.

15 Sometimes j_q in a reversible process is also written as τ dσ/dt, which might be confusing because usually the symbol σ is used for the volume density of entropy. If j_q is written as τ dσ/dt, then one should understand that dσ represents the entropy flowing across a unit area in the perpendicular direction in time dt. Here dt is much bigger than all the microscopic timescales involved in keeping the system in thermal equilibrium, while it is much smaller than the total time over which the macroscopic flow process is being measured or observed.


Also note that the exact amount of entropy 'extracted' or 'deposited' depends only on the state changes (entropy is a state variable), whereas both the amount of work and the amount of heat accompanying the state change depend also on the path taken, even though the change in the sum of the two depends only on the initial and final states. For example, to integrate ∫ τ dσ one would need to know τ(σ), and that depends on the exact reversible path taken by the system¹⁶. So there is no straightforward generic mathematical relation that can tell you how much of the energy transfer is heat and how much is work done for a given state change, unless you know the detailed state path your reversible process is following, which one can analyze once one knows the problem parameters. But what you can easily conclude is this: if the entropy of the source system has changed, then a portion of the energy transfer is surely heat. Hence, one thing is sure: if, for example, the entropy of some system A (like a cricket ball) is constant, then heat exchange between A and B (like a rough surface) is not possible, although A can do work on B to increase B's internal energy as well as its entropy while decreasing its own energy.

If you know the reversible process path, you can calculate things exactly. Since the process is reversible, the heat transfer is ∫ τ dσ, and hence the amount of work done is ∫ (dU − τ dσ) = ∫ (−p dV + μ dN + φ dq + etc). Let's take an example. If there is a 'particle transporting engine' operating reversibly between two constant-volume chambers of uncharged particles with different chemical potentials, then the work done would be ∫ (μ1 − μ2) dN. What reversibility means is that the engine transports one particle and then waits long enough for both chambers to reach their new thermal equilibria. If the entropies are also changing, then that means the 'particle engine' is also transporting heat. In general, the work done would be ∫ (μ1 − μ2) dN + ∫ (φ1 − φ2) dq + ∫ (P1 − P2) dV + etc, and the heat transfer would be ∫ (τ1 − τ2) dσ. Here all integrals are over path trajectories, so for calculating the first term you should know the chemical potential as a function of the number of particles after each small step (that is, after each time dt) as the reversible process proceeds.

16 It is possible to have multiple reversible paths connecting two fixed states. Remember Carnot cycles?
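The 'particle transporting engine' bookkeeping described above can be sketched numerically. The example below assumes an isothermal ideal-gas form μ(N) = τ ln(N/N0) for both chambers (k_B = 1; N0 lumps the quantum-concentration and volume factors, and all numbers are made up):

```python
import numpy as np

# Numeric sketch of the reversible particle engine: two isothermal
# ideal-gas chambers at the same tau, with an assumed chemical potential
# mu(N) = tau * ln(N / N0) (k_B = 1; N0 lumps quantum-concentration and
# volume factors). Work extracted is the path integral of (mu1 - mu2) dN.
tau, N0 = 1.0, 1.0e22
N1, N2 = 8.0e20, 2.0e20          # initial particle numbers (made up)

def mu(N):
    return tau * np.log(N / N0)

n = np.linspace(0.0, 1.0e20, 100001)       # particles moved so far
f = mu(N1 - n) - mu(N2 + n)                # mu difference along the path
work = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(n)))   # trapezoid rule

print(work > 0.0)   # True: mu1 > mu2 along the whole path, work is extracted
```

The integrand is evaluated along the path, exactly as the text requires: μ1 and μ2 are recomputed after every small transfer, not just at the endpoints.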


4.2 Heat Transfer in Irreversible Process

This topic is only briefly discussed here. For more details, check another article on irreversible thermodynamics (http://www.stanford.edu/~mukul/tutorials/Irreversible.pdf).

The above definition of heat transfer makes intuitive sense for reversible processes, because the flow of entropy makes sense. Irreversible processes are very difficult to study in a correct quantum mechanical sense. As discussed above, at least in classical systems, one can define local and instantaneous state variables (we do not really need to invoke the assumption of local and instantaneous equilibrium). The same definition of heat transfer has been extended even to steady-state irreversible processes, where entropy is not conserved. The flux of the entropy flow density vector across any surface, times the local, instantaneous temperature, is defined as the heat flux across that surface. If you take a closed surface, the rate of decrease of the entropy inside would be less than the entropy flowing out per unit time, because entropy is being generated in irreversible processes. But still, the flow of entropy times the local instantaneous temperature is defined as the heat flux (see the article on irreversible thermodynamics for more details). Now for a steady-state system, what we can claim is that the heat flow should be steady. So if τ is different on the two sides of an infinitesimally small system, and the heat flow is steady, then the flux of the entropy flow density vector must be different on the two sides. Does this mean that there must be some entropy pile-up inside the system? If so, is this still a steady state? The following equation answers the question. Note that the flow of entropy is not necessarily continuous; precisely, in general

dσ/dt + ∇·j_σ ≠ 0

And hence one cannot conclude that unequal fluxes of the entropy flow density vector on the two sides mean entropy pile-up inside the volume. If on the right-hand side the entropy flux is more than that on the left side, it might be because there is a source of entropy sitting in between. The entropy density everywhere still takes a steady-state value. One can obtain a continuity equation, including sources and sinks for entropy, using the well-established continuity equations for U, N, q, etc. and Euler's equation.


If one does so, one gets:

dσ/dt + ∇·j_σ = j_U·∇(1/τ) − j_N·∇(μ/τ) + j_V·∇(p/τ) − j_q·∇(φ/τ) + etc

In this equation all the properties are per unit volume. For a steady-state irreversible process, the volume density of entropy does not change with time for any infinitesimal volume (obviously the entropy density of something, like the energy source, must be changing with time; but we assume that the entropy density of our infinitesimal volume doesn't change with time, as it is in steady state). So dσ/dt = 0, and with that one can calculate the heat flow.
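The balance equation above can be illustrated with a one-dimensional steady-conduction sketch: a constant energy flux through a slab with an assumed linear temperature profile (made-up numbers, k_B = 1). The entropy flux grows downstream, but its divergence is exactly the local generation term, so the entropy density stays steady:

```python
import numpy as np

# 1D steady-conduction sketch of the entropy balance above: constant
# energy flux j_U through a slab with a linear temperature profile tau(x)
# (made-up numbers, k_B = 1). The entropy flux j_sigma = j_U / tau grows
# downstream; its divergence equals the generation term j_U * d(1/tau)/dx,
# so there is no entropy pile-up in steady state.
x = np.linspace(0.0, 1.0, 2001)
tau = 2.0 - x                 # hot side at x = 0
j_U = 3.0                     # steady, position-independent energy flux

j_sigma = j_U / tau
div_j_sigma = np.gradient(j_sigma, x, edge_order=2)
generation = j_U / tau**2     # analytic j_U * d(1/tau)/dx for d(tau)/dx = -1

print(np.allclose(div_j_sigma, generation, atol=1e-3))   # True
```

The numerical divergence of j_σ matches the analytic source term everywhere, confirming that the unequal fluxes on the two sides of any slice are accounted for by generation, not accumulation.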

In the above particle-engine example, if the engine works in a non-reversible fashion, then how much energy is exchanged as work and how much as heat? This is a rather difficult question in general, and thermodynamics in general does not answer such questions. But in the special case of steady-state processes, one can still define a 'thermodynamics of irreversible processes'. For dN and dq types of energy transfer in an irreversible process, one needs to use the above definition of heat to calculate the heat transfer and then subtract it from the total energy transfer to obtain the work done. (For p dV work, one can calculate the non-reversible work from mechanics and then subtract it from the total energy exchanged to obtain the amount of heat exchanged. Alternatively, one can calculate the heat transfer from the above definition of heat in irreversible processes. They are obviously equal.)

5 Thermal Equilibrium

After a 'sufficiently' long time, all the macroscopic properties of the system stop changing, and the rates of all macroscopic forward and reverse processes become equal. We call such a state a thermodynamic state. There are a number of confusing terms floating around, like 'statistical equilibrium', 'thermodynamic equilibrium', 'thermal equilibrium', 'mechanical equilibrium'¹⁷, 'diffusive

17 Strictly speaking, there should be nothing like mechanical equilibrium for inherently statistical systems like a chamber of gas. For example, when we say the pressure from the side of the gas should equal the pressure from a mechanical load (the term 'mechanical' implies a constant-entropy system), it is a vague statement, because pressure is defined in mechanics for non-statistical systems. We cannot vaguely assume that equating some kind of 'average' pressure to the external mechanical pressure would bring the piston to rest, at least in the macroscopic sense. But we can come up with a quantity, using statistical analysis, which, when equated to the external pressure, would bring the piston to statistical (macroscopic) rest. We call this quantity the internal pressure and pretend that we are still dealing with situations similar to mechanical equilibrium. So we can 'formally' study mechanical equilibrium as a part of statistical or thermal equilibrium.


equilibrium', 'electric equilibrium', etc. They might not all be 'reached' simultaneously, and statistical equilibrium is the bigger term, used when all of these smaller equilibria have been reached. Statistical, thermodynamic and thermal equilibrium are synonymous. This is because temperature equality means equality has been achieved with respect to energy transfer, and we know that all other processes, like particle transfer, charge transfer, volume transfer, etc., involve energy transfer as well. Hence thermal equilibrium can only be achieved when all of these smaller equilibria have already been achieved.

6 Intuitive Understanding of Some of the Intensive Properties

The definitions of intensive properties can be understood in a number of ways. The first is to accept that internal energy can be written as a function of a few extensive properties (the fundamental relation) and then to obtain the mathematical definitions. We have already done this above. A second way, which also gives us their significance in the establishment of thermal/diffusive equilibrium etc., is to consider two systems interconnected in such a way that they can exchange only one of the extensive properties; all others are assumed to remain constant. Then one can use the second law, in the form of entropy, Helmholtz energy, Gibbs energy or whatever is most suitable, to obtain a condition for the equilibrium. That condition can be used to define an intensive property that should remain the same between the systems. When all of these extensive properties are allowed to change, then each of them should balance for equilibrium. The definitions are not unique; for example, they could all be off by a constant factor. All these issues are fixed by making these definitions the same as the conventional definitions. This is done in all common textbooks, so I am not repeating it here. A third way, which helps develop an intuitive feel, is to physically understand what these properties actually mean, preferably in idealized situations. This is what is done below.

Mukul Agrawal

25
Fundamental Physics in Nano-Structured Materials and

Cite as: Mukul Agrawal, "Statistical Quantum Mechanics", in


Devices

(Stanford University, 2008), URL http://www.stanford.edu/~mukul/tutorials.

6.1

Temperature

6.1 Temperature
The best understanding of temperature comes from its mathematical definition: take out some energy, keeping all the other extensive properties constant, and check what the change in entropy is. One can consider the equilibrium between two systems which together form a closed system and which can exchange energies to alter each other's $U$ and $\sigma$, but whose other extensive properties are kept constant. Considering the maximization of the total entropy of the combined system, with $\sigma = \sigma(U, V, N, q, \text{etc})$, the condition $d\sigma_{total} = 0$ implies

$$\left(\frac{\partial \sigma_1}{\partial U_1}\right)_{V,N,q,\text{etc}} = \left(\frac{\partial \sigma_2}{\partial U_2}\right)_{V,N,q,\text{etc}}.$$

Defining $1/\tau = (\partial \sigma/\partial U)_{V,N,q,\text{etc}}$, equilibrium with respect to energy exchange therefore requires $\tau_1 = \tau_2$.

In simpler systems, like a chamber of ideal gas, a much more physically intuitive meaning can be assigned to 'temperature': it is a direct measurement of the internal energy of the system for a fixed total number of particles, that is, a measurement of how chaotic the particles are on average. This is possible because in an ideal gas system $U$ is only a function of $\tau$ and $N$.
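The entropy-maximization argument above can be checked numerically. The sketch below is not from the original text: it uses two Einstein solids as a concrete model (multiplicity $g(N,q) = \binom{q+N-1}{q}$, energy in units of one quantum), finds the most probable split of energy quanta in the closed combined system, and verifies that $1/\tau = \partial\sigma/\partial U$ comes out equal for the two subsystems at that split. The numbers N1, N2, Q are arbitrary choices.

```python
from math import comb, log

# Two Einstein solids exchanging energy quanta inside a closed combined system.
# Multiplicity of one solid with N oscillators and q quanta: g = C(q + N - 1, q).
def entropy(N, q):
    return log(comb(q + N - 1, q))   # sigma = ln g

N1, N2, Q = 60, 100, 200   # oscillator counts and total energy quanta (arbitrary)

# total entropy of the closed system for every possible split of the energy
total = [entropy(N1, q) + entropy(N2, Q - q) for q in range(Q + 1)]
q_star = max(range(Q + 1), key=lambda q: total[q])   # most probable split

# 1/tau = d(sigma)/dU by central finite difference (energy in units of one quantum)
inv_tau_1 = (entropy(N1, q_star + 1) - entropy(N1, q_star - 1)) / 2
inv_tau_2 = (entropy(N2, Q - q_star + 1) - entropy(N2, Q - q_star - 1)) / 2

print(q_star, inv_tau_1, inv_tau_2)   # the two 1/tau values agree at the maximum
```

At the entropy maximum the two finite-difference slopes agree to a few parts in a thousand, which is the statement that the two subsystems have reached a common temperature.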

6.2 Pressure

In general, thermodynamics deals with those systems in which, if I take out some energy from R and give it to S, the entropy of R is going to decrease whereas the entropy of S will increase, so we can look for a maximum. But there are situations when this does not happen - for example, when the entropy of the outside world is constant: an electrically charged piston being pushed by a constant electrical force, or any other load but not the atmosphere (consider, for example, adiabatic or isothermal compression). Such systems are called 'purely' mechanical systems.

It is better to take the external agent to be a 'mechanical system' and not a statistical system. For example, we can consider a piston being pushed by some load instead of considering gas on both sides. This helps us relate the conventional mechanical definition of pressure to the statistical definition of pressure. Considering gases on both sides, on the other hand, provides a different type of insight, which tells us that for volume-exchange equilibrium a certain quantity, which we define to be pressure, should be equal on both sides. I am doing both of these below. The conventional understanding of the pressure of a statistical system is good enough for intuitive purposes. In classical kinetic theory it is defined as the momentum transferred to the walls per unit time by the randomly striking particles. Generalizing this to a QM setting, we can consider all possible energy states and calculate the infinitesimal virtual work done in compressing each state by an infinitesimal distance. Taking the ensemble average would give


the measure of the average work needed to be done, and that gives the measure of the internal pressure. I am doing this in detail below.

One definition of pressure for an 'inherently statistical' system can be obtained by considering mechanical equilibrium. For example, consider an ideal gas system which is right now in some equilibrium with a piston carrying some load. Note that the load-piston system is an isentropic system, and for such an 'inherently mechanical' system we already have a definition of pressure. Obviously, from mechanical equilibrium considerations of the piston, we can say that the internal pressure (which we still need to define in terms of the system's own properties) has to be equal to the external pressure. In this way we can associate an internal pressure with this 'inherently statistical' gas system. Isolate the system from any kind of heat bath, that is, make it adiabatic. Also keep all other extensive properties like $N$, $q$ etc. the same. Increase the pressure by an infinitesimal amount, which would compress the system by an infinitesimal amount. This process is a reversible adiabatic process and hence an isentropic process. In fact, we can argue that each particle stays in its same QM state and the energy of each state gets shifted because of the compression, so the number of QM states, and hence the entropy, remains the same. The work done by the external force is $-p_{int}\, dV$. Calculating the ensemble average of the change in energy of each QM state, we can conclude that the change in internal energy is $(\partial U/\partial V)_{\sigma,N,q,\text{etc}}\, dV$. Now the first law tells us that both of these have to be the same, so we can define

$$p_{int} = -\left(\frac{\partial U}{\partial V}\right)_{\sigma,N,q,\text{etc}}.$$

This equation tells us that, from mechanical equilibrium considerations of the piston, we can associate an internal pressure with the given system, calculated from the properties of the system as above.

There is another view of the definition. Look at the above definition of pressure. There is no good way of forcing $\sigma = \text{const}$ in a macroscopic system. One can do rearrangements of the fundamental relation, but it is still difficult to force $U = \text{const}$. So we start thinking of the other 'hybrid' extensive properties, like enthalpy, Gibbs energy etc., defined above. They can be used to eliminate one of the extensive properties in favor of its corresponding intensive property in the fundamental relation. This helps because controlling $\tau$ is easier than controlling $\sigma$. Below, I am going to give an intuitive understanding based on the Helmholtz free energy $F(\tau, V, N, q, \text{etc}) = U - \tau\sigma$. We know

$$dU = \delta q + \delta W = \tau\, d\sigma - p\, dV + \mu\, dN + \phi\, dq + \text{etc}.$$

Consider two chambers of gas with a movable piston in between them, both in contact with the same heat bath. The total system consisting of the two parts cannot do any work. One can show that the Helmholtz free energy is minimized only when the pressure, as defined above, is the same on the two sides.
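The definition $p_{int} = -(\partial U/\partial V)_\sigma$ can be illustrated with a hedged numerical sketch that is not in the original text. It uses a single particle in a cubic box, whose levels scale as $1/L^2$, so that compressing at fixed quantum numbers (an isentropic change) gives $dE/dV = -2E/(3V)$. The canonical-ensemble average of this virtual work is compared against $-\partial F/\partial V$ obtained by finite differences; the agreement is the ensemble version of the equivalence argued above. Units and parameter values are arbitrary.

```python
import math

# single particle in a cubic box of side L: E = (nx^2 + ny^2 + nz^2) / L^2
# (units in which hbar^2 pi^2 / 2m = 1)
def levels(L, nmax=30):
    return [(nx * nx + ny * ny + nz * nz) / (L * L)
            for nx in range(1, nmax) for ny in range(1, nmax) for nz in range(1, nmax)]

def log_Z(L, tau):
    return math.log(sum(math.exp(-E / tau) for E in levels(L)))

tau, L = 5.0, 1.0
V = L ** 3

# p_int as the ensemble average of the virtual work -dE/dV with quantum numbers
# fixed: compressing shifts every level, dE/dV = -2E/(3V); state count unchanged
Es = levels(L)
w = [math.exp(-E / tau) for E in Es]
Z = sum(w)
p_virtual = sum((2 * E / (3 * V)) * wi for E, wi in zip(Es, w)) / Z

# cross-check: p = -dF/dV with F = -tau ln Z, by central finite difference in L
h = 1e-4
F_plus, F_minus = -tau * log_Z(L + h, tau), -tau * log_Z(L - h, tau)
p_free_energy = -(F_plus - F_minus) / ((L + h) ** 3 - (L - h) ** 3)

print(p_virtual, p_free_energy)
```

The two numbers agree to finite-difference accuracy; for this $1/L^2$ spectrum the virtual-work pressure also equals $2U/(3V)$, the familiar ideal-gas relation.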


6.3 Chemical Potential

The above is a typical entropy vs energy plot for different values of $N$. Note that $1/\tau = (\partial \sigma/\partial U)_{N,V,\text{etc}}$; hence, for the same number of particles, as we increase the temperature both entropy and internal energy increase.[18]

[18] The plot, although it depends on the material properties, for most realistic materials happens to be of the above form, with a positive slope at all energies. Hence, no negative temperatures, and a monotonic increase in energy as well as entropy. (Negative temperatures do exist, but I won't go into those details! Check the appendix of Kroemer and the reference to Ramsey's paper therein if you are interested.) It can be considered as one additional basic postulate of statistical mechanics that entropy is a single-valued, monotonic, continuous, differentiable, zero-starting-with-infinite-slope function of energy (a third law of thermodynamics). From this postulate one can easily conclude that one can invert a given $\sigma = \sigma(U, V, N, q, \text{etc})$ and obtain $U = U(\sigma, V, N, q, \text{etc})$, and vice versa - a fact that we tacitly assumed above while writing the fundamental relation and the definitions of the intensive properties.

Now draw a straight horizontal line across the plot to represent constant entropy. This tells us that if we want to keep the value of the entropy the same as before, we can add a few more particles with a specific amount of energy. The plot gives a very good intuitive picture that this is possible. That is, we can change the energy without changing the entropy. The energy per added particle needed to do this, $(\partial U/\partial N)_{\sigma,V,\text{etc}}$, is what would be called the chemical potential. Hence the energy transfer is $\mu\, dN$; but we know that the entropy change is zero, and hence all of this is work done.

An interesting case is when internal energy and number of particles are not independent variables - for example, when $\sigma$ is only an explicit function of either $U$ or $N$, but not both. For example, consider a chain of 'fixed' charged particles (maybe distinguishable - say, particles with different masses) under the background of a constant electric field. In such a case, exchange of energy and exchange of particles with the reservoir is one and the same process. Hence we can


either consider $\sigma = \sigma(U)$ and hence define $\mu = 0$, or we can consider $\sigma = \sigma(N)$ and hence $\tau = \infty$. In the first case we treat it as an energy exchange process, and in the second case we consider it a particle exchange process. The reservoir will come into equilibrium with the system when, in the first picture, the temperatures are the same or, in the second picture, the chemical potentials are the same. A similar situation occurs in practice with photons (read below).

We can even consider a practical example. Consider a chamber of fixed volume of classical ideal gas in thermal equilibrium with a heat bath at temperature $\tau$. Now suppose we want to add $dN$ more particles into the system in such a way that the entropy of the system remains the same. The entropy of an ideal gas is $\sigma = N\{\ln(n_Q V/N) + 5/2\}$, where $n_Q = (M\tau/2\pi\hbar^2)^{3/2}$, and the total energy of the system is given by $U = \frac{3}{2}N\tau$. So one can consider $\sigma = \sigma(U, V, N)$, and one can easily find a particular energy $U_1$ such that $dU = (dN)\,U_1$ keeps $d\sigma = 0$. Such an energy level $U_1$ is called the intrinsic chemical potential. This energy level itself might be degenerate, or it might not even be allowed, just like in the forbidden gap. Similar to our previous thought experiments, one can consider the free energy minimization for diffusive equilibrium to obtain another intuitive feel for the chemical potential.

6.4 Fermi Level

$U_{redundant} = U_{redundant}(\sigma, V, N, q, \text{etc})$ and hence $dU_{redundant} = \tau\, d\sigma - p\, dV + \mu\, dN + \phi\, dq + \text{etc}$. Usually in semiconductors, the charge $q$ and the number of particles $N$ are dependent variables: $q = Ne$. We can use this 'constraint equation' to eliminate one of the extensive state variables, say the charge $q$, so that $dU = \tau\, d\sigma - p\, dV + (\mu + e\phi)\, dN + \text{etc}$. Note that now $(\partial U/\partial N)_{\sigma,V} = \mu + e\phi$, and not just $\mu$. So if we write the internal energy as a function of fewer variables (all independent), then the chemical potential has to be taken as a function of the electric potential. In the following discussion we will relate the probability of occupancy of any single-particle orbital to the chemical potential, so we would expect that the occupancy of any particular single-particle orbital would also depend on the electric potential. In the solid state literature $\mu$ is referred to as $\mu_{int}$ and $e\phi$ is referred to as $\mu_{ext}$, whereas their sum is called the actual chemical potential. In the device physics literature, people do not go into such details and simply call the final sum of the two terms the Fermi level $E_f$. When the particles are charged, what actually matters to us is

$$\mu_{int}\, dN + e\phi\, dN = dU - \tau\, d\sigma = dF.$$

So if two isothermal systems are such that the total work done by the combined system is zero, then $E_f = \mu_{kinetic} + \mu_{electric} = \mu_{int} + e\phi$ would be the same on both sides. ((One side remark: note that, on similar lines, one can also distinguish between $\mu_{rotation}$ and $\mu_{linear}$. One very probable confusing trap now is the confusion between energy transfer and particle transfer. Note that we are still discussing 'only' diffusive equilibrium! Read the next item below! The best way to counter this is to consider the system as consisting of discrete energy states, where the energy of each state is built up of kinetic + rotation + potential parts (one level might obviously be much more degenerate because of the multiple possible combinations), and then to proceed in the usual way of thinking about particle transfer. Note that the inclusion of 'more energy' terms changes the degeneracy, and hence the 'entropy'; that is why the diffusive equilibrium dynamics changes. It has nothing to do with energy transfer.)) This $E_f$ is called the Fermi level.

Note that we haven't discussed what the modes of particle transport are, or what their driving forces are. All that we are saying here is that if at all the system is ever to reach a state of fixed 'macroscopic' properties[19], then $(\mu + e\phi)$ has to be a flat level. We can then use kinetic theories to come up with expressions for the various modes of carrier transport, and we can say that the sum of all of these has to be zero in equilibrium. Usually in the semiconductor literature carrier transport is divided into two parts, drift and diffusion (this may not be strictly true in complicated situations). Probably the term 'diffusive equilibrium' was originally meant for the balance of the diffusion current, but it strictly means the balance of the total particle current. So if diffusive equilibrium exists, then $E_{f1} = E_{f2}$, and that means $J_{drift} + J_{diffusion} = 0$; the two are not necessarily zero separately.

There is a lot of confusion involved in understanding this. People usually associate the diffusion current with entropy maximization and the drift current with mechanical forces. But that's not true: both currents are actually due to entropy maximization, at least in the case of thermal equilibrium. It so happens that the current flow can be divided into two parts, one of which 'solely' depends on the 'local' density and electric field, and the other of which 'solely' depends on the diffusion constant and the gradient of the carrier density. So we get a 'feeling' that there exists some physical 'driving' force for the drift current whereas there is none for

[19] Actually we mean a state of thermal equilibrium here. Even in a steady state condition we do have a fixed macroscopic situation, but it is not a thermal equilibrium state. A more strict definition of thermal equilibrium is the equality of all kinds of forward and reverse processes (known as the principle of detailed balance).


the diffusion current. So we usually conclude that diffusion is because of random motion and entropy maximization. But remember that even in the case of a solely drift current, for example the current through a resistance, we could have a charged carrier with a lot of kinetic energy moving in the direction opposite to the applied electric field. Why doesn't this take place? In fact it does happen, but the probability of such an event is smaller than that of the forward current. The forward current increases the entropy of the world; hence that is the preferred direction. But in this particular case the entropy never reaches a maximum and hence an equilibrium situation never arrives. The best way to look at the affairs is that the diffusion current is due to the change in the intrinsic chemical potential and the drift current is due to the change in the electric potential. Both are preferred in one particular direction, although the other is also possible, because of entropy maximization. Note that the intrinsic chemical potential can easily be related to the carrier density; for example, for a non-interacting gas in the classical regime, $\exp(\mu_{kinetic}/\tau) = n/n_Q$. Note that in the case of the current through a resistance, usually the field is so small that the change in carrier density, and hence in the intrinsic chemical potential, is negligible, and so is the diffusion current. But in the case of currents in semiconductors, where the diffusion current can be considerable, carrier transport in both directions might increase entropy, but with different favoring factors. Usually one would be larger; in equilibrium the two equate each other. One good deceptive example is a tower containing massive particles. There is a drift current towards the bottom and a diffusion current going up; in equilibrium the two balance each other. But this is the kinetic way of looking at the problem. This problem can easily be solved in a thermodynamic way, without bothering about the various modes of transport. (If you compare the two approaches, you would get a relation, similar to the one in device physics called 'Einstein's Relation', between the diffusion constant and the drift constant (mobility).) This example is the same as that of an infinite semiconductor with a constant horizontal electric field.
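The 'tower' example and the Einstein relation it yields can be sketched numerically. This illustration (not from the text) assumes the barometric profile $n(z) \propto \exp(-mgz/\tau)$ that entropy maximization produces, and defines the mobility $b$ as drift velocity per unit force; the downward drift flux and the upward Fick diffusion flux then cancel exactly when $D = b\tau$. All parameter values are arbitrary.

```python
import math

# barometric 'tower' of massive particles: drift down vs diffusion up.
# mobility b = drift velocity per unit force; Einstein relation: D = b * tau.
m, g, tau, b = 2.0, 9.8, 1.5, 0.3
D = b * tau                       # Einstein relation
n0 = 1000.0

def n(z):                         # equilibrium profile from entropy maximization
    return n0 * math.exp(-m * g * z / tau)

z, h = 0.7, 1e-6
drift_flux = -n(z) * b * m * g                            # downward drift under gravity
diffusion_flux = -D * (n(z + h) - n(z - h)) / (2 * h)     # Fick's law, points up

print(drift_flux + diffusion_flux)   # net particle current vanishes in equilibrium
```

If $D$ is set to anything other than $b\tau$, the two fluxes no longer cancel and the assumed profile is not an equilibrium one - which is exactly how comparing the kinetic and thermodynamic treatments produces the Einstein relation.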

When you try to solve the above-mentioned 'tower problem' in the thermodynamic way, you realize that there are two alternative routes: one by considering the diffusive equilibrium between sections at different heights, and the other by considering the thermal equilibrium between sections at different heights. The two become identical because, in this particular case, energy exchange and particle exchange are one and the same process. This might not be true for a more realistic semiconductor problem. From this point of view, a real semiconductor problem is quite different from a chamber of free electron gas. Between two sections of a semiconductor you might have either phonon


transfer or electron transfer. It might be that a given semiconductor is maintained under a temperature gradient. In such a case, the conditions arrived at for achieving diffusive equilibrium of electrons do not necessarily guarantee thermal equilibrium. (If the only other way of energy transfer is phonon exchange, then the equality of the boson $E_f$ and the electron $E_f$ would also imply thermal equilibrium.)

Another point to note is that, as already mentioned, different particles like holes and electrons in general should have different Fermi levels associated with them. But it is not quite so. The right way is to forget about holes and assume that in both bands there are only electrons. This clearly tells us that only one Fermi level is required. Sticking with two kinds of particles is usually not done as far as the equilibrium distribution is concerned, because in equilibrium the two concentrations are not independent of each other. When we talk about the chemical potential of electrons, it is not only the chemical potential of the electrons in the conduction band but that of the entire semiconductor. The law of mass action tells us that the densities of the two kinds of particles are not independent. And in general, if the electron flow is balanced then the hole flow is also balanced. Note that we are talking about thermal equilibrium. Note also that in the semiconductor literature, by the hole distribution function we mean the probability of finding a hole, which is the same as the probability of NOT finding an electron in an 'ELECTRON single-particle energy state'.
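The statement that the hole distribution is the probability of NOT finding an electron can be written down directly. In this sketch (not from the text; all numbers arbitrary, with $\tau$ playing the role of $k_B T$), the two occupancies sum to one by construction, and a state well below the single Fermi level is almost certainly filled, so its hole probability is tiny.

```python
import math

def f_electron(eps, mu, tau):
    # Fermi-Dirac occupancy of an electron single-particle state
    return 1.0 / (math.exp((eps - mu) / tau) + 1.0)

def f_hole(eps, mu, tau):
    # probability of NOT finding an electron in that same electron state
    return 1.0 - f_electron(eps, mu, tau)

mu, tau = 0.0, 0.025
eps = -0.2            # a state well below the (single) Fermi level
print(f_electron(eps, mu, tau), f_hole(eps, mu, tau))
```

Algebraically $f_{hole}(\varepsilon) = 1/(\exp((\mu-\varepsilon)/\tau)+1)$: the same functional form as the electron occupancy with the energy axis reflected about $\mu$, which is why one Fermi level suffices for both carriers in equilibrium.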

The Fermi energy is the Fermi level at zero absolute temperature, and the two should not be confused. In semiconductors the Fermi level depends very weakly on temperature, and hence these terms are often used interchangeably (confusingly).

Part II

Special Methods for Calculating State Variables


In the introduction section we claimed that if both R and S are macroscopic systems, and if R+S is a closed system, then $G_R G_S$, for one particular energy of the system S, say $U$, would be a highly peaked function. So the system would only stay in the energy state $U$ - a bunch of most probable states. But note that $U$ would be degenerate. Now once we know the most


probable multi-particle state, we can go ahead and calculate the most probable distribution function for the distribution of particles among the different single-particle energy states. Note that the entire analysis assumes non-interacting particles. Another important point to note is that entropy maximization always gives us the most probable state even if the system is not macroscopic, although $G_R G_S$ would then not be sharply peaked. So we can calculate the different probabilities. But note that this probability distribution is only the most probable distribution.

7 Partition Function/Gibbs Sum/Grand Sum

In general, calculating any macroscopic property of a system consisting of either bosons or fermions is a formidable task if one starts from first principles, because one needs a good understanding of probability theory (for example, for the calculation of the average occupation number of a boson system). But there exists a rather powerful and generic formulation through which it becomes trivially simple to calculate any of these properties. We start with a system consisting of only one single-particle state and then define $Z$, the grand sum, for it. Calculating $Z$ for a single-state system is trivial. Once $Z$ is known, we have simple mathematical formulas for $\langle U \rangle$, $\langle N \rangle$, etc. This is what I am explaining below.

The probability of finding the system S in any 'one particular state' with energy $U$ (and R with $U_0 - U$), particles $N$ and volume $V$ is proportional to the degeneracy of the states of R with $U_0 - U$, $N_0 - N$, and $V_0 - V$:

$$\frac{P_S(U_1, N_1, V_1)}{P_S(U_2, N_2, V_2)} = \frac{\exp(\sigma_R(U_0 - U_1,\, N_0 - N_1,\, V_0 - V_1))}{\exp(\sigma_R(U_0 - U_2,\, N_0 - N_2,\, V_0 - V_2))}$$

which can be simplified, by expanding the reservoir entropy in a Taylor series, to:

$$\frac{P_S(U_1, N_1, V_1)}{P_S(U_2, N_2, V_2)} = \frac{\exp((-U_1 + \mu N_1 - p V_1)/\tau)}{\exp((-U_2 + \mu N_2 - p V_2)/\tau)}$$

or,

$$P_S(U, N, V) = \exp((-U + \mu N - p V)/\tau)/Z$$
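The claim that the probability is proportional to the reservoir degeneracy, and that this reduces to a Boltzmann-type factor, can be verified on a small discrete model. The sketch below is an illustration, not from the text: it uses an Einstein-solid reservoir and a system exchanging only energy, and compares the exact degeneracy ratio with $\exp(-\varepsilon/\tau)$, where $1/\tau$ is read off the slope of the reservoir's entropy.

```python
from math import comb, log, exp

# reservoir: Einstein solid with N oscillators and about Q quanta; the system is
# one two-level 'orbital' with energies 0 and eps (in units of one quantum).
N, Q, eps = 400, 1200, 3

def g(q):                        # reservoir multiplicity
    return comb(q + N - 1, q)

# exact probability ratio P(eps)/P(0), from the reservoir degeneracies alone
ratio_exact = g(Q - eps) / g(Q)

# reservoir temperature from 1/tau = d(ln g)/dU, by central finite difference
inv_tau = (log(g(Q + 1)) - log(g(Q - 1))) / 2
ratio_boltzmann = exp(-eps * inv_tau)

print(ratio_exact, ratio_boltzmann)
```

The two ratios agree to better than a percent for this reservoir size, and the agreement sharpens as the reservoir grows - which is exactly the Taylor-expansion step in the derivation above.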


where $Z$ is the sum of the exponential over all possible combinations of $U$, $V$, and $N$. If any states are degenerate, then $Z$ should contain that term as many times. In general $Z$ is called the grand sum. If $V$ is constant, then $Z$ is called the Gibbs sum. If both $V$ and $N$ are constant, then $Z$ is called the partition function.

7.1 Fermi Gas/Fermi-Dirac distribution

Fermi Gas: An ideal Fermi gas is a non-interacting gas of Fermi particles.

Fermi-Dirac Distribution: Consider any one single-particle state (also called an orbital). It has only two possible energy states: one having one particle and the other having no particle. Hence

$$Z = 1 + \exp((\mu - \varepsilon)/\tau).$$

Also, $\langle N \rangle = \text{occupation index} = \dfrac{1}{1 + \exp((\varepsilon - \mu)/\tau)}$. If one knows the density of states $g(E_{single\,particle})$, then one can calculate all of the system state variables, including the chemical potential involved in this expression.
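A minimal sketch of the two-state grand sum above (not from the text; parameter values arbitrary): the occupation index $x/(1+x)$ with $x = \exp((\mu-\varepsilon)/\tau)$ reproduces the familiar Fermi-Dirac limits - exactly $1/2$ at $\varepsilon = \mu$, nearly 1 well below $\mu$, nearly 0 well above.

```python
import math

# one orbital in contact with a reservoir at temperature tau and chemical
# potential mu: grand sum Z = 1 + exp((mu - eps)/tau), occupancy 0 or 1
def occupation(eps, mu, tau):
    x = math.exp((mu - eps) / tau)   # Gibbs factor of the occupied state
    return x / (1.0 + x)             # <N> = Fermi-Dirac occupation index

mu, tau = 0.5, 0.05
f_at_mu = occupation(mu, mu, tau)             # exactly 1/2 at eps = mu
f_low = occupation(mu - 10 * tau, mu, tau)    # deep below mu: nearly 1
f_high = occupation(mu + 10 * tau, mu, tau)   # far above mu: nearly 0

print(f_at_mu, f_low, f_high)
```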

7.2 Bose Gas/Boson Distribution

Bose Gas: An ideal Bose gas is a non-interacting gas of Bose particles.

Bose Distribution: Consider any one single-particle state:

$$Z = \sum_{N=0}^{\infty} \exp(N(\mu - \varepsilon)/\tau) = \frac{1}{1 - \exp((\mu - \varepsilon)/\tau)}.$$

Using $\langle N \rangle = \tau\, \partial(\log Z)/\partial \mu$, we get $\langle N \rangle = \dfrac{1}{\exp((\varepsilon - \mu)/\tau) - 1}$. This is the famous Bose-Einstein distribution. If one knows the density of states $g(E_{single\,particle})$, then one can calculate all of the system state variables, including the chemical potential involved in this expression.
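The closed form can be cross-checked by summing the Gibbs sum directly: with $x = \exp((\mu-\varepsilon)/\tau) < 1$, the average $\sum_N N x^N / \sum_N x^N$ should equal $1/(\exp((\varepsilon-\mu)/\tau) - 1)$. A truncated-sum sketch with arbitrary parameters:

```python
import math

# one bosonic orbital: Gibbs sum Z = sum over N of exp(N (mu - eps)/tau)
eps, mu, tau = 1.0, 0.2, 0.4
x = math.exp((mu - eps) / tau)        # must be < 1 for the sum to converge

# truncate the geometric sums at a large N_max (terms underflow harmlessly)
N_max = 500
Z = sum(x ** n for n in range(N_max))
N_avg_direct = sum(n * x ** n for n in range(N_max)) / Z

# closed form: Bose-Einstein occupation
N_avg_formula = 1.0 / (math.exp((eps - mu) / tau) - 1.0)

print(N_avg_direct, N_avg_formula)
```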

7.2.1 Photon Gas/Planck's Distribution

Photon Gas: An ideal photon gas is a non-interacting gas of special Bose particles, like photons, which have zero rest mass.

Planck Distribution: The Bose-Einstein and Planck distributions are, strictly speaking, the same. Planck's distribution is the special case of the Bose-Einstein distribution for special bosons, like photons, whose number is not conserved. Planck's distribution can be obtained from the Bose distribution with zero chemical potential. Think of a cubical cavity. There can be many QM single-particle states possible. The general property of all bosons is that there can be as many bosons as possible in any one QM single-particle state; there is no restriction. In the case of radiation, the QM state is the same as an EM mode, like TE/TM etc. (note that in a cavity, a different TE mode implies an altogether different profile, and a different energy as well, if not degenerate). And the more photons in one mode, the more intense the radiation of that particular mode. (For simplicity assume that the modes are not degenerate; in that case each mode corresponds to a unique radiation frequency.) Now, in a cavity filled with common bosons like He atoms: (1) He atoms can interact with each other to convert a lower-mode He into a higher-mode He and in the process exchange energy with the surroundings, or (2) exchange energy to excite electrons inside He to higher internal states, or (3) exchange He atoms with the surroundings. So basically I can always vary energy and particles independently. But for a gas of non-interacting bosons like photons, (1) and (2) are not possible. So exchanging energy and exchanging particles become one and the same thing: they cannot be treated separately. If you exchange energy, you exchange particles too, and vice versa. What this basically means is that photons cannot interact with each other to convert a photon of one radiation mode into a photon of another radiation mode. That is, I can't afford to send a photon into a higher energy level; if I want more energy, I have to get one more photon from outside. Now, to obtain a distribution function, just think of one orbital. Repeating the same derivation as above, we note that the summation over N disappears, which finally results in a distribution with no chemical potential in it. This is called Planck's distribution. But why is the chemical potential zero for photons? Because $\sigma$ is not a function of $N$. Read the intensive/extensive section.
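Setting $\mu = 0$ in the Bose occupancy gives the Planck distribution. A short sketch (illustrative parameters, not from the text) showing its two familiar limits: classical equipartition, where the mean energy per mode $\langle n \rangle \varepsilon \to \tau$ for $\varepsilon \ll \tau$, and exponential freeze-out for $\varepsilon \gg \tau$:

```python
import math

# Planck distribution: Bose-Einstein occupancy with mu = 0, since photon number
# is not conserved (adding energy *is* adding photons)
def planck_n(eps, tau):
    return 1.0 / (math.exp(eps / tau) - 1.0)

tau = 1.0
low = 1e-3 * planck_n(1e-3, tau)     # eps << tau: <n>*eps approaches tau
high = 10.0 * planck_n(10.0, tau)    # eps >> tau: mode is frozen out

print(low, high)
```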

7.3 Ideal Gas/Boltzmann Distribution

Ideal Classical Gas: A non-interacting gas in the low density limit ($n \ll n_Q$; $n_Q$ is the quantum density and is defined below - in semiconductors $n_Q$ is called $N_c$ or $N_v$) is referred to as an ideal classical gas. That is, a gas of non-interacting particles in the 'classical regime'. (Note that usually an ideal gas is also supposed to be a free gas, but we are relaxing that restriction.)

Boltzmann Distribution: We can obtain two forms of the Boltzmann distribution, using either the partition function (multi-particle form) or the Gibbs function (single-particle form). In the multi-particle-state form, the Boltzmann distribution is the 'strictly' correct distribution and can be applied to both Fermi and Bose gas systems. In the more commonly known single-particle-state form, the Boltzmann distribution is often called the 'classical distribution' and


is interpreted as an approximation of both the Bose and Fermi distributions in the low density limit[20]. An important point to keep in mind is that there are only two kinds of particles, bosons and fermions[21]. So the only correct distributions, in single-particle form, for a gas of identical particles - whether atoms, electrons or photons - are the Bose distribution and the Fermi distribution. Both of these can be approximated by the Boltzmann distribution in the low density limit, when it becomes highly unlikely to find two particles in the same single-particle state. I am giving the multi-particle formulation of the Boltzmann distribution just because in this form it can be viewed as a strictly correct distribution and not just an approximation, although it is not always useful, because calculating the multi-particle state density is very tough.
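The low-density claim can be seen in one line per distribution. Writing the activity factor $\lambda = \exp((\mu-\varepsilon)/\tau)$, the three single-orbital occupancies below coincide when $\lambda \ll 1$ and visibly disagree otherwise. This is an illustrative sketch (sample values arbitrary), not from the text.

```python
# single-orbital occupancies as functions of the activity factor
# lam = exp((mu - eps)/tau)
def fermi(lam):     return lam / (1 + lam)   # Fermi-Dirac
def bose(lam):      return lam / (1 - lam)   # Bose-Einstein (needs lam < 1)
def boltzmann(lam): return lam               # classical limit

lam_dilute, lam_dense = 1e-3, 0.5
dilute = (fermi(lam_dilute), bose(lam_dilute), boltzmann(lam_dilute))
dense = (fermi(lam_dense), bose(lam_dense), boltzmann(lam_dense))

print(dilute)   # nearly identical in the low density limit
print(dense)    # visibly different when occupancy is not small
```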

Multi-Particle-State Form

Suppose we take our system S to be an entire semiconductor. The semiconductor interacts with the external world only through a heat bath[22]. Now we can start from the partition function instead of the Gibbs function, because the total number of electrons is constant. Note that the entire multi-particle system can be in many multi-particle energy states, some of which might also be degenerate. The probability of finding the electron gas in one of these multi-particle energy states would be proportional to the Boltzmann factor, that is, the negative exponential of the energy of this state. The proportionality factor is the reciprocal of the partition function. This is the most general statement of the Boltzmann distribution. In this sense the Boltzmann distribution is exactly correct - it is correct even for multi-particle fermion and multi-particle boson systems, and is not some approximate form of the Fermi/Bose distribution. The partition function is basically the sum of all the Boltzmann factors corresponding to all the multi-particle states

[20] Fermi/Bose distributions are never written in multi-particle forms. Hence we can only justify this statement in single-particle forms.

[21] Some people try to confuse the issue by saying that 'classical identical' particles follow the Boltzmann distribution and 'quantum indistinguishable' particles follow either the Bose or the Fermi distribution. I think there is no difference between indistinguishability and identicality!

[22] In practice it is the radiative recombination of photons which plays the role of energy exchange with a fictitious heat bath. Photon exchange can be treated as particle exchange or just as energy exchange. If treated as particles, then we take the temperature of the radiation as infinity; if treated simply as energy, then the chemical potential of the non-interacting photon gas is taken as zero. This is necessary because energy and particle exchange are one and the same process, and we cannot include both separately in our equations. We would, here, take the energy approach. This makes sense because the electron gas can also exchange energy with the photon gas, so we have to bring in the concept of temperature anyway; whereas, since different particles need to be given different chemical potentials, we can simply assign $\mu = 0$ to photons. This is the same as saying that the internal energy of the semiconductor is not a function of the number of photons present inside; rather, it is just a function of the entropy of the electrons. The total number of electrons is kept constant. Hence the internal energy of the semiconductor can only change because of the change in the entropy, and hence photons are to be considered not as particles but just as 'energy'.


that entire electron gas can take. In case of degeneracy, the same Boltzmann factor is added as many times. Now, the theoretical algorithm to calculate the partition function is to first calculate the 'density of multi-particle states' by whatever method you want and then to convert the sum into an integral. Calculating the density of multi-particle states is an awesome task. This calls for approximations, and what we get after approximation is usually correct only in the low density limit. It is more usual to call this approximate distribution the Boltzmann distribution.

Let us consider a simple case where the 'gas' consists of only one electron. In such a case we can easily calculate the single-particle density of states using elementary QM ideas, and then we can evaluate the integral to calculate the single-particle partition function Z_1 = n_Q V = n_Q/n.

Now suppose we first assume that all electrons are distinguishable (no Pauli exclusion!). To understand the mathematical trick, assume there are just two particles and just two single-particle states E_1 and E_2 are available. Then

Z_2 = exp(−(E_1 + E_1)/τ) + exp(−(E_1 + E_2)/τ) + exp(−(E_2 + E_1)/τ) + exp(−(E_2 + E_2)/τ),

where the first variable is always for the first particle and the second variable for the second particle. This can also be written as Z_2 = (exp(−E_1/τ) + exp(−E_2/τ)) (exp(−E_1/τ) + exp(−E_2/τ)), which is the same as Z_2 = (exp(−E_1/τ) + exp(−E_2/τ))². Generalizing the above mathematics, it is easy to realize that the multi-distinguishable-particle partition function would be Z = Z_1^N, where N is the number of particles.

But actual electrons are not distinguishable. In the above sum, (A=E_1 and B=E_2) (which means that particle A is in state E_1 and particle B is in state E_2) and (A=E_2 and B=E_1) are counted as two degenerate states and hence summed twice. But for actual electrons this represents a single energy state of energy (E_1 + E_2) with no degeneracy. There is an approximate method to correct this: we can divide the entire sum by 2!. But note the error: (A=E_1 and B=E_1) was counted just once and we should not have divided it by two. One can quickly conclude that the above division would be exactly correct if two particles were not allowed to go into the same state. Be careful! In that case the above multiplication and power of N would become wrong! But still, division by N! is approximately correct when two particles rarely go into the same single-particle state (that is, the low density limit). So finally

Z_N ≈ Z_1^N / N!

in the low density limit. Hence

P(ε) ≈ (N!/Z_1^N) exp(−ε/τ).

This is called the Boltzmann distribution. Note that ε here is the multi-particle energy and not the energy of a single-particle state. We would derive a single-particle-state version of the Boltzmann distribution below. We can relate U, F, σ, μ and other properties to either Z or g(E_multiparticle). Hence all of these can be evaluated.
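The counting trick above is easy to check numerically. Below is a small Python sketch (constructed for illustration, not part of the original notes): it verifies that the two-distinguishable-particle sum factorizes as Z_2 = Z_1², and that Z_1²/2! approximates the exact sum over distinct pairs once double occupancy is rare. The level spacing and temperature are arbitrary choices.

```python
import itertools
import math

tau = 10.0                                  # temperature in energy units
levels = [0.1 * k for k in range(500)]      # many available levels => low density limit

# Single-particle partition function Z1 = sum_i exp(-E_i / tau)
Z1 = sum(math.exp(-E / tau) for E in levels)

# Two distinguishable particles: the double sum factorizes into Z1**2
Z2_dist = sum(math.exp(-(Ea + Eb) / tau) for Ea in levels for Eb in levels)
assert abs(Z2_dist - Z1**2) < 1e-6 * Z1**2

# Exact indistinguishable count with no double occupancy: each pair {E_i, E_j},
# i < j, contributes exactly once
Z2_exact = sum(math.exp(-(Ea + Eb) / tau)
               for Ea, Eb in itertools.combinations(levels, 2))

# Low-density (Boltzmann) approximation: divide the distinguishable count by 2!
Z2_boltz = Z1**2 / math.factorial(2)
assert abs(Z2_exact - Z2_boltz) / Z2_exact < 0.01
print(Z2_exact, Z2_boltz)
```

The small residual discrepancy is exactly the over-counted double-occupancy terms, which shrink as more states become available per particle.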

Single-Particle-State Form


A more important distribution in the case of non-interacting particles tells the probability of finding a particle in some single-particle energy state. To get this distribution we can take any one single-particle state as our system S. It can then exchange both energy (photons) and particles (electrons) with the surroundings to remain in thermal equilibrium at some constant temperature. The probability is proportional to the Gibbs factor (an exponential in energy and number of particles), and when a single-particle state is taken as the system, calculating the Gibbs sum is trivial. So we end up with either the Fermi distribution or the Bose distribution. Both of these can be approximated by the Boltzmann distribution, noting that in the low density limit the probability of finding a particle in any of the single-particle states is expected to be much smaller than one, because the number of particles is much smaller than the number of available states. So exp((ε − μ)/τ) ≫ 1.

One can also rewrite the Boltzmann distribution without the chemical potential, by noting that N/Z_1 = exp(μ/τ). This last statement is easy to prove by noting that f(ε) = exp(μ/τ) exp(−ε/τ), so that N = Σ_ε f(ε) = exp(μ/τ) Z_1. But writing the Boltzmann distribution without the chemical potential is unnecessary, and it is usually not done in the semiconductor literature.

If one knows the density of states g(E_singleparticle) then one can also calculate all of the system state variables like entropy, energy, chemical potential etc. starting from this single-particle formulation. Usually calculations using the single-particle form are easier than those using the multi-particle form; that is why this form is more popular. For a 'free' ideal gas one can easily show (note that g(E_singleparticle) for free particles is well known):

pV = Nτ
U = (3/2)Nτ
σ = N(ln(n_Q/n) + 5/2)
μ = τ ln(n/n_Q)
n_Q = (Mτ/2πℏ²)^(3/2)
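As a numerical sanity check on the last two relations (a sketch added for illustration; the function names are mine and the chosen density is an arbitrary nondegenerate value), one can evaluate the quantum concentration n_Q for free electrons at room temperature and the corresponding low-density chemical potential:

```python
import math

# Physical constants (SI)
hbar = 1.054571817e-34    # J s
kB = 1.380649e-23         # J/K
m_e = 9.1093837015e-31    # kg (free electron mass)

def quantum_concentration(M, T):
    """n_Q = (M*tau / (2*pi*hbar^2))^(3/2), with tau = kB*T."""
    tau = kB * T
    return (M * tau / (2.0 * math.pi * hbar**2)) ** 1.5

def mu_boltzmann(n, M, T):
    """mu = tau * ln(n/n_Q); valid only in the low density limit n << n_Q."""
    tau = kB * T
    return tau * math.log(n / quantum_concentration(M, T))

nQ = quantum_concentration(m_e, 300.0)    # ~1.25e25 m^-3 for free electrons at 300 K
mu = mu_boltzmann(1e22, m_e, 300.0)       # a nondegenerate density, n << n_Q
print(nQ, mu / (kB * 300.0))              # mu/tau = ln(n/n_Q) is large and negative
```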


Classical Equi-Partition

It is interesting to note that, in classical statistical mechanics, a continuum of single-particle energy states is assumed and Z_1 is calculated by integrating in momentum space. This gives us the law of equipartition, because the integral simply changes by a factor if we change the number of degrees of freedom. Our above QM calculation of the density of states is correct only for a mono-atomic system whose internal states do not change. In the case of other degrees of freedom, the appropriate QM quantization has to be applied to see what the new density of states would be. In which cases classical equipartition would be approximately correct is difficult to say. All we can say is that the classical procedure is simply not correct and QM should be followed.
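The equipartition statement for translational degrees of freedom can be illustrated with a short Monte Carlo sketch (my construction, not the author's calculation): sampling each momentum component from the Boltzmann weight exp(−p²/(2Mτ)) gives a mean kinetic energy of τ/2 per quadratic degree of freedom, i.e. (3/2)τ in three dimensions.

```python
import math
import random

# Equipartition check: for E = p^2 / (2*M) with three momentum degrees of
# freedom, the Boltzmann-weighted mean energy is (3/2) * tau.
random.seed(0)
tau, M = 1.0, 1.0
sigma_p = math.sqrt(M * tau)   # exp(-p^2/(2*M*tau)) is a Gaussian of variance M*tau

samples = 200_000
mean_E = sum(
    sum(random.gauss(0.0, sigma_p) ** 2 for _ in range(3)) / (2.0 * M)
    for _ in range(samples)
) / samples

assert abs(mean_E - 1.5 * tau) < 0.02   # tau/2 per quadratic degree of freedom
print(mean_E)
```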

Part III

Thermodynamics from Quantum Statistical View Point


8 First Law
dU = đq_rev + đW_rev = đq_irrev + đW_irrev

There is a frequent tendency of using −p dV, τ dσ, μ dN etc. for đW and đq, but one needs to be very careful in using these. For a reversible process (see the second law below for reversible/irreversible discussions), using U = U(σ, V, N, q = charge, etc.) and using the definition of temperature discussed above, đq_rev = τ dσ and đW_rev = −p dV + μ dN + φ dq + etc. Note that these relations for heat and work are valid only for reversible changes. One can still write a similar expression for the irreversible work, đW_irrev = −p dV + μ dN + φ dq + etc., but here the pressure etc. are those of the surroundings, not of the system. Also note that reversible work and heat are path independent, although we still stick with the đ notation.

Also note that even though volume, number of particles, charge etc. might remain constant, it is possible to perform irreversible work on a system. For example, when a running ball stops due to friction, it does perform work on the surroundings, although all independent extensive parameters of the surroundings except σ remain constant. The system S does work on R, increasing R's internal energy and decreasing its own. Because of the increase in U, R's entropy also increases; whereas the ball, being a pure mechanical system, keeps its entropy constant. In this particular case dU_R = đW_{R,irrev} = τ dσ_R. Heat exchange is zero because of the constant entropy of S.


9 Second Law

dσ_{R+S} = 0 (σ_{R+S} = max) for a reversible change and dσ_{R+S} > 0 for an irreversible (spontaneous) change: this is the statement of the second law (in statistical mechanics it is concluded logically from the fundamental assumption). There are many other equivalent statements of the second law.

9.1 Reversible/Irreversible Processes - Stray Comments

Not all infinitesimal processes should be defined as reversible. That is a rather confusing definition, although it may be correct in some sense, so do not go by it. Use the most generic criterion of reversibility: zero total entropy change. Whenever an infinitesimal shift is made around some equilibrium situation, it is reversible.

One should not confuse isentropic and adiabatic processes. They are not the same in the most general situation. In an adiabatic process the heat exchange is zero (while the energy exchanged might be nonzero), and hence the entropy of the 'source' of energy is constant although the entropy of the receiving system might change. For example, compression of an ideal gas in an adiabatic chamber does change the entropy of the gas. An isentropic process, on the other hand, is defined as one in which the entropy of the system of interest S remains the same; the entropy of the reservoir R might change. So, concisely, an adiabatic process means either dσ_R = 0 or dσ_S = 0 (depending on which one is the source of energy), whereas an isentropic process means dσ_S = 0.
Reversible processes (dσ_{R+S} = 0) are not the same as adiabatic processes (either dσ_R = 0 or dσ_S = 0) in the most general situation. There exist both reversible and irreversible adiabatic processes. For example, an infinitesimal isothermal deviation from equilibrium is reversible but not adiabatic.

Reversible processes (dσ_{R+S} = 0) are not the same as isentropic processes (dσ_S = 0) in the most general situation. The most general reversibility criterion is that the total change in the entropy of R+S should be zero. Any infinitesimal change from any equilibrium situation is reversible but, in general, not an isentropic process. For example, in the case of isothermal compression, assume that a piston which divides a chamber of gas in two parts is in equilibrium with a heat bath. Now shift the piston by some infinitesimal amount. The resulting change would be a reversible change, because the total entropy change of R+S would be zero, but it is not an isentropic change, because the change in entropy of S alone is finite.

Note that isentropic processes and reversible processes are the same only when either the process is adiabatic (assuming the source of energy is an external agent) or the external agent is isentropic. Reversible adiabatic processes are isentropic processes, whereas the reverse of this statement is not true: not all isentropic processes are reversible adiabatic processes. Similarly, a reversible change under the influence of an isentropic reservoir would be an isentropic process.

9.2 Work and heat in irreversible process
One can easily conclude đW_irrev > đW_rev by noting that P_ext,irrev > P_int,irrev. Hence đQ_irrev < đQ_rev. Although I have proved these only for P dV work, it is generically true.
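A concrete instance for P dV work (an illustrative sketch with arbitrary numbers, not from the original text): isothermally compressing an ideal gas from V1 to V2 reversibly requires W_rev = Nτ ln(V1/V2), while pushing the piston in one stroke against the constant final pressure Nτ/V2 requires more work; since dU = 0 for an isothermal ideal gas, correspondingly more heat is rejected.

```python
import math

# Work done ON an ideal gas during isothermal compression V1 -> V2 (V2 < V1),
# in units where N*tau = 1, so p = 1/V.
V1, V2 = 2.0, 1.0

# Reversible: external pressure tracks internal pressure at every instant
W_rev = math.log(V1 / V2)          # integral of (1/V) dV from V2 to V1

# Irreversible: constant external pressure equal to the final pressure 1/V2
W_irrev = (1.0 / V2) * (V1 - V2)

assert W_irrev > W_rev             # x - 1 > ln(x) for any x != 1
# Isothermal => dU = 0, so Q = -W: more heat leaves in the irreversible case
print(W_rev, W_irrev)
```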

9.3 Equivalent Heat Engine Statements
Now suppose we want to convert heat into work in some reversible cyclic manner. We absorb heat from one heat bath; that increases the working system's energy and entropy. Then we perform some work, which takes out energy but not entropy. So to start a second cycle we need to give away the entropy also, and there is no way to do that other than exchanging some more energy. So what we actually do is this: we do not convert the entire heat, that we first absorbed, into work. We preserve some, and then we give off the entire entropy with this small amount of energy, in the form of heat, to a heat bath at a lower temperature. A smaller energy can accompany the same change in entropy because the temperature of the second heat bath is smaller. This cyclic procedure tells us that it is not possible to convert the absorbed heat completely into work in a cyclic way: to restore the entropy every cycle we have to give back some energy in the form of 'wasted' heat to a second heat bath. If, in the above cyclic procedure, heat is exchanged with each bath at the bath's own temperature, then the efficiency (work converted/heat absorbed) is the maximum possible, as the process is reversible. A reversible process is defined as one in which the entropy of R+S remains the same; in the above reversible process, when we absorbed energy, the entropy of the heat bath decreased whereas the entropy of the system increased by the same amount. Now, if the work conversion process is a bit irreversible, then the entropy of the system S increases a bit further. Note that a pure mechanical system does not have any entropy, so entropy cannot go out to the surroundings with pure mechanical work. Now we need to throw out a bigger amount of entropy to the lower-temperature bath, which needs more accompanying heat, so we are able to extract only a smaller amount of work. So the efficiency in an irreversible process is smaller.
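The entropy bookkeeping above fixes the maximum efficiency. A minimal sketch (illustrative numbers, not from the original text): in a reversible cycle the entropy absorbed at the high temperature must equal the entropy dumped at the low temperature, which forces Q_low = Q_high τ_low/τ_high and hence an efficiency of 1 − τ_low/τ_high.

```python
# Entropy bookkeeping for one reversible cycle: entropy in = Q_high / tau_high,
# entropy out = Q_low / tau_low; a closed cycle requires these to be equal.
tau_high, tau_low = 600.0, 300.0
Q_high = 100.0

Q_low = Q_high * tau_low / tau_high   # minimum 'wasted' heat fixed by entropy balance
W = Q_high - Q_low                    # work extracted per cycle
eta = W / Q_high

assert abs(eta - (1.0 - tau_low / tau_high)) < 1e-12
print(eta)   # 0.5 for these temperatures
```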

10 Variations of Second Law

The most general statement of the second law predicts the direction of a spontaneous change from the change in the entropy of R+S. It would be nice if we could predict the direction just by studying the properties of S alone. This can be done under special conditions:

10.1 Entropy Maximization

A closed system (equilibrium within sub-parts, no interaction with the external world) is in equilibrium if and only if its entropy is maximum. If some infinitesimal change among the states of the sub-parts increases the entropy of the total system, then that infinitesimal process takes place irreversibly (spontaneously): dσ_{R+S} > 0 for a spontaneous change.

10.2 Energy Minimization

Suppose the entropy of the system S is fixed. For example, for a pendulum, the entropy of the pendulum itself is fixed; as it slows down, the entropy of R increases, the process is isothermal, and there is no heat exchange between R and S because the entropy of S is constant. If all other independent extensive properties of R except its entropy are constant, then dU_R = đW_{R,irrev} = τ dσ_R = −dU_S. So for a spontaneous change dσ_{R+S} = dσ_R > 0, and hence U_S should decrease. It is the same as saying that the system tries to minimize its energy in a spontaneous process.

10.3 Helmholtz Free Energy Minimization

F = F(τ, V, N, q, etc.) = U − τσ. Think of a system which is in thermal equilibrium with a heat bath at a constant temperature. The system cannot exchange volume, particles, charge etc. with the surroundings. The internal sub-parts of the system can do all sorts of juggling: they can exchange particles, volume, charge etc. among themselves, and the whole system can exchange 'heat' with the surroundings to keep the temperature constant.


Now, if such a system consisting of two parts is in equilibrium, then any small deviation from equilibrium would be identified as a reversible process. Intuitively we know that if such a system is in equilibrium then the pressure and chemical potential of the two sub-parts would be the same, and hence both −p_1 dV_1 − p_2 dV_2 and μ_1 dN_1 + μ_2 dN_2 would be zero (indices denote sub-parts; note that dV_1 + dV_2 = 0 and dN_1 + dN_2 = 0). But if the system is not in equilibrium then the pressure and/or chemical potential of the two sub-parts won't be the same. In such a situation we can conclusively say that both −p_1 dV_1 − p_2 dV_2 and μ_1 dN_1 + μ_2 dN_2 would be negative in the direction of spontaneous change. Please note that for a spontaneous change the above sum does not represent work done by the system. One should also note that such a system cannot do any work on the surroundings, neither in a reversible nor in a spontaneous change.

We can also prove the above inequality in a more mathematical manner. The most important point to notice is that when the system cannot perform work on the surroundings, one can calculate the entropy change of the surroundings knowing only the energy exchange: dU_R = τ dσ_R = −dU_S (for both reversible and spontaneous changes, as the volume, N, q etc. of the reservoir are constant). We know that for a spontaneous change dσ_R + dσ_S > 0. In this case this can also be written as d(U_S − τσ_S) < 0. Hence, if an isothermal system S does not perform any work, then dF ≤ 0. So for a spontaneous process at constant τ the change in free energy is negative, and it becomes zero at equilibrium; which is the same as saying that the free energy of the system alone is minimized if đW = 0.
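As a toy illustration of free-energy minimization (a sketch constructed for this note, not the author's example): for a two-level system with level energies 0 and E held at temperature τ, minimizing F(p) = U − τσ over the upper-level occupation probability p lands on the Boltzmann occupancy.

```python
import math

# Minimize F(p) = U - tau*sigma for a two-level system (energies 0 and E) over
# the upper-level occupation probability p; the minimum should sit at the
# Boltzmann value p* = exp(-E/tau) / (1 + exp(-E/tau)).
tau, E = 1.0, 0.7

def F(p):
    U = p * E
    sigma = -(p * math.log(p) + (1 - p) * math.log(1 - p))   # Gibbs entropy
    return U - tau * sigma

# Brute-force scan for the minimizing p on a fine grid
ps = [k / 10000 for k in range(1, 10000)]
p_min = min(ps, key=F)

p_boltz = math.exp(-E / tau) / (1 + math.exp(-E / tau))
assert abs(p_min - p_boltz) < 1e-3
print(p_min, p_boltz)
```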

10.4 Enthalpy Minimization

For isobaric, adiabatic processes with đW_other-than-PdV = 0 (the system cannot perform work other than P dV work on the surroundings), the enthalpy H = H(σ, p, N, q, etc.) = U + PV of the system alone is minimized.

10.5 Gibbs Free Energy Minimization

For isothermal, isobaric processes with đW_other-than-PdV = 0 (the system cannot perform work other than P dV work on the surroundings), the Gibbs free energy G = G(τ, p, N, q, etc.) = U + PV − τσ of the system alone is minimized.


11 Third Law

I believe the third law is not as fundamental as the other two, because nowadays one can make systems that do not obey the third law; I am not a hundred percent sure though. In statistical mechanical language this law basically restricts the functional form of the σ versus U dependence. Although the σ(U) plot depends on the material properties, for most realistic materials it happens to be of the form shown above in the section on chemical potential, with a positive slope for all energies. Hence no negative temperatures are allowed, and a monotonic increase in energy as well as entropy is enforced. Negative temperatures do exist and I won't go into those details! Check the appendix of Kroemer and the reference to Ramsey's paper therein if you are interested. It can be considered one additional basic postulate of statistical mechanics (besides the macroscopic system assumption ('huge' number of available states) and the fundamental assumption) that the entropy is always a single-valued, monotonic, continuous, differentiable and zero-starting-with-infinite-slope function of energy (the third law of thermodynamics). By zero-starting I mean that the σ(U) plot always starts from σ = 0 for whatever V, N, etc., and the slope at this starting point is always infinite (τ = 0). From this postulate one can easily conclude that one can invert a given σ = σ(U, V, N, q, etc.) to obtain U = U(σ, V, N, q, etc.) and vice-versa, a fact that we tacitly assumed above in the article while writing about the fundamental relation and the definitions of intensive properties.

12 Fluctuation Dissipation Theorem

Let the system have some spontaneously fluctuating force. Let Q be some operator whose eigenstates and eigenvalues are |q_i⟩ and q_i. Let the system be in the nth energy eigenstate |E_n⟩ = Σ_i a_i |q_i⟩, and let Q_0 = ⟨Q⟩ = Σ_i |a_i|² q_i be the expectation value of Q in the state |E_n⟩. What is the mean square deviation from the mean value? One can symbolically write it as Σ_i |a_i|² (q_i − Q_0)², that is ⟨(Q − ⟨Q⟩)²⟩ = ⟨Q²⟩ − ⟨Q⟩². If it so happens that the expectation value of Q is zero, that is Q_0 = 0, then the mean square deviation from the mean is Σ_i |a_i|² q_i², which is the same as the mean value of Q². The time evolution of the fluctuating observable follows the Heisenberg equation of motion, iℏ (d/dt)Q = [Q, H].
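The variance formula Σ_i |a_i|² (q_i − Q_0)² is easy to exercise numerically. A small sketch (a constructed example, not from the original text): for a spin-1/2 in an energy eigenstate of H ∝ σ_z, the observable Q = σ_x has Q_0 = 0, so the mean square fluctuation equals ⟨Q²⟩ = 1.

```python
import cmath

# Variance of an observable Q in a state expanded in Q's own eigenbasis:
# <dQ^2> = sum_i |a_i|^2 (q_i - Q0)^2, with Q0 = sum_i |a_i|^2 q_i.
def variance(amplitudes, eigenvalues):
    probs = [abs(a) ** 2 for a in amplitudes]
    Q0 = sum(p * q for p, q in zip(probs, eigenvalues))
    return sum(p * (q - Q0) ** 2 for p, q in zip(probs, eigenvalues)), Q0

# Example: Q = sigma_x with eigenvalues +1, -1. The energy eigenstate |up_z>
# has amplitudes (1/sqrt(2), 1/sqrt(2)) in the sigma_x eigenbasis.
a = [1 / cmath.sqrt(2), 1 / cmath.sqrt(2)]
var, Q0 = variance(a, [1.0, -1.0])
print(Q0, var)   # Q0 = 0, so <dQ^2> equals <Q^2> = 1
```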


Part IV

Further Resources
13 Reading
Greiner's textbook [1] on statistical mechanics and thermodynamics, Kroemer's textbook [2] on thermal physics and Feynman's book [3] on statistical mechanics are all good references on this subject.

Some useful mathematical background can be found in one of the articles written by the author [4], in [5], or on the Wikipedia pages [6].

For a brief review of the postulatory nature of quantum mechanics see another article [7] or the references [8, 9, 10]. Symmetries in physical laws and linearity of quantum mechanics are discussed in related articles [11, 12]; references [13, 14, 15] also provide good discussions of symmetries. A review of quantum field theory (QFT) can be found in the reference [16] or [17, 18, 19], while an introductory treatment of quantum measurements can be found here [20] or here [21, 9].

A good discussion of equilibrium quantum statistical mechanics can be found in references [2, 1], while an introductory treatment of statistical quantum field theory (QFT) and details of the density matrix formalism can be found in the references [22, 23]. A brief discussion of irreversible or non-equilibrium thermodynamics can be found here [24, 25].

The electromagnetic and optical properties of nano-structured materials and devices are discussed in the reference [26].

To understand the relationship between magnetism, relativity, angular momentum and spin, readers may want to check references [27, 28] on magnetics and spins. Some details of the electron spin resonance (ESR) measurement setup can be found here [29, 30].

Electronic aspects of device physics and bipolar devices are discussed in [31, 32, 33, 34, 35]. Details of electronic band structure calculations are discussed in references [36, 37, 38], and semiclassical transport theory and quantum transport theory are discussed in references [39, 40, 41, 42].


A list of all related articles by the author can be found at the author's homepage.

References

[1] W. Greiner, L. Neise, H. Stöcker, and D. Rischke, Thermodynamics and Statistical Mechanics (Classical Theoretical Physics) (Springer, 2001).

[2] C. Kittel and H. Kroemer, Thermal Physics (2nd Edition) (W. H. Freeman, 1980).

[3] R. P. Feynman, Statistical Mechanics: A Set of Lectures (Advanced Book Classics) (Perseus Books Group, 1998).

[4] M. Agrawal, "Abstract Mathematics", in Fundamental Physics in Nano-Structured Materials and Devices (Stanford University, 2002). URL http://www.stanford.edu/~mukul/tutorials/math.pdf.

[5] E. Kreyszig, Advanced Engineering Mathematics (1988).

[6] C. authored, "Wikipedia", URL http://www.wikipedia.org.

[7] M. Agrawal, "Axiomatic/Postulatory Quantum Mechanics", in Fundamental Physics in Nano-Structured Materials and Devices (Stanford University, 2002). URL http://www.stanford.edu/~mukul/tutorials/Quantum_Mechanics.pdf.

[8] A. Bohm, Quantum Mechanics/Springer Study Edition (Springer, 2001).

[9] J. von Neumann, Mathematical Foundations of Quantum Mechanics (Princeton University Press, 1996).

[10] D. Bohm, Quantum Theory (Dover Publications, 1989).

[11] M. Agrawal, "Symmetries in Physical World", in Fundamental Physics in Nano-Structured Materials and Devices (Stanford University, 2002). URL http://www.stanford.edu/~mukul/tutorials/symetries.pdf.


[12] M. Agrawal, "Linearity in Quantum Mechanics", in Fundamental Physics in Nano-Structured Materials and Devices (Stanford University, 2003). URL http://www.stanford.edu/~mukul/tutorials/linear.pdf.

[13] R. P. Feynman, R. B. Leighton, and M. Sands, The Feynman Lectures on Physics, The Definitive Edition Volume 3 (2nd Edition) (Addison Wesley, 2005).

[14] H. Goldstein, C. P. Poole, and J. L. Safko, Classical Mechanics (3rd Edition) (Addison Wesley, 2002).

[15] R. Shankar, Principles of Quantum Mechanics (Plenum US, 1994).

[16] M. Agrawal, "Quantum Field Theory (QFT) and Quantum Optics (QED)", in Fundamental Physics in Nano-Structured Materials and Devices (Stanford University, 2004). URL http://www.stanford.edu/~mukul/tutorials/Quantum_Optics.pdf.

[17] H. Haken, Quantum Field Theory of Solids: An Introduction (Elsevier Science Publishing Company, 1983).

[18] M. E. Peskin, An Introduction to Quantum Field Theory (HarperCollins Publishers, 1995).

[19] S. Weinberg, The Quantum Theory of Fields, Vol. 1: Foundations (Cambridge University Press, 1995).

[20] M. Agrawal, "Quantum Measurements", in Fundamental Physics in Nano-Structured Materials and Devices (Stanford University, 2004). URL http://www.stanford.edu/~mukul/tutorials/Quantum_Measurements.pdf.

[21] Y. Yamamoto and A. Imamoglu, Mesoscopic Quantum Optics (John Wiley & Sons, Inc., New York, 1999).

[22] M. Agrawal, "Non-Equilibrium Statistical Quantum Field Theory", in Fundamental Physics in Nano-Structured Materials and Devices (Stanford University, 2005). URL http://www.stanford.edu/~mukul/tutorials/stat_QFT.pdf.


[23] A. A. Abrikosov, Methods of Quantum Field Theory in Statistical Physics (Selected Russian Publications in the Mathematical Sciences) (Dover Publications, 1977).

[24] M. Agrawal, "Basics of Irreversible Thermodynamics", in Fundamental Physics in Nano-Structured Materials and Devices (Stanford University, 2005). URL http://www.stanford.edu/~mukul/tutorials/Irreversible.pdf.

[25] N. Tschoegl, Fundamentals of Equilibrium and Steady-State Thermodynamics (Elsevier Science Ltd, 2000).

[26] M. Agrawal, "Optical Modeling of Nano-Structured Materials and Devices", in Fundamental Physics in Nano-Structured Materials and Devices (Stanford University, 2007). URL http://www.stanford.edu/~mukul/tutorials/optical.pdf.

[27] M. Agrawal, "Magnetic Properties of Materials, Dilute Magnetic Semiconductors, Magnetic Resonances (NMR and ESR) and Spintronics", in Fundamental Physics in Nano-Structured Materials and Devices (Stanford University, 2003). URL http://www.stanford.edu/~mukul/tutorials/magnetic.pdf.

[28] S. Blundell, Magnetism in Condensed Matter (2001).

[29] M. Agrawal, "Bruker ESR System", in Fundamental Physics in Nano-Structured Materials and Devices (Stanford University, 2005). URL http://www.stanford.edu/~mukul/tutorials/esr.pdf.

[30] C. Slichter, Principles of Magnetic Resonance, Springer Series in Solid State Sciences 1 (1978).

[31] M. Agrawal, "Device Physics", in Fundamental Physics in Nano-Structured Materials and Devices (Stanford University, 2002). URL http://www.stanford.edu/~mukul/tutorials/device_physics.pdf.

[32] M. Agrawal, "Bipolar Devices", in Fundamental Physics in Nano-Structured Materials and Devices (Stanford University, 2001). URL http://www.stanford.edu/~mukul/tutorials/bipolar.pdf.

[33] R. F. Pierret, Semiconductor Device Fundamentals (Addison Wesley, 1996).


[34] R. F. Pierret, Advanced Semiconductor Fundamentals (2nd Edition) (Modular Series on Solid State Devices, Vol. 6) (Prentice Hall, 2002).

[35] S. Sze, Physics of Semiconductor Devices (John Wiley and Sons (WIE), 1981).

[36] M. Agrawal, "Electronic Band Structures in Nano-Structured Devices and Materials", in Fundamental Physics in Nano-Structured Materials and Devices (Stanford University, 2003). URL http://www.stanford.edu/~mukul/tutorials/valanceband.pdf.

[37] N. W. Ashcroft and N. D. Mermin, Solid State Physics (Brooks Cole, 1976).

[38] S. L. Chuang, Physics of Optoelectronic Devices (Wiley Series in Pure and Applied Optics) (Wiley-Interscience, 1995).

[39] M. Agrawal, "Classical and Semiclassical Carrier Transport and Scattering Theory", in Fundamental Physics in Nano-Structured Materials and Devices (Stanford University, 2003). URL http://www.stanford.edu/~mukul/tutorials/scattering.pdf.

[40] M. Agrawal, "Mesoscopic Transport", in Fundamental Physics in Nano-Structured Materials and Devices (Stanford University, 2005). URL http://www.stanford.edu/~mukul/tutorials/mesoscopic_transport.pdf.

[41] M. Lundstrom, Fundamentals of Carrier Transport (Cambridge Univ Pr, 2000).

[42] S. Datta, Electronic Transport in Mesoscopic Systems (Cambridge Univ Pr, 1997).
