Statistical Physics
Ebook · 897 pages · 8 hours

About this ebook

Until recently, the field of statistical physics was traditionally taught as three separate subjects: thermodynamics, statistical mechanics, and kinetic theory. This text, a forerunner in its field and now a classic, was the first to recognize the outdated reasons for their separation and to combine the essentials of the three subjects into one unified presentation of thermal physics. It has been widely adopted in graduate and advanced undergraduate courses, and is recommended throughout the field as an indispensable aid to independent study and research in statistical physics.
Designed for a one-year course of instruction for nonspecialist graduate students or advanced undergraduates, the book is divided into three parts. Principles of Statistical Thermodynamics (Part I) covers the first and second laws of thermodynamics, elementary statistical methods in physics, and other topics, including an especially clear and enlightening discussion of thermodynamic potentials and their applications.
Part II, devoted to equilibrium statistics of special systems, offers excellent coverage of the imperfect gas, lattice dynamics, the statistics of semiconductors, the two-dimensional Ising model, and a particularly lucid chapter on dilute solutions. Moreover, the treatment of topics in solid state physics is more extensive than is usually found in books on statistical mechanics.
Kinetic theory, transport coefficients, and fluctuations comprise Part III, with a fine presentation of the Kac ring model; the Boltzmann transport equation; kinetics of charge carriers in solids, liquids, and gases; fluctuations and Brownian motion; and more.
A liberal quantity of problems has been added to each chapter, including a special section of "recommended problems," whose solutions will ensure an adequate understanding of the text. Solutions of all problems will be found at the back of the book along with a list of supplementary literature.
Language: English
Release date: Aug 9, 2012
ISBN: 9780486156767

    Book preview

    Statistical Physics - Gregory H. Wannier


    PART I

    Principles of statistical thermodynamics

    1

    The first law of thermodynamics*

    If the various parts of physics were to be placed in logical order, with each part dependent on those preceding, but not those following, then mechanics should probably be placed first. This would be proper whether the mechanics considered were to be of the classical or the quantum type. The French educational system has emphasized this position of mechanics in the past by treating it as part of mathematics. Rational mechanics it is then called; it is developed deductively from a limited set of axioms just as any other branch of mathematics.

    There was a time when scientists and philosophers expected that all of physics, and even all of science, would finally be given axiomatic structure. This expectation has not been fulfilled. Each branch of science uses its own mixture of inductive and deductive reasoning, and the accumulation of knowledge in our own time has become much too rapid to allow the creation of a logical, all-embracing structure. We need flexibility to understand the world around us, and we gain such flexibility by having different viewpoints stressing different aspects. This flexibility is not easily reconciled with a rigid axiomatic framework, such as may be found in a traditional course on rational mechanics.

    Statistical physics stands at the borderline of deductive science. People have tried to axiomatize parts of it and have had a fair success in doing so. However, not all conceptual difficulties have been eliminated completely. It is true in particular that some general theorems depend for their validity on complicated unphysical assumptions. In practice, the results of the theory are well verified, in spite of the weaknesses mentioned. It is therefore advisable in a first study of statistical physics to touch on foundation problems only lightly and to employ whatever reasoning seems most efficient. This procedure occasionally implies a shift of viewpoints which borders on the inconsistent. Thus, for instance, we shall sometimes consider our objects quantum systems so as to make the states countable. At other times we shall turn around and treat the energy as a continuous variable to allow differentiation with respect to some parameter. The student should learn early to take these shifts in his stride. By allowing them we gain an efficient and yet essentially correct approach to the subject. It goes without saying that the correspondence with physical reality has to be watched carefully in such a way of procedure.

    1-1. Systems and state variables

    Before we proceed with any theoretical developments it is appropriate to ask ourselves what the objects of study are in statistical physics. Statistical physics represents an effort to get away from the abstractions of mechanics and to approach more closely the realities of the world around us. Our objects are therefore to be inanimate but concrete things which we shall call systems. An iron bar, a block of ice, or a vessel filled with a gas mixture may serve as typical examples of systems. They are complicated and have many degrees of freedom. A complete description of such a system in the sense of rational mechanics is out of the question. We shall try to derive certain properties of these systems even though we cannot know their condition completely. This is where statistics comes into the picture. We hope to get results by averaging over parameters whose exact values are not known. The behavior in the large of our systems is thus seen as an average of some sort. The manner in which the average is taken is the central problem of statistical mechanics. We are not ready to consider it at this time. Instead, we shall try to indicate first the nature of the description attempted for a system.

    Let us take as an example the iron bar mentioned above which is typical of the general case. A certain number of parameters, finite or infinite, are needed to distinguish it from another, slightly different bar. We might want to know the dimensions, chemical composition, arrangement of crystallites, etc.; many of these parameters can vary from point to point—we may have a distribution of stresses and magnetizations, foreign inclusions, etc. However, if suitable care is taken, the number of these variables can be cut down considerably. We may study a single crystal under uniform stress, bring it into a uniform magnetic field, and we can see to it that its chemical composition is uniform. For some purposes the shape of a body is of no importance; for a magnet this is not necessarily the case, but if we assume it to be long in the direction of magnetization, other details will not matter.

    For such a magnet, we can say in a limited sense that its properties are determined by the pressure and the magnetic field in which it is found, and the mass numbers determining the composition: p, H, m1, m2, …. At least these are the electromechanical parameters which we can pick at will to suit our fancy.

    One variable has not been listed so far, namely temperature. Temperature is a concept which has no direct meaning in mechanics or electromagnetism. It is usually determined by making an auxiliary mechanical measurement such as the volume of a quantity of mercury under its vapor pressure. Thus temperature lacks direct status. Yet there is little exaggeration in saying that statistical physics is written around the temperature concept. Thus, at the moment, we might think of temperature as a concept, not fully legitimate, which derives from our feeling of hot and cold and whose gradient indicates the direction of heat flow in the case of thermal contact. It will be the business of statistical physics to elucidate the obscurities of this concept.

    Among the parameters describing our iron bar, those which can be varied easily by experimental manipulation are naturally preferred for examination. Thus it is true that the electrical conductivity of the bar is a function of chemical composition as well as of temperature. However, to vary the composition we shall have to make new samples each time while dependence on temperature may be checked by simpler means. Thus temperature t, pressure p, and magnetic field H have a preferred position as state variables for the iron bar even though we must agree that other variables have been ignored thereby. We assume that the conductivity σ of the bar stands in a functional relation to these state variables, so we can write

        σ = σ(t, p, H)    (1.01)

    Relations of the type (1.01) are rather casually written down. The implication is often made that no assumption is involved in writing them. If the state variables were giving a complete description this would be so. In the present instance this cannot possibly be the case because in a complete description the variable t is actually superfluous. Thus, from the point of view of the system the variables in (1.01) are selected even though we may not be able to control others. In physics this sort of situation can give rise to the phenomenon of hysteresis: because we have ignored important variables we find that the state of the system depends on past history, in addition to the chosen state variables. The example of the iron bar is very much to the point here. Iron is known for its hysteresis effects when subjected to a magnetic field. In Fig. 1.1 is drawn a magnetization versus

    Fig 1.1. Breakdown of the equation of state concept. Magnetization versus magnetic field in iron.

    magnetic field curve for iron in which hysteresis is prominent. The whole idea of a functional relation between the condition of the system and selected variables crumbles here. Hysteresis effects are much more common in nature than is often supposed. Other examples of such effects are: the inhibition of the growth of the stable crystalline phase in a rigid matrix; the inhibition of a chemical reaction because of a rate bottleneck; the finite difference between the force necessary to move an object in the direction +x, as compared to moving it in the direction −x. In practice, such effects must be carefully circumvented to make statistical considerations applicable.

    1-2. The equation of state

    We may assume for the following discussion that we deal with a system for which the hysteresis difficulty can be avoided. We deal then with a substance whose behavior is truly a function of the state variables. We can then call on the mathematical concept of a function of several variables to guide our thinking. We might wish to investigate whether the function is or is not continuous or differentiable. If it is, the derivatives can also be investigated in this manner. Another aspect which must be thought about is what properties of the material should be investigated in preference to others. The conductivity chosen above is scarcely a good starting point for understanding iron because Ohmic conduction is a frictional flow phenomenon of great complexity. We might therefore try to find a simpler property to start a description. A more central concept than electrical conductivity is the density ρ of a material. We might therefore wish to consider the function

        ρ = ρ(t, p, H)    (1.02)

    for our iron bar. Within this relation, the dependence of ρ on H, called the volume magnetostriction, is a small and rather specialized effect. We are moving toward more basic things if we consider only

        ρ = ρ(t, p)    (1.03)

    Relation (1.03) is considered characteristic of a chemical substance. It is called its equation of state. For an actual material the equation of state represents an abstraction, as compared to its real behavior. For solid bodies the abstraction is quite drastic because shape and all that goes with it is ignored. For fluids, on the other hand, whether liquid or gaseous, the description (1.03) is reasonable. We may conceive of a fluid in which (1.03) is a rigorously correct relation, to the exclusion of all other variables. We call such a fluid a perfect fluid. The perfect fluid is a useful concept for theoretical reasoning.

    One may wonder whether there are any instances in which the equation of state is expressible in analytic form. Such instances are of course exceptional. The one important case of an analytic equation of state is the relation which is asymptotically obeyed by very dilute gases. In the form (1.03) it reads

        ρ = M p / (R T)    (1.04)

    M in this formula is the chemical molecular weight; R is the universal gas constant:

        R = 8.314 joules/(mole · deg)    (1.05)

    and T is the so-called gas temperature. It is distinguished from the standard centigrade temperature by a shift of the origin by 273.2°.

    It is worth while to reduce (1.04) to parameters which are significant on a molecular level. For this purpose we write

        ρ = N m / V    (1.06a)

    Here N is the number of molecules present, m their mass, and V the volume in which the gas is contained. Similarly we write

        M = N_A m    (1.06b)
        R = N_A k    (1.06c)

    N_A is Avogadro's number, and k is an atomic constant which converts the gas temperature into energy units. It is called Boltzmann's constant. Its numerical value is

        k = 1.381 × 10⁻²³ joule/deg    (1.07)

    In all formulas of theoretical physics T appears only in combination with this constant. Substitution of expressions (1.06) into (1.04) yields

        p = (N/V) k T    (1.08)

    This is an equation involving only pressure, temperature, and number density, that is, no individual property of the gas. It is fairly obvious that a universal equation of state such as (1.08) must have behind it a universal theory. This theory will be discussed at later occasions.
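    The universality of (1.08) invites a quick numerical check. The short sketch below (plain constants, nothing from the text beyond the equation itself) gives the number density of any dilute gas at 0° C and atmospheric pressure:

        # Number density of a dilute gas from p = (N/V) k T, Eq. (1.08).
        k = 1.381e-23          # Boltzmann's constant, joule/deg
        T = 273.2              # gas temperature at 0 deg C
        p = 1.013e5            # atmospheric pressure, newton/m^2

        n = p / (k * T)        # number density N/V
        print(f"N/V = {n:.3e} molecules per m^3")   # ~2.69e25 for every dilute gas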

    A third form of (1.04) or (1.08), which is often used, refers to one mole of gas. For one mole N equals Avogadro's number N_A, and V equals the molar volume V_m. With (1.06c), (1.08) becomes then

        p V_m = R T    (1.09)

    The molar volume of a dilute gas is thus independent of the nature of the gas. At 0° C and atmospheric pressure it equals

        V_m = 22.4 liters    (1.10)

    Real gases are, however, not close to the limit ρ → 0 at standard pressure and temperature. The law (1.09) is a limiting law valid at high dilution, and the foregoing conditions do not satisfy this criterion. It is very closely correct to say that at 0° C and a pressure of 10⁻⁶ atm one mole of most gases occupies 22.4 × 10⁶ liters. We idealize this situation by defining as a perfect gas or ideal gas a gas obeying relations (1.04)–(1.10) exactly. Real gases are then referred to as imperfect.
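    The figures just quoted are easily reproduced from (1.09); a sketch, with the understanding that the pressure of 10⁻⁶ atm simply scales the molar volume by a factor 10⁶:

        # Molar volume of a dilute gas from p V_m = R T, Eq. (1.09).
        R = 8.314                 # joules/(mole deg)
        T = 273.2                 # 0 deg C on the gas scale
        p = 1.013e5               # 1 atmosphere in newton/m^2

        V_m = R * T / p           # molar volume in m^3
        print(f"V_m = {V_m * 1e3:.1f} liters at 1 atm")        # ~22.4
        print(f"V_m = {V_m * 1e9:.3g} liters at 1e-6 atm")     # ~2.24e7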

    The fact that gases come so close to obeying an analytic equation of state has naturally tempted many to find a modified equation of state for actual gases which incorporates their departure from perfection. The best known among them is the equation of van der Waals, which reads

        (p + a/V²)(V − b) = R T    (1.11)

    The constant b expresses the idea that the volume of a mole of gas cannot be reduced below the limiting value b at even the highest pressures. The term a/V² expresses the idea of internal cohesion at moderate density; the pressure needed to hold a gas together at given temperature and density is reduced by this term. In addition the equation indicates the presence of complicated critical phenomena for fluids which will be discussed in the problems and in a later chapter.

    It should be no surprise to the reader that even a very ingenious equation of state, such as (1.11), cannot render the behavior of real gases exactly. A more systematic procedure is to expand p V/(R T) in inverse powers of the volume, with successive terms contributing less and less as the density becomes smaller. This leads to a parametric equation of state of the form

        p V/(R T) = 1 + B(T)/V + C(T)/V² + D(T)/V³ + ⋯    (1.12)

    B(T), C(T), D(T), … are referred to as the second, third, fourth, etc., virial coefficients. The expansion ignores the possibility of genuine singularities in the equation of state; it is, however, excellent for the handling of data for moderately dense gases.
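    The connection between (1.11) and (1.12) can be made explicit: expanding the van der Waals equation in inverse powers of V gives p V/RT = 1 + (b − a/RT)/V + b²/V² + ⋯, so its second virial coefficient is B(T) = b − a/RT. The sketch below evaluates both for carbon dioxide, with van der Waals constants quoted from general handbook values (a ≈ 3.6 atm·liter²/mole², b ≈ 0.043 liter/mole), not from this text:

        # Van der Waals equation (1.11) versus the ideal law, and the
        # second virial coefficient B(T) = b - a/(R T) implied by (1.12).
        R = 0.08206               # gas constant, liter atm / (mole deg)
        a, b = 3.6, 0.043         # approximate constants for CO2

        def p_ideal(V, T):        # p V = R T for one mole
            return R * T / V

        def p_vdw(V, T):          # (p + a/V^2)(V - b) = R T, solved for p
            return R * T / (V - b) - a / V**2

        T, V = 273.2, 1.0         # one mole compressed to 1 liter
        print(f"ideal {p_ideal(V, T):.1f} atm, van der Waals {p_vdw(V, T):.1f} atm")
        print(f"B(T) = {b - a / (R * T):.4f} liter/mole")   # negative at 0 deg C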

    In most cases the equation of state is only known numerically in a limited part of its domain of definition. These empirical data are usually found to be differentiable, apart from discontinuities associated with phase changes. These discontinuities will be discussed in later chapters of this book. If we have the normal case, namely, that a relation between state variables is differentiable in a certain range, but if the analytic structure is not available, then it is still possible to use the differential form of the relation. Let us take as an example the equation of state (1.03) for a perfect fluid. It is customary to write its differential form as follows

        dV = α V dt − (V/B) dp    (1.13)

    or equivalently

        dρ/ρ = −α dt + dp/B    (1.14)

    In this equation, the coefficients α and B are themselves functions of t and p. B is always, and α usually, positive. α is called the coefficient of volume expansion and B the isothermal bulk modulus. These empirical quantities are thus identified as partial derivatives by (1.13).
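    The identification of α and B as partial derivatives can be checked by finite differences on any equation of state. For a perfect gas obeying (1.09) one expects α = 1/T and B = p exactly; the following sketch (an illustration, not part of the text) confirms this:

        # Check alpha = (1/V)(dV/dt)_p and B = -V (dp/dV)_t, the
        # partial-derivative identifications contained in Eq. (1.13),
        # for a perfect gas, where alpha = 1/T and B = p exactly.
        R = 8.314

        def V(T, p):              # molar volume from p V = R T
            return R * T / p

        T, p = 300.0, 1.0e5       # a convenient state
        hT, hp = 1e-3, 1.0        # small steps in temperature and pressure

        alpha = (V(T + hT, p) - V(T - hT, p)) / (2 * hT * V(T, p))
        B = -V(T, p) * (2 * hp) / (V(T, p + hp) - V(T, p - hp))

        print(f"alpha = {alpha:.6f} per deg (1/T = {1 / T:.6f})")
        print(f"B = {B:.0f} newton/m^2 (p = {p:.0f})")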

    The choice of variables made in (1.03) and (1.13) is to some extent arbitrary. Their selection is natural for liquids and gases where the triad chosen forms a set which is relatively immune to other influences. In solids, one certainly can think of other stresses beside p. Correspondingly, one has to consider changes in shape as well as changes in density. Such modified relationships can also be brought into the differential form discussed above and will yield thereby definitions for other important empirical constants. As an example, let us write down the differential form of the equation of state for a wire pulled with a variable tensile force f. We get

        dl = β l dt + (l / (q Y_t)) df

    Here l is the length of the wire and q its cross section; β is the coefficient of linear expansion of the material and Y_t its isothermal Young's modulus.

    1-3. Large and small systems; statistics of Gibbs versus Boltzmann

    In the preceding two sections we adopted a viewpoint which places a laboratory-sized sample into the focus of interest. We have called it a system and tried to approach a description of its properties directly. This is the point of view which was introduced by Gibbs into statistical mechanics. We now must come back to this definition and present a second viewpoint. A large fraction of the systems in the sense used above, when studied in detail, can be broken down with more or less accuracy into independent subsystems down to the molecular level. The earliest cases considered by Maxwell and Boltzmann were in fact of this type. It is then possible to think of the individual molecule as the system; the sample under consideration becomes then an assembly of systems. It so happens that many formulas apply to either kind of system with only minor changes. However, there is a profound shift in the significance of the theory which must be explained for future reference. We shall take as a typical example the hydrogen molecule H2, which is the molecular subsystem of hydrogen gas. We shall assume the gas to contain only the hydrogen isotope of mass 1.

    The molecule consists of two protons and two electrons, each of which carries a spin ½. These spins are limited degrees of freedom capable of only two values. Apart from the spins the system is described by the variables appropriate for four point masses. This means twelve degrees of freedom, or twenty-four variables linked in pairs by the uncertainty principle. The state variables in this case are simply a full set of mechanical coordinates. To our knowledge, nothing more can be said about a hydrogen molecule. In particular, the temperature variable has no place in the basic description. Indeed if a hydrogen molecule is looked at individually there is very little need for statistics. Since a complete description is possible why deal with averages? In fact, averages don't tell us much in such a case, for departures from predicted average values are large for systems of molecular size, as we shall see later.

    Statistics comes of course into the picture here when we are dealing with hydrogen gas rather than an individual hydrogen molecule. We shall see later that the reliability of a statistical prediction improves as the number of members for which it is made becomes large. In an ordinary container filled with hydrogen gas the number of molecules is very large, of the order 10²⁰. Thus, statistical methods are not only useful, they are necessary. In fact, the physical reason for employing statistics is the same whether the formal system reasoned on is laboratory-sized or molecular. It is always the large number of molecules in a sample that makes statistical methods appropriate. The question whether the system reasoned upon does or does not consist of a large number of atoms is a purely formal one. In some cases we follow Maxwell and discuss a simple system, and find then that our sample consists of a large number of such systems. In other cases we follow Gibbs and reason on a system containing a large number of parts; statistics then applies actually to an individual sample. The Gibbsian device of taking a large number of large systems, called an ensemble, is a theoretical device without physical content and should be understood as such.

    There is a small number of phenomena in which the objects of study are neither laboratory-sized nor molecular. The classical case is Brownian motion. Here the fluctuation phenomena associated with molecules become observable on larger bodies. Another example is that of macromolecules as studied in biochemistry or polymer chemistry. This case is complementary to the first one. Macromolecules have so many degrees of freedom that some of their state variables have to be ignored in any observation. Concepts associated with large bodies, such as temperature, acquire thereby meaning for an individual molecule. We shall not elaborate on these exceptional though important situations in the following discussion.

    1-4. The First Law; heat, work, and energy

    The first law of thermodynamics is distinguished from energy conservation laws in general in that the nature of heat is involved. This recognition of the nature of heat is preceded, both logically and historically, by the more elementary recognition that heat is a quantity. We shall first say a few words about this elementary aspect of the question.

    The discovery that heat is a quantity belongs to the achievements of early physics; it is associated with the name of Joseph Black. If a heated copper block at 100° C is dipped into a beaker containing an equal weight of water at 20° C, the result is an adjustment of the whole not to a temperature of 60° C, but to a temperature much closer to 20°. One concludes from this that a thing called heat must be effective in this adjustment, and that the heat necessary to produce a temperature change of 1° in a body is not the same for different objects. In other words different bodies have different heat capacities C. Heat capacity is easily seen to be proportional to mass so that we can define the heat capacity per unit mass or specific heat c as typical of a substance. We thus find that the heat Q necessary to raise the temperature of a substance from t₁ to t₂ equals

        Q = m ∫_{t₁}^{t₂} c dt    (1.15)

    The above example then teaches us that the specific heat of copper is substantially smaller than that of water. A certain number of cross checks are of course necessary to verify that Q is an additive quantity which is preserved in simple heat exchanges. This is easily done by comparative experiments involving third or fourth bodies.
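    Black's observation is easily made quantitative. Taking rough handbook specific heats (about 0.39 joule/gram·deg for copper and 4.19 for water, values supplied here for illustration rather than taken from the text), the balance of heat lost and gained gives a final temperature much nearer 20° than 60°:

        # Final temperature when equal masses of copper at 100 deg C and
        # water at 20 deg C are brought into thermal contact.
        c_cu, c_w = 0.39, 4.19      # specific heats, joule/(gram deg), approximate
        t_cu, t_w = 100.0, 20.0     # initial temperatures

        # heat lost by copper = heat gained by water, solved for t
        t = (c_cu * t_cu + c_w * t_w) / (c_cu + c_w)
        print(f"final temperature: {t:.1f} deg C")    # ~26.8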

    It was natural for early physicists to define a heat content for materials, that is, to assume that the quantity Q is somehow stored in a body after transfer. Such a conception allows (1.15) to be written in differential form, namely,

        dQ = m c(t) dt    (1.16)

    Equation (1.15) is thereby transformed into

        Q = Q(t₂) − Q(t₁)    (1.17)

    where Q(t) is now the heat content.

    The concept of heat content can be extended to include the processes of melting and evaporation by the writing of a formula of the type

        Q_liquid(t) = Q_solid(t) + m l    (1.18)

    where l is called the latent heat of fusion. The fusion process occurs of course at a fixed temperature for a pure substance. Equation (1.18) expresses the notion that at equal temperature the liquid phase of a substance has a higher heat content than the solid one. The quantitative concept of heat is again checked in phase change experiments of all sorts.

    The more refined insight that heat is a quantity, but not a conserved quantity, is due to the fundamental experiments of Rumford and Joule. The former showed that heat can be produced in unlimited amounts by the performance of work, and the latter that there is an equivalence relation in such experiments. We may express this equivalence most conveniently in the form

        1 calorie = 4.184 joules    (1.19)

    or in words

    Energy is conserved. Heat is a particular form of energy.

    The law of conservation of energy, generalized by the First Law, has been verified in innumerable experiments involving friction and cycles of various kinds. Cycles are experiments in which a system is finally returned to its original condition. They are particularly useful in clarifying the relationship of the First Law to the state variables defined in the preceding section. It can easily be shown that the total heat evolved in one cycle does not in general vanish. Only the sum of the heat and work does. There is therefore no such thing as a heat content depending on the state variables, and (1.17) is an incorrect equation. Insofar as (1.16) is the differential form of (1.17), it must also be considered incorrect. However, there is the phenomenon of heat passing from one body to another, and therefore (1.16) can be given a limited meaning. That kind of limitation is expressed in this book by the placing of a horizontal bar across the d of dQ as follows:

        đQ

    A similar bar will be placed on the d of đW, the work done on a system. This quantity also need not vanish when the system is taken through a cycle. However, the First Law asserts that the total amount of energy furnished does vanish. This means that we have

        ∮ (đQ + đW) = 0    (1.20)

    for any cyclic process. Let us call the quantity defined thereby the energy content E. We then have

        dE = đQ + đW    (1.21)

    Here the d before E has no bar because the energy content is a function of the state variables of the system.

    It is useful at this point to pause and reflect on what đQ and đW mean in Equation (1.21). The fact that heat transfer is a form of work can be verified directly in certain limiting cases such as Brownian motion. The heat transfer results here at least in part from collisions with neighboring molecules and is visible as random work in the erratic zigzag motion of the particles.

    The implied notion of ignored coordinates has also an effect on the left-hand side of (1.21). As a rule, we are only incidentally interested in those parts of the energy E which are easily recognized as mechanical. In fact, typical experiments relevant in statistical physics go to some trouble to avoid the appearance of such terms. We may take the case of the Joule paddle wheel experiment as an example. The arrangement is shown in Fig. 1.2. While two weights descend slowly under gravity, they drive a paddle wheel in a container with water via connecting strings passing over two pulleys. The experiment is usually interpreted by excluding the weights from the system. Gravity simply provides a conveniently precise method of measuring the work done by the paddle wheel on the water. The container is provided with baffles for the specific purpose of preventing the water

    Fig 1.2. The Joule paddle wheel experiment as a typical experiment of statistical physics.

    as a whole from acquiring kinetic energy, that is, an easily recognizable form of mechanical energy. The result, a rise in the temperature of the water, could have been accomplished more easily by heating it. There must therefore be a hidden component of energy in the water which is closely linked to the temperature and which remains after all recognizable mechanical forms of energy have been subtracted out. This component is called the internal energy U. Thus for the Joule paddle wheel experiment the First Law takes the form

        dU = đQ + đW    (1.22)

    and the experiment furnishes the mechanical equivalent of heat because a rise in U which is normally produced by heating is produced here by precisely measurable mechanical work. It must be remembered in the following that even though (1.22) is the equation generally employed, (1.21) is the one generally valid. To make (1.22) valid special precautions must be taken to avoid the production of kinetic, gravitational, and other forms of mechanical energy.

    The splitting off of the mechanical forms of energy to isolate an internal energy closely akin to the old concept of heat content is not always a feasible procedure. We may take as an example the change in size and shape of a body through the application of forces. In simple cases this can be handled through the theory of elasticity. A new form of potential energy, elastic energy, is defined; thereupon the deformation becomes a problem in conservative mechanics. This viewpoint has its limitations, as is well known: a deformation may be plastic rather than elastic; elasticity theory then breaks down. For us here it is more important to realize that even in the elastic range the separation of elasticity from thermal phenomena is not feasible. To take an extreme example, we may stretch a wire at a given temperature without doing any work. We simply heat it first so that thermal expansion brings it to the desired length. Thereupon we cool it in a clamped condition. The elastic energy comes here entirely from heat furnished. Even in more conventional experiments on elastic deformation the same idea applies. The elastic energy depends on the elastic constants, such as the bulk or Young’s modulus, as parameters; these parameters are temperature dependent. Thus if compression at one temperature is followed by expansion at another the original elastic energy stored may not be returned entirely. A splitting off of this form of energy from U is therefore impossible. Once a splitting is found impossible we are forced to go to the opposite extreme and treat elastic energy as an aspect of U. U is dependent on shape and volume as well as on temperature, and this secondary dependence is tested by deformation of the body.

    The internal energy U completes the triad of concepts built around the idea of hot and cold: temperature, heat, internal energy. It is also the bridge which links the field to mechanics. What has been said about heat and work applies even more to the internal energy. It is not different in principle from other forms of energy. It is simply the energy of the degrees of freedom which are ignored in our description of a large system. This energy is present in a random form and necessitates the use of statistical methods for its description. The idea that internal energy is reducible to other forms of energy was first enunciated by Mayer and Helmholtz as a hypothesis and is known as the mechanical theory of heat. According to this theory an object which has a temperature does not differ in principle from the objects of rational mechanics, and its internal energy is basically potential or kinetic energy of its mechanical degrees of freedom. The difference from mechanics arises in the way these degrees become manifest to the observer. The observer has no direct information concerning them; he is therefore forced to construct a theory which treats them statistically. The energy of these degrees of freedom is the internal energy. Temperature obviously has something to do with the abundance of this type of energy since it controls the sense of the heat exchange between two bodies in thermal contact. We cannot go beyond this point at the present time. After having implied that heat flow is an uncontrolled form of work, we turn around and say that heat flow can be controlled after all with the help of the temperature variable. We must continue for some time yet to employ temperature in this semiempirical manner.

    1-5. Precise formulation of the First Law for quasistatic change

    The First Law, being a form of the law of conservation of energy, has a very wide range of application. Its greatest usefulness arises, however, for a restricted class of states which we call equilibrium states.

    Ideally equilibrium is the state reached by a system if all forces have been permitted to act until mechanical equilibrium is reached, and all opportunities for heat flow have been exhausted. The system will then no longer change in time. In practice, we must use the concept in a very much less circumspect way. After a system has been prepared in a certain way it will relax toward equilibrium in a manner which can often be analyzed in terms of a series of relaxation times τ1, τ2, τ3, …. Some of these times are longer, some are shorter than the time of observation. If all times are either very much longer or very much shorter than the time of observation the equilibrium concept can be used in a relative sense: equilibrium is assumed for processes whose τi is short, while processes whose τi is long are ignored and replaced by a fixed constraint. This haphazard procedure actually works in a sufficient number of cases to be worth while as the basis of a scientific discipline.

    With the help of the concept of equilibrium a second concept is introduced, that of quasistatic change. Quasistatic processes are an abstraction and are best defined as a locus of equilibrium states. The idea behind the concept is that if a given process, say the compression of a gas, is carried out at a certain rate the pressure must at every instant be slightly larger than would be required to hold the gas at its momentary volume and temperature. The pressure excess is necessary to ensure a measurable rate for the process. Similarly, a pressure less than the equilibrium pressure is necessary for expansion at a finite rate. From the point of view of precise formulation this over- or underpressure is a nuisance because the amount of it depends on frictional effects which are incidental to the main processes of compression and expansion. The concept of quasistatic change implies that this pressure excess or defect can be made small if the change is imposed slowly. Finally we can proceed mentally to a limiting situation in which the process of compression has become so slow that the pressure at any given instant is indistinguishable from the equilibrium pressure at the given temperature and volume. This pressure is given by the equation of state. Similarly, in a more complicated situation when we have a body described by a set of coordinates x1, x2, x3, … xn and the temperature, the magnitudes of the conjugate forces X1, X2, X3, … Xn occurring in a transformation are in general not clearly defined. But the equilibrium values of these forces are (usually!) well defined and the mental extrapolation of going to the quasistatic limit allows us to substitute these limiting values. The generalized force Xi thus becomes a function of the coordinates xi and the temperature. Mechanical processes become thereby conservative and reversible. It should not be forgotten, however, that in many cases quasistatic change does not bring about uniqueness of forces and reversibility. The hysteresis loop of Fig. 1.1 is an example of this; in these cases production of heat cannot even in principle be separated from mechanical action.

    For quasistatic change the work đW in (1.22) is expressible in terms of the infinitesimal displacements of the coordinates x_s. The coefficients in the expression are the conjugate forces X_s. The work term then takes the form

        đW = Σ_s X_s dx_s    (1.23)

    An instance of a coordinate would be volume; its conjugate force is the negative pressure; work done on a system through variation of this variable equals

        đW = −p dV    (1.24)

    In this connection, we can think of the equation of state as the relation which gives the force conjugate to volume as a function of the volume, with temperature thrown in as a parameter.

    It follows from these considerations that if the First Law is applied to a system constrained to move quasistatically, the expression (1.23) can be substituted into (1.22), and we obtain

        dU = đQ + Σ_s X_s dx_s    (1.25)

    as a differential form. In perfect fluids, with volume as the only coordinate, the work term is given by (1.24), and (1.25) specializes into

        dU = đQ − p dV    (1.26)
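    As a concrete instance of (1.24) and (1.26), the work done on one mole of perfect gas during a quasistatic isothermal compression follows from integrating đW = −p dV along the equation of state; the closed form is W = RT ln(V₁/V₂). A sketch comparing a direct quadrature with that formula:

        import math

        # Work done ON one mole of perfect gas compressed quasistatically
        # and isothermally from V1 to V2, with p = R T / V (see Eq. 1.24).
        R, T = 8.314, 300.0
        V1, V2 = 0.0249, 0.01245        # halve the volume, m^3

        n = 100000                      # midpoint quadrature of -p dV
        W = 0.0
        for i in range(n):
            V = V1 + (V2 - V1) * (i + 0.5) / n
            W += -(R * T / V) * (V2 - V1) / n

        print(f"numerical {W:.1f} J, closed form {R * T * math.log(V1 / V2):.1f} J")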

    It goes without saying that the correct use of (1.25) depends on a correct assignment of conjugate forces to generalized coordinates. This is the business of mechanics or electromagnetism. In a general way, an understanding of this is presupposed in statistical physics. However, the systems discussed here are so much more complicated than those usually considered in mechanics that some old formulas have to be almost rederived to be believable. We may take as an example the work of magnetization of an iron bar. It is given by the formula

        đW = H dM    (1.27)

    where H is the field externally applied to the bar and M its magnetic moment. The formula is trivial if the system to be magnetized is thought of as an aggregate of freely orientable dipoles of fixed strength. A real system such as our iron bar is not that simple. In fact, Fig. 1.1 implies that an irreversible phenomenon is associated with magnetization. A more sophisticated proof of (1.27) is therefore desirable which makes less specific assumptions about the nature of magnetism. We shall now carry through such a proof.

    Assume that the system to be magnetized has the shape of a cylinder whose magnetization is at all times parallel to its axis. The magnetizing device shall be a permanent bar magnet which is not influenced by the system. Both objects shall be assumed lying along the x axis with their magnetization parallel to that axis ( Fig. 1.3). The dimensions of the system

    Fig 1.3. Magnetization of a sample by a permanent magnet.

    shall be small compared to those of the bar magnet. The magnetic field seen by the system will then be mainly in the x direction; this field will be denoted by H(x). As the system is approaching the bar magnet, starting from x = −∞, the attractive force exerted on it equals M(dH/dx). If it is the only force acting, the system will be accelerated toward the magnet. We prefer the process to proceed quasistatically. For this purpose a balancing force F equal to −M(dH/dx) must be applied to the system. This outside force does the following negative work

        W₁ = −∫_{−∞}^{a} M (dH/dx) dx

    where a is the value of x at which we arrest the approach of the two objects. The foregoing term is obviously quite different from (1.27), but there are two processes mixed up in the result: the magnetization of the system and the approach of the two bodies. To obtain the work of magnetization alone we must add a further step. We must withdraw the system to its original position at x = −∞ and retain its magnetic moment. To do this we must assume that the final magnetization can somehow be fixed in the magnet by some kind of magnetic glue (the remanence phenomenon in iron allows in fact something like it in practice). We then can withdraw the system so glued to x = −∞ and retain its moment. Positive work is now done by the balancing force F in the amount

        W₂ = M(a) ∫_{−∞}^{a} (dH/dx) dx = M(a) H(a)

    From this we get by addition

        W₁ + W₂ = ∫_{−∞}^{a} [M(a) − M(x)] (dH/dx) dx = ∫_0^{M(a)} H dM

    which is exactly the work of magnetization as expressed in (1.27). Proofs of this kind can be constructed for the work terms in other cases.
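    The cancellation in this addition can be verified symbolically for any concrete response law. The sketch below uses sympy and assumes, purely for illustration, a linear response M = χH and a field profile H(x) = eˣ (which vanishes at x = −∞ as required); the sum W₁ + W₂ then reduces to ∫₀^{M(a)} H dM exactly:

        import sympy as sp

        # Verify W1 + W2 = Int_0^{M(a)} H dM for a linear response M = chi H.
        x, a, chi, h = sp.symbols('x a chi h', positive=True)
        H = sp.exp(x)                    # field along the axis, H(-oo) = 0
        M = chi * H                      # induced moment during the approach

        W1 = -sp.integrate(M * sp.diff(H, x), (x, -sp.oo, a))            # approach
        W2 = M.subs(x, a) * sp.integrate(sp.diff(H, x), (x, -sp.oo, a))  # withdrawal, moment glued

        target = sp.integrate(h / chi, (h, 0, M.subs(x, a)))  # Int H dM with H = M/chi
        print(sp.simplify(W1 + W2 - target))                  # prints 0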

    We see that the First Law abolishes the idea that heat is stored in a body, but retains the notion of heat transferred as a shorthand expression for random work done. Let us see what happens to the concept of heat capacity and specific heat in the course of this shift of viewpoints. The definition generally adopted retains the relationship between heat transfer and heat capacity. In other words one writes

        đQ = C dt    (1.28)

    The formula is actually not the most general one conceivable because it assumes that a transfer of heat always produces a change in temperature. Transfer of heat for the melting or evaporation of a body is thus left out for the moment. Even so, đQ is not a perfect differential, and C is therefore not uniquely determined by the state of the system. We may eliminate đQ from (1.28) and (1.22) and write

        C dt = dU − đW    (1.29)

    Heat capacity is here seen to be reduced by work performed on the system. This implies that the heat capacity is increased if a system performs work at the same time as it acquires heat. The one definition that (1.29) lends itself to is that of an anergetic heat capacity, that is, heat capacity without the simultaneous performance of work. It is given by

        C_anergetic = (∂U/∂t)_{x₁, x₂, …, xₙ}    (1.30)

    For some reason this possibility of a general definition is not much in use, and heat capacities are defined in connection with definite generalized coordinates. The most important case is that of a perfect fluid for which volume and temperature are the only state variables needed to determine pressure. One can then use the First Law in the form (1.26). Adopting V and t as state variables and writing

        dU = (∂U/∂t)_V dt + (∂U/∂V)_t dV    (1.31)

    one obtains

        đQ = (∂U/∂t)_V dt + [(∂U/∂V)_t + p] dV    (1.32)

    The heat capacity at constant volume can thus be expressed as

        C_V = (∂U/∂t)_V

    This heat capacity is a special case of the anergetic heat capacity defined earlier, namely, when the only type of work considered is a change in volume. Unless extraordinary precautions are taken an experiment measuring heat capacity will not measure this heat capacity, but rather the heat capacity at constant pressure C_p, which, from (1.31) and (1.32), equals

        C_p = C_V + [(∂U/∂V)_t + p] (∂V/∂t)_p    (1.33)

    We see from this that there are at least two different types of heat capacity, C_V and C_p, which differ according to which side conditions are imposed during the heating process. Even more definitions exist if other state variables are introduced beside the volume.

    We shall follow the precedent of Section 1-2 and specialize the results obtained for the case of perfect gases. It is a consequence of the Second Law which we cannot verify at this time that a perfect gas not only obeys its special equation of state, but also has an internal energy which is a function of temperature only. Consequently, we have the relation

        (∂U/∂V)_t = 0    (1.34)

    If we insert this and (1.09) into (1.33) we obtain

        c_p = c_v + R    (1.35)

    Here c_p and c_v are the molar heat capacities or molar specific heats at constant pressure and constant volume, respectively, and R is the universal gas constant introduced in (1.05). For a perfect gas the difference c_p − c_v arises entirely from the work of expansion. For other materials there is also a contribution to this difference from the volume dependence of the internal energy.
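    Relation (1.35) can be confronted with data directly. For air near room temperature the measured molar heat capacities are roughly c_p ≈ 29.1 and c_v ≈ 20.8 joules/(mole·deg) (handbook figures, quoted here for illustration); their difference reproduces R:

        # Check c_p - c_v = R, Eq. (1.35), against rough data for air.
        R = 8.314                   # joules/(mole deg)
        c_p, c_v = 29.1, 20.8       # approximate molar heat capacities of air

        print(f"c_p - c_v = {c_p - c_v:.1f}, R = {R:.1f} joules/(mole deg)")
        print(f"gamma = {c_p / c_v:.2f}")     # ~1.40, the ratio used in Eq. (1.39)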

    A glance at the First Law in the form (1.25) shows that beside anergetic change, that is, change without work, we can also conceive the idea of adiabatic change, which is change without flow of heat. Adiabatic conditions are approached in practice either by careful thermal insulation or by a change of conditions in time which is sufficiently rapid to suppress heat flow. For a perfect fluid which is described by temperature and volume only, the differential equation for adiabatic quasistatic change is found from (1.26) as

        dU + p dV = 0    (1.36)

    A more useful form of this relation results from (1.31), (1.32) and (1.33):

        C_V dt + (C_p − C_V) (∂t/∂V)_p dV = 0    (1.37)

    which is a differential equation giving the volume as a function of temperature for adiabatic conditions. Since C_p is generally larger than C_V, the slope of the volume versus temperature curve is the reverse of what it is at constant pressure. At constant pressure volume and temperature generally rise together, while under adiabatic conditions the temperature rises as the volume is reduced.

    For perfect gases Equation (1.37) is integrable because we have from (1.09)

        (∂T/∂V)_p = p/R = T/V

    and therefore

        C_V (dT/T) + (C_p − C_V) (dV/V) = 0

    In addition, C_p and C_V are constants. The differential equation then can be integrated in closed form and yields

        T V^{γ−1} = constant    (1.38)

    where we set

        γ = C_p / C_V    (1.39)

    Elimination of T from (1.38) and (1.09) yields then

        p V^γ = constant    (1.40)

    Since γ is a number larger than unity the pressure needed for a given amount of compression is larger under adiabatic conditions than it would be according to Boyle's law. One way to see this is to compare the bulk modulus B_T which results from Boyle's law with the adiabatic bulk modulus B_adiab. If we take differentials of (1.08) and compare with the definition (1.13) we find

        B_T = p    (1.41)

    that is, the isothermal bulk modulus of a perfect gas equals its pressure. If we apply the same method to (1.40) we obtain instead

        B_adiab = γ p    (1.42)

    which is a larger value.
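    A short numerical illustration of (1.40)–(1.42): halving the volume of a gas with γ = 1.4 doubles the pressure isothermally but raises it by a factor 2^γ ≈ 2.64 adiabatically, in keeping with B_adiab = γ B_T at every point. A sketch:

        # Isothermal (Boyle) versus adiabatic (1.40) compression of a
        # perfect gas with gamma = 1.4, from p0 = 1 atm, V0 = 22.4 liters.
        gamma = 1.4
        p0, V0 = 1.0, 22.4
        V = V0 / 2                          # halve the volume

        p_iso = p0 * V0 / V                 # p V = const
        p_adi = p0 * (V0 / V) ** gamma      # p V^gamma = const

        print(f"isothermal {p_iso:.2f} atm, adiabatic {p_adi:.2f} atm")
        print(f"B_T = {p0:.1f} atm, B_adiab = {gamma * p0:.1f} atm initially")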

    A. RECOMMENDED PROBLEMS

    1. Suppose we assume that air is a mixture of nitrogen gas (
