Molecules and Radiation: An Introduction to Modern Molecular Spectroscopy. Second Edition
About this ebook

This unified treatment introduces upper-level undergraduates and graduate students to the concepts and methods of modern molecular spectroscopy and their applications to quantum electronics, lasers, and related optical phenomena.
Starting with a review of the prerequisite quantum mechanical background, the text examines atomic spectra and diatomic molecules, including the rotation and vibration of diatomic molecules and their electronic spectra. A discussion of rudimentary group theory advances to considerations of the rotational spectra of polyatomic molecules and their vibrational and electronic spectra; molecular beams, masers, and lasers; and a variety of forms of spectroscopy, including optical resonance spectroscopy, coherent transient spectroscopy, multiple-photon spectroscopy, and spectroscopy beyond molecular constants. The text concludes with a series of useful appendixes.
Language: English
Release date: November 9, 2012
ISBN: 9780486137544


    Molecules and Radiation - Jeffrey I. Steinfeld

    Preface to the Second Edition

    There are two principal objectives in this revised edition. First, I have attempted to correct as many as possible of the errors and misstatements that appeared in the first edition. Second, I have sought to bring up to date the discussion of new topics, particularly in chapters 10-14. Tremendous advances in spectroscopy, particularly in laser spectroscopy, have taken place in the past ten years, and any treatment of modern spectroscopy must reflect at least some of these developments. At the same time, I have tried not to lose sight of the lore and tradition of classical spectroscopy, which forms the basis for many of these advances.

    Many individuals have contributed suggestions, corrections, and new material for this edition; in particular, I would like to thank Professors W. Chupka and R. Hochstrasser for their detailed critiques. Professor R. Beaudet made it possible for me to spend a semester at the University of Southern California, during which the material appearing in chapters 11 and 12 was prepared for an advanced topics course. Doctor J. Francisco and Professors W. Klemperer and K. Innes read the manuscript of the revised edition and made many valuable suggestions.

    A note on references and cross references may be helpful to the reader: A cross reference to a section within a chapter is by section number; a cross reference to a section from one chapter to another is by chapter number and section number. For example, in chapter 1 a cross reference to the first section in that chapter is section 1, whereas in chapter 7 a cross reference to this same section is chapter 1.1. Also, a list of commonly used abbreviations for the journals included in the chapter references may be found in appendix E.4.

    Preface to the First Edition

    The fact that molecules possess quantized internal energy levels and that this enables them to absorb and emit electromagnetic radiation at discrete frequencies is one of the basic principles of chemistry. As such, the concepts of molecular spectroscopy appear in the curriculum at several different points. A descriptive treatment of atomic and molecular energy levels, without recourse to solutions of the Schrödinger equation, is usually included in first-year college chemistry or in advanced high school courses. These appear again, this time with the solutions for the particle-in-a-box, the harmonic oscillator, and the one-electron atom, in the typical undergraduate physical chemistry course. In graduate courses in molecular spectroscopy, one finds that these simple solutions are only first approximations to actual molecular eigenstates and that spectroscopy, in practice, is really an exercise in finding the best basis set for use in a perturbation expansion.

    It is from a course of this last type that this book has developed. A number of satisfactory texts exist at the elementary and advanced undergraduate level; this is not equally so when one looks for a single book by means of which graduate students can gain entry to current research literature as well as to the detailed manuals of Herzberg, Townes and Schawlow, and Condon and Shortley,¹ works by which practicing spectroscopists regulate their professional lives. There are several reasons for this. One is that Herzberg’s volumes themselves are basically phenomenological, and discussions of the problem of finding the representation in which the Hamiltonian operator is most nearly diagonal must be culled from the literature. Another is that much of the research effort in spectroscopy today is directed not at the traditional field of determining structural parameters, but at using quantitative spectroscopy, laser devices, and coherent optical phenomena to probe molecular dynamics. Books and monographs exist on many of these topics, but at the present time there seems to be no unified treatment suitable for use as a textbook.

    This, then, is the origin of the present book. It is the outgrowth of several iterations of a one-semester graduate course in molecular spectroscopy at the Massachusetts Institute of Technology, with supplementary material added. The emphasis of the course was on introducing students to the concepts and the methods of modern molecular spectroscopy so that the language would be familiar when the course proceeded to discuss quantum electronics, lasers, and related coherent and nonlinear optical phenomena. The course, and thus this book, also reflect the author’s prejudice that the most interesting areas of current spectroscopic research deal with the dynamics of the interaction of radiation and molecular matter, rather than with the more traditional subject matter of the tabulation and assignment of spectral lines and their interpretation in terms of molecular structures.

    There is surely more material in the present text than can be covered in a single one-semester course, but a good survey can be obtained by selecting portions of the text to cover in detail and using the rest for independent study. Clearly, the book has been designed for use in a graduate-level course with chemical physics orientation. However, it should also be appropriate for a small number of senior undergraduates who have already had the usual treatment of quantum chemistry found in the undergraduate physical chemistry course; they can use it either as a supplementary text or for self-study. No quantum mechanical background is assumed other than that found in such an undergraduate course; the mathematical background required is at a corresponding level. Some background in the elements of electromagnetic theory (Maxwell’s equations, etc.) will be helpful. When advanced concepts, such as a relativistically invariant Hamiltonian, group-theoretical manipulations, or the density matrix, are required, they are introduced in a heuristic way. This means that they are treated without complete mathematical rigor in most cases. Although the orientation of the book is to provide the foundation necessary for modern spectroscopic research in considerable detail, it does not get involved in the fundamental issues of radiation theory and quantum mechanics that lie behind it all. At the same time, we have attempted to avoid focusing solely on current research topics, appropriate for a review monograph that is expected to be obsolete in a few years, but certainly not for a textbook.

    There are many people who have helped bring this book to fruition, but a few should be singled out in particular. My thanks go to the students enrolled in these courses for their forbearance during the process of finding out the best way of expressing certain concepts, and to Professors R. C. Lord and S. G. Kukolich, who shared in the teaching of the course. I have also made free use of the problems originally prepared by Professor Lord and Professor Ian Mills. There are several places in the text at which the best way of presenting a particular topic seemed to be to make use of notes that had been prepared by other lecturers for other courses; these are indicated at appropriate points in the text, and my thanks go to the various individuals for permission to use this material. Special thanks are due Professor Kent R. Wilson of the University of California at San Diego, who made possible a pleasant stay at his campus, during which a large portion of the first draft of the book was written, and also to Professor C. Bradley Moore of the University of California at Berkeley for similar arrangements during which the final manuscript was prepared. Thanks are also due Ms. Sandra V. Sutton for her expert typing of the manuscript, to Professor Victor Laurie for a valuable first reading of the manuscript, and to B. Garetz and B. D. Green for additional proofreading and comments.

    1974

    A portion of the visible absorption spectrum of the I2 molecule, taken on an echelle spectrograph at MIT. Every line in the discrete part of the spectrum can be assigned and accounted for by the methods to be discussed in this book. [From J. D. Campbell, S.B. Thesis, MIT, June 1967. Reproduced with permission.]

    1

    Review of the Quantum Mechanical Background

    1 State Vector Notation

    If one were to ask for a definition of the basic operation in spectroscopy, the answer would probably be in terms of carrying out a measurement on a molecule, using electromagnetic radiation as a measuring tool. The essence of the quantum mechanical behavior of systems as small as atoms and molecules is that carrying out a measurement on such a system forces it to take on a sharp value of the one or more observables being measured. In spectroscopic measurements, this observable is almost always the total energy of the system. The measured quantity is the set of differences between the possible energy levels, which is related to an observed set of resonances in the electromagnetic radiation spectrum by the Bohr-Einstein law,

    ΔE = En − Em = hν = hc ν̄,

    where ν is the frequency of the radiation, in hertz (the term cycles per second, or sec⁻¹, is no longer officially used), and ν̄ is the wave number of the radiation in reciprocal centimeters, or cm⁻¹.

    Energy and frequency are related by Planck’s constant h = 6.626174 × 10⁻²⁷ erg sec, while frequency and wavelength are related by the speed of light in vacuum, c = 2.99792458 × 10¹⁰ cm/sec.²
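    As a quick numerical illustration of these relations (the 500 cm⁻¹ transition below is an invented example, not one from the text):

```python
# Bohr-Einstein relations in CGS units, matching the text's constants.
h = 6.626e-27        # Planck's constant, erg s
c = 2.99792458e10    # speed of light, cm/s

nubar = 500.0                 # wave number of a hypothetical transition, cm^-1
nu = c * nubar                # corresponding frequency, Hz
delta_E = h * nu              # energy level spacing, erg

print(f"nu = {nu:.3e} Hz")            # ~1.5e13 Hz, i.e. the infrared
print(f"Delta E = {delta_E:.3e} erg")
```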

    Since the concept of measurement is a key aspect of the whole, it seems appropriate to introduce the use of a quantum mechanical notation known as measurement algebra at this point. The primary advantage of this notation, as we shall see, is that it provides a very compact and handy way of expressing the matrix elements, and relations between them, in terms of which spectroscopic theory is formulated. It is well to remember that merely using a new notation, while it may be convenient, does not introduce any more content than was expressed by the more familiar language of wave functions and eigenvalues. Several classic quantum mechanics textbooks employ this notation, and the reader is encouraged to look at these as an aid in becoming familiar with this algebra. Especially recommended are the texts by Messiah, Gottfried, and Feynman, Leighton, and Sands (references 1-3).

    Figure 1.1

    Schematic representation of a Stern-Gerlach experiment. A beam in a j = 1 state enters from the left through a magnetic field inhomogeneous in the z direction. It is split into three jz components; the +1 component is selected and sent through a magnetic field inhomogeneous in the x direction, whereupon it splits up into three jx components. If one of these components is then sent through another z magnet, it will again split up into three jz components, despite the earlier z measurement.

    The basic object of this algebra is the state of a system, which is given the symbol |n〉. This state may correspond to an energy level En, or to an angular momentum state (the quantum number j being conventionally used for the angular momentum), or perhaps to a composite state in which several observables have sharp values. One way of visualizing this state is as the actual physical particle in that state; that is particularly appropriate, for example, in the Stern-Gerlach experiment.

    The Stern-Gerlach apparatus, shown in figure 1.1, is a molecular beam in which particles travel from a source, through various inhomogeneous magnetic fields, to a detector that records the presence or absence of molecules at a particular position in space. Let us suppose we have chosen a beam source such that the particles coming out all have exactly one unit of angular momentum (a ¹P or ³S atom, for example; see chapter 2.6). If this beam passes through a magnet shaped to have a field gradient in a particular direction—say the z direction—then we know that the beam will split up into three subbeams, each moving in a slightly different direction. Each of these beams is composed of a pure state, |jjz〉, having a definite value of the total angular momentum j and the projection of the angular momentum in the z direction, jz. Since all the particles entering the magnetic field must leave it in one of the three beams, we have an example of a completeness relation,

    Σjz |1 jz〉〈1 jz| = 1 (1.1)

    In the second part of figure 1.1, we have taken one of the beams—the |1 + 1z〉 state—and passed it through a magnet having a field gradient in the orthogonal x direction. The result of doing this, as we know, is that we again obtain three beams, this time spread out in a plane perpendicular to that in which the beams spread out after passing through the z magnet. This entire experimental situation can be described simply by letting the x magnet be represented by an operator Jx, and saying that a state with a quantized component of angular momentum in the z direction is not an eigenstate of the operator Jx, since an eigenstate of a particular operator is not split into further substates by application of that operator.

    Now suppose we take one of the states that have passed through the x magnet, after having passed through the z magnet—say, the |1 – 1x〉 state obtained by operating Jx on |1 + 1z〉. The result of a second Jz operation is again to produce three beams. In other words, the application of the Jx operator has destroyed the previous quantization of Jz; we say that Jx and Jz are not simultaneously measurable observables.

    Since each of the subbeams has one-third the intensity of the parent beam, the effect of operating Jx and Jz on |11z〉 is to produce three beams dispersed in the z direction, each with one-ninth the initial intensity. Suppose the order had been reversed, and Jz had been applied first, followed by Jx. Since |11z〉 is an eigenstate of Jz, a single beam would have emerged from the z magnet this time; application of Jx to this beam would have produced three beams dispersed in the x direction, each with one-third the initial intensity. Clearly, the results of the two possible orders of applying these operators are inequivalent, so we can write

    JxJz|11z〉 ≠ JzJx|11z〉

    or, since this result is true for any initial choice of state,

    [Jx, Jz] ≡ JxJz − JzJx ≠ 0 (1.2)

    where the expression [Jx, Jz] is the commutator of Jx and Jz. Saying that two operators do not commute is equivalent to saying that the physical quantities they represent cannot be determined simultaneously in the same particle.
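    The noncommutation of Jx and Jz can be checked directly with the standard j = 1 matrix representations (a sketch in units of ℏ; the matrices are the conventional textbook forms, not reproduced from this chapter):

```python
# Numerical check that Jx and Jz do not commute for j = 1 (units of hbar).
import numpy as np

s = 1 / np.sqrt(2)
Jx = s * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Jy = s * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]], dtype=complex)
Jz = np.diag([1.0, 0.0, -1.0]).astype(complex)

comm = Jx @ Jz - Jz @ Jx                  # the commutator [Jx, Jz]
print(np.allclose(comm, -1j * Jy))        # True: [Jx, Jz] = -i Jy, nonzero

# |1 +1z> is an eigenstate of Jz but not of Jx:
ket = np.array([1.0, 0.0, 0.0], dtype=complex)
print(np.allclose(Jz @ ket, ket))                    # True
print(np.allclose(Jx @ ket, (Jx @ ket)[0] * ket))    # False: Jx mixes jz states
```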

    2 Wave Function and Matrix Representations

    A connection with the forms of quantum mechanics with which we may be more familiar can be obtained by considering the wave function ψn corresponding to the state |n〉. In measurement algebra notation, we write it as ψn(r) = 〈r|n〉. The square of the magnitude of the wave function (which may be a complex number) is a function of r, which gives the probability density for finding a particle in state n at the position r:

    Pn(r) = |ψn(r)|² = |〈r|n〉|²

    The integral of Pn(r) over all space is, of course, unity:

    ∫ Pn(r) d³r = 1

    In general, a combination of the form 〈m|n〉 is a number (called the projection or transformation matrix element), the square of which gives the probability that if the system is in state n, it is also in state m. When m is a set of coordinates, r, this is just the ordinary coordinate space wave function. If m and n are two eigenstates of the same operator, then 〈m|n〉 = δmn (= 0 if m ≠ n, and = 1 if m = n); if a system has just been measured as being in some particular state, then it is in that state, and no other.

    A problem, which is often encountered, is that of not knowing the set of eigenstates for a particular physical situation, although knowing the set for a closely related, usually simpler one. Fortunately, there exist mathematical procedures for solving this problem. Any arbitrary state (or function) can always be expanded in terms of a suitable complete set of states (or functions); a familiar example is Fourier’s series:

    f(x) = Σn (an cos nx + bn sin nx)

    or, in integral form,

    f(x) = ∫ [a(k) cos kx + b(k) sin kx] dk

    Such an expansion can be carried out with any set, so long as it is complete. We may use, for example, Hermite polynomials (as occur in harmonic oscillator wave functions), spherical harmonics (rigid rotor wave functions), or the hydrogenlike orbitals for a particle in a Coulomb field equally as well as sines and cosines, which themselves happen to be particle-in-a-box wave functions. Suppose we do this and express our desired exact wave function for some system as

    ψn = Σk ck ψk⁰ (1.3)

    If we write this familiar expression in terms of measurement algebra, it becomes

    |n〉 = Σk |k⁰〉〈k⁰|n〉,

    where the transformation number 〈k⁰|n〉 is seen to be just the expansion coefficient. A convenient formal abbreviation immediately suggests itself, namely,

    Σk |k〉〈k| = 1 (1.4)

    The |k〉〈k| are known as projection operators. Equation (1.4) is a more general form of the completeness relation, an example of which we have seen. The corresponding expression for a continuous variable, such as position r or momentum p, would be just

    ∫ |r〉〈r| d³r = 1

    We can use this last form to show the equivalence of the Schrödinger and Heisenberg representations of quantum mechanics. The matrix element of an operator A is defined as

    Amn = ∫ ψm*(r) Aop ψn(r) d³r

    where Aop represents a mathematical operation on the Schrödinger wave function ψn (multiplication by a constant, differentiation, and so on). Using the state vector notation gives

    Amn = ∫ 〈m|r〉 Aop 〈r|n〉 d³r = 〈m|A|n〉

    Since this is true for any continuous variable, not only r, we see that the matrix elements of the Heisenberg representation are indeed independent of any coordinate system. From here on, we shall use the general form 〈m|A|n〉 for a matrix element.

    Figure 1.2

    Definition of spherical polar coordinates, r, θ, and φ.

    In order to make the general principles more concrete, let us consider once more the example we used in section 1, namely, the angular momentum in a j = 1 system. The operator for angular momentum is obtained from the classical r × p form by substituting p → (ℏ/i)∇; in the spherical (r, θ, φ) coordinate system shown in figure 1.2, this becomes

    Jz = (ℏ/i) ∂/∂φ,   J² = −ℏ²[(1/sin θ)(∂/∂θ)(sin θ ∂/∂θ) + (1/sin²θ)(∂²/∂φ²)]

    Consider the wave function

    Y11(θ, φ) = −(3/8π)^(1/2) sin θ e^(iφ),

    which is just one of the spherical harmonic functions. When J² is operated on this function, we find by explicit algebraic substitution that

    J²Y11(θ, φ) = 2ℏ²Y11(θ, φ)

    and, similarly,

    JzY11(θ, φ) = ℏY11(θ, φ)

    Y11 is, thus, an eigenfunction of these two operators, and the coefficient that appears when the indicated operation is performed is the eigenvalue of the operator for that eigenfunction.
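    These eigenvalue statements are easy to verify symbolically. The sketch below applies Jz = −i ∂/∂φ and the angular part of J² (in units of ℏ) to Y11, written with the conventional Condon-Shortley normalization (an assumption, since the preview omits the explicit function):

```python
# Symbolic check (hbar = 1) that Y11 is an eigenfunction of J^2 and Jz
# with eigenvalues j(j+1) = 2 and m = 1.
import sympy as sp

theta, phi = sp.symbols('theta phi')
Y11 = -sp.sqrt(sp.Rational(3, 8) / sp.pi) * sp.sin(theta) * sp.exp(sp.I * phi)

Jz_Y = -sp.I * sp.diff(Y11, phi)              # Jz = -i d/dphi
print(sp.simplify(Jz_Y - Y11))                # 0, i.e. Jz Y11 = (1) Y11

J2_Y = -(sp.diff(sp.sin(theta) * sp.diff(Y11, theta), theta) / sp.sin(theta)
         + sp.diff(Y11, phi, 2) / sp.sin(theta)**2)
print(sp.simplify(J2_Y - 2 * Y11))            # 0, i.e. J^2 Y11 = 1(1+1) Y11
```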

    If we try to carry out the operation JxY11(θ, φ), however, we shall find that it cannot be reduced to the form (constant) × Y11(θ, φ). What we obtain, in fact, are three terms, which turn out to be Y11, Y10, and Y1−1, corresponding to the three physical beams produced in figure 1.1. An explicit verification of the commutation rule, equation (1.2), can be carried out in the same way.

    If we evaluate the matrix element 〈11z|Jx|11z〉, we find that it is equal to zero; the matrix element 〈11z|Jx²|11z〉, though, is not equal to zero, so that

    〈Jx²〉 − 〈Jx〉² ≠ 0; (1.5)

    we say that there is a dispersion in the operator spectrum. More commonly, one says that Jx is not a good quantum number in the |11z〉 representation. This statement implies the dispersion of equation (1.5) and the noncommutation of equation (1.2). We shall see, later on, that the key step in interpreting complex spectra is to establish what the good quantum numbers are for the system we are examining.

    3 Basis Functions for the Energy

    The quantum mechanical operator with which we shall be most concerned in molecular spectroscopy is the Hamiltonian operator, of which the eigenvalue is the energy:

    ℋ|n〉 = En|n〉, (1.6)

    which is the Schrödinger equation. Experimental spectroscopy makes use of atoms and molecules themselves as a sort of miniature analog computer to solve this equation for the En ; the theoretical interpretation consists essentially of finding the basis in which we can most easily effect the same solution, that is, in which the Hamiltonian operator is most nearly diagonal. It is possible to say the same thing in a variety of ways: That the Hamiltonian is diagonal, that we have found the good quantum numbers for the system, that certain dynamical variables are constants of the motion, or that the operators for these dynamical variables all commute with one another are all equivalent statements.

    There are a few simple systems for which the differential equation represented by equation (1.6) can be solved by elementary methods. These systems will be familiar to anyone who has taken an elementary course in quantum chemistry, and so we shall simply tabulate the results of their solution (see table 1.1). Of these four systems, the first is not of much interest to the spectroscopy of bound systems. The last three, however, form the framework within which nearly all atomic and molecular problems are discussed.³ Only a few simple systems are actually describable by these elementary solutions; more often, we have a system that is close to being so described, but is not exactly equivalent. In such cases, we make use of perturbation theory, which is described in the following section.

    4 Perturbation Theory of Energy Corrections

    Suppose that we are interested in a system for which the Hamiltonian is given by

    ℋ = ℋ⁰ + λℋ′, (1.7)

    where ℋ⁰ is one of the elementary systems listed in table 1.1 and ℋ′ is the additional part that makes our particular system different from these and harder to solve. The multiplicative factor λ is included as a mathematical convenience, in order to keep track of orders of magnitude during the derivation to follow. We shall seek an expansion for the eigenstates and energy eigenvalues as follows:

    |n〉 = |n⁰〉 + λ|n(1)〉 + λ²|n(2)〉 + ··· (1.8)

    En = En⁰ + λEn(1) + λ²En(2) + ··· (1.9)

    Rewriting the Schrödinger equation (1.6) and keeping together all terms in the same power of λ, we have

    Table 1.1a

    (ℋ⁰ − En⁰)|n⁰〉 + λ[(ℋ⁰ − En⁰)|n(1)〉 + (ℋ′ − En(1))|n⁰〉] + λ²[(ℋ⁰ − En⁰)|n(2)〉 + (ℋ′ − En(1))|n(1)〉 − En(2)|n⁰〉] + ··· = 0 (1.10)

    The first term, to zero order in λ, involves only the unperturbed system ℋ⁰ and is equal to zero. We shall also set terms smaller than λ in the Hamiltonian operator itself equal to zero. Then, to find the first-order corrections to the states and energies, we expand the state correct to first order in terms of the zero-order basis set; thus,

    |n(1)〉 = Σk |k⁰〉〈k⁰|n(1)〉

    ℋ⁰, operating on this state, yields

    ℋ⁰|n(1)〉 = Σk Ek⁰ |k⁰〉〈k⁰|n(1)〉

    Collecting all the terms to the first power in λ gives

    (ℋ⁰ − En⁰)|n(1)〉 = (En(1) − ℋ′)|n⁰〉 (1.11)

    To find En(1), we form the matrix element of the preceding equation by premultiplying both sides by the complex conjugate state 〈n⁰| and integrating over all space. The result of doing this is

    Σk (Ek⁰ − En⁰)〈n⁰|k⁰〉〈k⁰|n(1)〉 = En(1)〈n⁰|n⁰〉 − 〈n⁰|ℋ′|n⁰〉

    or, using the fact that both |n⁰〉 and |k⁰〉 are part of the same orthonormal set,

    Σk (Ek⁰ − En⁰) δnk 〈k⁰|n(1)〉 = En(1) − 〈n⁰|ℋ′|n⁰〉

    But the left-hand side clearly equals zero, for the only term in the sum for which δnk ≠ 0 is the one with n = k, and for that term Ek⁰ − En⁰ = 0. This gives us the simple result

    En(1) = 〈n⁰|ℋ′|n⁰〉 (1.12)

    In other words, the first-order correction to the energy, En(1), is simply the diagonal matrix element of the perturbing term in the Hamiltonian, in the zero-order basis set.

    In many cases, the first-order correction is not sufficiently accurate, and we have to go one step further and find the second-order correction. In order to do this, we shall need to know the wave functions to first order.⁴ We can find them by taking an off-diagonal matrix element of equation (1.11), premultiplying by 〈l⁰| with l ≠ n:

    (El⁰ − En⁰)〈l⁰|n(1)〉 = −〈l⁰|ℋ′|n⁰〉

    This simplifies to

    〈l⁰|n(1)〉 = 〈l⁰|ℋ′|n⁰〉 / (En⁰ − El⁰)

    This gives us an expression for 〈l⁰|n(1)〉, which, when substituted into the definition of |n(1)〉, namely,

    |n(1)〉 = Σl |l⁰〉〈l⁰|n(1)〉,

    gives

    |n(1)〉 = Σl≠n |l⁰〉〈l⁰|ℋ′|n⁰〉 / (En⁰ − El⁰)

    To find the second-order correction to the energy, we return to equation (1.10). The terms to zero and first orders in λ have already been made to equal zero; so we simply collect the terms to second order in λ, giving

    (ℋ⁰ − En⁰)|n(2)〉 + (ℋ′ − En(1))|n(1)〉 − En(2)|n⁰〉 = 0 (1.13)

    We let

    |n(2)〉 = Σk |k⁰〉〈k⁰|n(2)〉,

    and we shall need to evaluate

    ℋ⁰|n(2)〉 = Σk Ek⁰ |k⁰〉〈k⁰|n(2)〉

    and

    ℋ′|n(1)〉 = Σl≠n ℋ′|l⁰〉〈l⁰|ℋ′|n⁰〉 / (En⁰ − El⁰)

    Making all these substitutions into equation (1.13) gives

    If we now form the matrix elements of this complicated equation by premultiplying by 〈n⁰|, and make liberal use of the orthonormality of the basis set, we obtain as a result

    En(2) = Σk≠n |〈k⁰|ℋ′|n⁰〉|² / (En⁰ − Ek⁰) (1.14)

    Equation (1.14) can be interpreted in terms of a mixing of certain characteristics of other members of the basis states into the perturbed states, with the strength of mixing inversely proportional to the difference in energy between the two states. Such interpretations can easily be overdone, though; equation (1.14) really just reflects the fact that we have used the basis states for the unperturbed system as a convenient complete set for expansion of the perturbed states.
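    The nondegenerate results (1.12) and (1.14) can be checked against exact diagonalization for a small model. The following sketch uses an invented two-level Hamiltonian (all numbers hypothetical, chosen only so the perturbation is weak):

```python
# Comparing the perturbation series through second order with exact
# diagonalization for a hypothetical two-level system.
import numpy as np

H0 = np.diag([0.0, 1.0])                 # unperturbed energies E_n^0
V = np.array([[0.10, 0.05],
              [0.05, -0.10]])            # perturbation H'

# First order: diagonal element of H' (eq. 1.12)
E1 = V[0, 0]
# Second order: |<k|H'|n>|^2 / (E_n^0 - E_k^0), summed over k != n (eq. 1.14)
E2 = V[1, 0]**2 / (H0[0, 0] - H0[1, 1])

E_pert = H0[0, 0] + E1 + E2
E_exact = np.linalg.eigvalsh(H0 + V)[0]  # lowest exact eigenvalue
print(f"perturbative: {E_pert:.5f}, exact: {E_exact:.5f}")  # agree to ~1e-3
```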

    There may be several distinct states with the same energy eigenvalue in the unperturbed system we are considering; these are termed degenerate states. The effect of a perturbation in such a case is often to remove the degeneracy, or, as it is often called, split the levels. The origin of the term splitting may be seen from the effects of electric and magnetic fields on actual spectroscopic lines (see figures 2.14 and 2.18). A different, somewhat more elaborate mathematical procedure is called for in this case.

    Suppose that the unperturbed level En⁰ is α-fold degenerate, with orthonormal states |ni〉, i = 1, . . . , α, so that

    ℋ⁰|ni〉 = En⁰|ni〉

    The first effect of the perturbation will be to produce new linear combinations of these basis states, given by

    |n̄〉 = Σi ai |ni〉

    These linear combinations |n̄〉 are known as the correct zero-order states. The states in the presence of the perturbation can, as usual, be expanded in a power series in λ,

    but, as before, we need to know only the correct zero-order states to determine the energy to first order. Substituting into equation (1.10) and picking out the terms to first order in λ give

    If we let

    we have

    so that

    We again form the matrix element of both sides of the equation by premultiplying by 〈nj| and using orthonormality; this time, we get a set of α simultaneous linear equations,

    Σi (〈nj|ℋ′|ni〉 − En(1) δji) ai = 0

    for j = 1, . . . , α. The method for the solution of such a set of simultaneous equations requires that the determinant of the coefficients be equal to zero, or

    det |〈nj|ℋ′|ni〉 − En(1) δji| = 0 (1.15)

    The left-hand side of equation (1.15) is known as a secular determinant for a particular n. We have used the additional orthonormality relation 〈nj|ni〉 = δij in constructing the determinant. The solution to equation (1.15) is found, first of all, as the α algebraic roots of the equation, which gives us up to α different values for E(1). (Some of the roots may be the same numerically —it is conceivable that a particular perturbation may not remove all the degeneracy present in a system.) This operation is equivalent to finding a transformation of the basis set that makes the Hamiltonian matrix diagonal.

    A simple example of the foregoing involves an initially doubly degenerate level, for which the secular determinantal equation is particularly easy to solve. The correct zero-order states are just

    and the energies of the states, to first order, are
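    The doubly degenerate case can be checked numerically. For a perturbation with equal diagonal elements H′11 = H′22 (an assumption here; the numbers are invented), the 2 × 2 secular equation gives first-order energies H′11 ± H′12 and correct zero-order states (|n1〉 ± |n2〉)/√2:

```python
# Solving the 2x2 secular problem of eq. (1.15) for a degenerate pair.
import numpy as np

Hp = np.array([[0.3, 0.2],
               [0.2, 0.3]])        # H' within the degenerate subspace
E1, U = np.linalg.eigh(Hp)         # first-order energies and zero-order states

print(E1)                          # [0.1, 0.5], i.e. 0.3 -/+ 0.2
print(np.abs(U))                   # columns ~ (1, -/+1)/sqrt(2)
```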

    5 Time-Dependent Perturbations: Absorption and Emission of Electromagnetic Radiation

    Thus far, we have been considering stationary states of atoms and molecules that are generated by time-independent Hamiltonian operators. What spectroscopy is primarily concerned with, however, is transitions between different energy levels; so we must turn our attention to those time-varying operators that can induce transitions.

    The Schrödinger equation including the time is just

    iℏ (∂/∂t)|n, t〉 = ℋ|n, t〉

    If ℋ⁰ is time independent, as we have been considering it until this point, it is easy to show that if the time-dependent state has the form

    |n, t〉 = |n〉 e^(−iEnt/ℏ)

    then we regain the time-independent equation we have been using,

    ℋ⁰|n〉 = En|n〉

    Suppose we now add a perturbation which is time dependent, so that

    ℋ = ℋ⁰ + ℋ′(t) (1.16)

    We can expand the time-dependent state in terms of the stationary states, that is, the solutions for ℋ′(t) = 0, with the appropriate energy phase factor associated with each term, that is,

    Ψ(t) = Σn cn(t) |n〉 e^(−iEnt/ℏ)

    If we substitute this into equation (1.16), we obtain

    Because |n〉 represents a time-independent state, the terms involving ∂|n〉/∂t in this equation are all zero. This gives the simplified equation

    Premultiplying both sides of this equation by the complex conjugate state vector 〈m| gives

    We make use of the orthonormality of the basis functions to eliminate all the terms on the right-hand side of the above equation except the one in which m = n. This leaves us with the following set of equations for the time development of the coefficients cm(t):

    Solving for the time derivative of cm(t) then gives

    iℏ dcm(t)/dt = Σn 〈m|ℋ′(t)|n〉 e^(−i(En − Em)t/ℏ) cn(t) (1.17)

    By defining (En − Em)/ℏ = ωnm and 〈m|ℋ′(t)|n〉 = Vmn, this becomes

    iℏ dcm(t)/dt = Σn Vmn e^(−iωnm t) cn(t) (1.18)

    It should be emphasized that (1.17) or (1.18) is an exact result; no approximations have yet been introduced.

    The first-order perturbation result describes transitions between two isolated levels under the influence of a weak time-dependent perturbation connecting the levels. To obtain this result, we let all the cn(t) be zero except for one, say, ci(t) ≡ a(t), which initially has the value

    ci(0) = a(0) = 1.

    Then the value of the amplitude of any other state m is given by

    cm(t) = (1/iℏ) ∫₀ᵗ Vma(t′) e^(−iωam t′) a(t′) dt′

    If we further assume that the interaction is sufficiently weak so that cm(t) never becomes large, we can replace a(t′) by 1; then

    cm(t) = (1/iℏ) ∫₀ᵗ Vma(t′) e^(−iωam t′) dt′

    We now need to specify Vma(t). Let us assume that the perturbation is an oscillating or harmonic function such as an electromagnetic field (see following section), so that

    Vma(t) = V⁰ma cos ωt

    This gives

    cm(t) = (V⁰ma/2ℏ) [ (e^(−i(ω + ωam)t) − 1)/(ω + ωam) − (e^(i(ω − ωam)t) − 1)/(ω − ωam) ] (1.19)

    In a typical physical situation, there will be a whole set of transitions from state a to other states m, with a frequency ωam = (Ea − Em)/ℏ associated with each transition. The resonant denominator in (1.19) picks out the transition having ωam ≈ ω; the amplitudes for all the other states, for which this condition is not met, will remain close to zero. Let us call this near coincidence the resonant transition, with amplitude cb(t) ≡ b(t). Then, since ω ≈ ωab, the first antiresonant term in (1.19) can be neglected in comparison with the second term, since it will be rapidly oscillating and thus will average to zero over any significant time interval. We shall encounter this removal of the antiresonant term later, as the rotating wave approximation (RWA); it is equivalent to replacing an oscillating perturbation V⁰ cos ωt with a rotating perturbation V⁰e^(iωt). We can then write, in the RWA,

    b(t) = −(V⁰ba/2ℏ) (e^(i(ω − ωab)t) − 1)/(ω − ωab)

    The probability of a transition from state |a〉 to state |b〉, given by the square of this amplitude, is then

    |b(t)|² = (|V⁰ba|²/ℏ²) sin²[(ω − ωab)t/2] / (ω − ωab)² (1.20)

    For further discussion, the reader is referred to any of several standard texts on quantum mechanics (reference 4).
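    The first-order result (1.20) can be tested by integrating the exact coupled equations (1.18) for two levels driven by V(t) = V⁰ cos ωt. The sketch below (ℏ = 1, all parameter values invented) uses a simple Euler integration; agreement is expected only in the weak-coupling regime assumed by the derivation, and the neglected antiresonant term contributes a residual few-percent wobble:

```python
# Two-level check of the first-order (RWA) transition probability.
import numpy as np

w_ba, w, V0 = 1.0, 0.9, 0.02      # level spacing, field frequency, coupling
dt, T = 1e-3, 40.0
t = 0.0
ca, cb = 1.0 + 0j, 0.0 + 0j       # start entirely in the lower state a

while t < T:                       # Euler step of the exact eqs. (1.18)
    V = V0 * np.cos(w * t)
    dca = -1j * V * np.exp(-1j * w_ba * t) * cb
    dcb = -1j * V * np.exp(+1j * w_ba * t) * ca
    ca, cb = ca + dca * dt, cb + dcb * dt
    t += dt

delta = w_ba - w
P_pert = V0**2 * np.sin(delta * T / 2)**2 / delta**2   # sinc^2 formula (1.20)
print(abs(cb)**2, P_pert)          # close, for this weak coupling
```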

    If the radiation is exactly at resonance (ω − ωab = 0), then (1.20) reduces to

    |b(t)|² = |V⁰ba|² t² / 4ℏ²

    that is, for short time and for weak coupling between the molecule and the radiation field. Ordinarily, the oscillating frequency dependence of (1.20) is not seen, since the interaction must be averaged over some distribution of frequencies g(ω):

    |b(t)|² = (|V⁰ba|²/ℏ²) ∫ g(ω) sin²[(ω − ωab)t/2]/(ω − ωab)² dω ≈ (π|V⁰ba|²/2ℏ²) g(ωab) t

    The rate of transitions from state |a〉 to state |b〉, which determines the optical absorption coefficient, is then

    W(a → b) = d|b(t)|²/dt = (π|V⁰ba|²/2ℏ²) g(ωab) (1.21)

    In other words, for transitions to occur, there must be a nonvanishing spectral density at the resonant frequency ωab; this is just a restatement of the Bohr-Einstein frequency rule introduced at the beginning of this chapter. The frequency dependence can be displayed, however, under the proper conditions, such as in a molecular beam experiment. (See chapter 10.4 for a discussion of this method.) In this experiment, a beam of molecules, all at a preselected velocity, was sent through a radio frequency field of finite spatial extent. The measurement made was of what fraction of the molecules had undergone a transition at ωab after passing through this field; clearly, all the molecules were subjected to the influence of the field for the same time.

    The frequency distribution g(ω) was essentially monochromatic, and the frequency ω was continuously varied. The resulting spectrum, shown in figure 1.3, displays the predicted sin² frequency dependence.

    The questions that remain to be addressed at this point can be stated as follows:

    1. What is the explicit form of the perturbation ℋ′(t), and what does this form imply about which molecular states can couple with each other?

    2. What is the relation of (1.20) and (1.21) to bulk optical properties of materials, such as the absorption coefficients?

    3. How can we deal with situations in which the perturbation approximation breaks down as the strength of the coupling becomes large?

    Figure 1.3

    Frequency-dependent transition probability for a molecular beam resonance experiment: ———, experimental curve for HCN molecule, J = 1, MJ = 1, MF = 0; . . . , calculated [sin²{(ω − ωab)t/2}/(ω − ωab)²] behavior for molecular velocity v0 = (8 × 10⁴ cm sec⁻¹) ± 10%. [From T. R. Dyke, G. R. Tomasevich, W. Klemperer, and W. Falconer, J. Chem. Phys. 57, 2277 (1972). Reproduced with permission.]

    The first two of these questions will be dealt with in the following sections; discussion of the last point will be deferred until the Rabi solution for two-level systems is taken up in chapter 11.

    6 The Electric Dipole Interaction

    If the wavelength of the radiation is long compared with molecular dimensions, so that the spatial variation of the field E(r, t) over the molecule (the k·r dependence) can be neglected, the interaction ℋ′(t) is simply

    ℋ′(t) = −Σk E⁰k cos ωt Σj ej rjk (1.22)

    where the sum over k includes the x, y, and z components of E⁰, and the sum over j includes all the charges (nuclei and electrons) in the molecule. Equation (1.22) simply expresses the interaction of the electric field with the dipole moment operator. This treatment will not be adequate for our purposes, however, because we shall want to consider higher-order interactions.

    The more complete treatment of these processes begins with obtaining the Hamiltonian operator in the correct relativistically invariant form, in which all coordinate systems moving at constant velocity are equivalent. This is necessary because we are dealing with electromagnetic radiation, which is a relativistic phenomenon.⁵ The conventional Hamiltonian has kinetic and potential energy terms of the general form

    ℋ = p²/2m + V(r)

    with the operator for the momentum p = (ℏ/i)∇. The relativistically invariant form is

    ℋ = (1/2m)[p − (e/c)A]² + eφ + V(r)

    where A is the vector potential and φ is the scalar potential associated with the electromagnetic field. Writing out the square of the momentum in detail, we obtain the form of the Schrödinger equation as

    The time dependence of the radiation field is now that of the vector potential A:

    A(r, t) = A⁰ cos(ωt − k·r) (1.23)

    The electric field is given by

    E = −(1/c) ∂A/∂t (1.24)

    and the associated magnetic field is

    B = ∇ × A, (1.25)

    always oriented perpendicularly to the electric field.

    We are always free to choose a gauge (reference 6) that makes the problem more tractable mathematically; in this case, the best choice is the Coulomb gauge, which provides that ∇ · A = 0 and the scalar potential φ = 0.
