
1. Entropy

Definition: Entropy is the quantitative measure of disorder in a system.

The concept comes out of thermodynamics, which deals with the transfer of heat energy within a system. Instead of talking about some form of "absolute entropy," physicists generally talk about the change in entropy that takes place in a specific thermodynamic process.

2. Third law of thermodynamics

The Third Law of Thermodynamics refers to a state known as "absolute zero," the bottom point on the Kelvin temperature scale. The Kelvin scale is absolute, meaning 0 kelvin is the lowest possible temperature in the universe. This corresponds to about -273.15 degrees Celsius, or -459.67 degrees Fahrenheit.

3. Free Energy

Definition: Gibbs free energy is a thermodynamic property that was defined in 1876 by Josiah Willard Gibbs to predict whether a process will occur spontaneously at constant temperature and pressure. The Gibbs free energy G is defined as

G = H - TS

where H, T, and S are the enthalpy, temperature, and entropy.

Changes in the Gibbs free energy G correspond to changes in free energy for processes at constant temperature and pressure. The change in Gibbs free energy is the maximum non-expansion work obtainable under these conditions. ΔG is negative for spontaneous processes, positive for nonspontaneous processes, and zero for processes at equilibrium.
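The sign rule above can be illustrated with a short sketch (an assumed example, not from the notes; the melting-of-ice numbers are rough textbook values):

```python
# A minimal sketch: classifying spontaneity from Delta G = Delta H - T * Delta S
# at constant temperature and pressure. The numbers below are illustrative
# textbook values for the melting of ice, not data from these notes.
def gibbs_free_energy_change(delta_H, T, delta_S):
    """Delta G in J/mol, for Delta H in J/mol, T in K, Delta S in J/(mol K)."""
    return delta_H - T * delta_S

def spontaneity(delta_G):
    if delta_G < 0:
        return "spontaneous"
    if delta_G > 0:
        return "nonspontaneous"
    return "equilibrium"

# Melting of ice: Delta H ~ +6010 J/mol, Delta S ~ +22.0 J/(mol K)
dG_cold = gibbs_free_energy_change(6010, 263.15, 22.0)   # below 0 C
dG_warm = gibbs_free_energy_change(6010, 298.15, 22.0)   # room temperature
```

Note that ΔG changes sign near T = ΔH/ΔS ≈ 273 K, the melting point, which is exactly the equilibrium condition ΔG = 0.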

5. Laws of Thermodynamics

Energy exists in many forms, such as heat, light, chemical energy, and electrical energy. Energy is the ability to bring about change or to do work. Thermodynamics is the study of energy.

First Law of Thermodynamics: Energy can be changed from one form to another, but it cannot be created or destroyed; the total amount of energy and matter in the Universe remains constant, merely changing from one form to another. For this reason the First Law is also called the law of conservation of energy.

The Second Law of Thermodynamics states that in all energy exchanges, if no energy enters or leaves the system, the potential energy of the final state will always be less than that of the initial state. The quantity that measures this dissipation of usable energy is entropy. A watchspring-driven watch will run until the potential energy in the spring is converted, and not again until energy is reapplied to the spring to rewind it. A car that has run out of gas will not run again until you walk 10 miles to a gas station and refuel the car. Once the potential energy locked in carbohydrates is converted into kinetic energy (energy in use or motion), the organism will get no more until energy is input again. In the process of energy transfer, some energy will dissipate as heat. Entropy is a measure of disorder: cells are NOT disordered and so have low entropy. The flow of energy maintains order and life. Entropy wins when organisms cease to take in energy and die.

6. Lagrange's equation

Lagrange's equations are fundamental relations in Lagrangian mechanics given by

d/dt (∂T/∂q̇_j) − ∂T/∂q_j = Q_j   (1)

where q_j is a generalized coordinate, Q_j is the corresponding generalized force, and T is the kinetic energy. For conservative forces with potential V this leads to

d/dt (∂L/∂q̇_j) − ∂L/∂q_j = 0   (2)

where L = T − V is the Lagrangian; equation (2) is called the Euler-Lagrange differential equation. Lagrange's equations can also be expressed in Nielsen's form.

7. Heat capacity

The heat capacity C of a substance is the amount of heat required to change its temperature by one degree, and has units of energy per degree. The heat capacity is therefore an extensive variable, since a large quantity of matter will have a proportionally large heat capacity. A more useful quantity is the specific heat (also called specific heat capacity), which is the amount of heat required to change the temperature of one unit of mass of a substance by one degree. Specific heat is therefore an intensive variable and has units of energy per mass per degree.

8. Density operator

The density operator is a generalization of the wavefunction to include the possibility of uncertainty in its preparation. If we know only that the system is described by an ensemble of quantum states |ψ_i⟩ with probabilities p_i, then the appropriate density operator is

ρ = Σ_i p_i |ψ_i⟩⟨ψ_i|
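The ensemble construction can be checked numerically; here is a minimal sketch (an assumed qubit ensemble, not from the notes), which verifies that ρ has unit trace and that a genuine mixture has purity tr(ρ²) < 1:

```python
import numpy as np

# A minimal sketch: the density operator rho = sum_i p_i |psi_i><psi_i|
# for an assumed 50/50 ensemble of the qubit states |0> and |+>.
up = np.array([1.0, 0.0])
plus = np.array([1.0, 1.0]) / np.sqrt(2.0)

probs = [0.5, 0.5]
states = [up, plus]
rho = sum(p * np.outer(psi, psi.conj()) for p, psi in zip(probs, states))

# rho is Hermitian with unit trace; purity tr(rho^2) < 1 signals a mixed state
purity = np.trace(rho @ rho).real
```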

9. Liouville theorem

Liouville's theorem is one of the most fundamental results in theoretical physics. It is worth clearly understanding what it is telling us.

Let's start by considering a specific system, and see what the laws of physics (including Liouville's theorem) have to say about it. We choose the simple case of particles moving in circular orbits in a gravitational field. Here are some technical details: The force is proportional to 1/r² and is directed toward the origin. The particles do not interact with each other. Each particle has a position vector r and a momentum vector p. The components of r in Cartesian coordinates are r_x and r_y. The components of r in polar coordinates are r and θ.
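For such an inverse-square force of magnitude k/r², a circular orbit must satisfy the force balance m v²/r = k/r². A small numerical sketch of this condition (the constants k, m, r are arbitrary illustrative choices, not from the notes):

```python
# A minimal sketch: for an attractive force of magnitude k/r^2 directed
# toward the origin, a circular orbit of radius r requires
#   m v^2 / r = k / r^2,   i.e.   v = sqrt(k / (m r)).
def circular_orbit_speed(k, m, r):
    return (k / (m * r)) ** 0.5

k, m, r = 4.0, 1.0, 2.0  # arbitrary illustrative values
v = circular_orbit_speed(k, m, r)

# centripetal force demanded by the circular motion vs. the attractive pull
centripetal = m * v * v / r
attraction = k / (r * r)
```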

10. Coriolis force

The Coriolis effect is a deflection of moving objects when they are viewed in a rotating reference frame. In a reference frame with clockwise rotation, the deflection is to the left of the motion of the object; in one with counter-clockwise rotation, the deflection is to the right.
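The deflection follows from the Coriolis acceleration a = −2 Ω × v in the rotating frame. A short sketch (assumed illustrative vectors, not from the notes) confirming that counter-clockwise rotation deflects motion to the right:

```python
import numpy as np

# A minimal sketch: the Coriolis acceleration in a rotating frame,
# a = -2 * (omega x v). Vectors below are illustrative choices.
def coriolis_acceleration(omega, v):
    return -2.0 * np.cross(omega, v)

omega = np.array([0.0, 0.0, 1.0])  # counter-clockwise rotation about +z
v = np.array([1.0, 0.0, 0.0])      # motion along +x
a = coriolis_acceleration(omega, v)
# a points along -y: a deflection to the right of the +x motion,
# as stated for a counter-clockwise rotating frame
```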

PART B

11. (a) D'Alembert's Principle

D'Alembert's principle, also known as the Lagrange-d'Alembert principle, is a statement of the fundamental classical laws of motion. It is named after its discoverer, the French physicist and mathematician Jean le Rond d'Alembert. The principle states that the sum of the differences between the forces acting on a system and the time derivatives of the momenta of the system itself, projected along any virtual displacement consistent with the constraints of the system, is zero. Thus, in symbols, d'Alembert's principle is written as

Σ_i (F_i − ṗ_i) · δr_i = 0

where F_i is the total applied force on the i-th particle, p_i its momentum, and δr_i any virtual displacement consistent with the constraints.

12. (a) Hamilton's Equations

The Hamiltonian of a closed system is the sum of the kinetic and potential energy in the system. There is a set of differential equations known as the Hamilton equations which give the time evolution of the system. Hamiltonians can be used to describe such simple systems as a bouncing ball, a pendulum, or an oscillating spring, in which energy changes from kinetic to potential and back again over time. Hamiltonians can also be employed to model the energy of more complex dynamic systems such as planetary orbits in celestial mechanics, and they also appear in quantum mechanics.[2]

The Hamilton equations are generally written as follows:

ṗ = −∂H/∂q,   q̇ = ∂H/∂p

In the above equations, the dot denotes the ordinary derivative with respect to time of the functions p = p(t) (called generalized momenta) and q = q(t) (called generalized coordinates), taking values in some vector space, and H = H(p, q, t) is the so-called Hamiltonian, or (scalar-valued) Hamiltonian function. Thus, more explicitly, one can equivalently write

dp/dt = −∂H/∂q (p(t), q(t), t),   dq/dt = ∂H/∂p (p(t), q(t), t)

and specify the domain of values in which the parameter t (time) varies.
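Hamilton's equations can be integrated directly. A minimal sketch (an assumed example, not from the notes) for the harmonic oscillator H = p²/(2m) + kq²/2, using a semi-implicit (symplectic) Euler step so that the energy stays close to its initial value:

```python
# A minimal sketch: integrating Hamilton's equations
#   q' = dH/dp = p/m,   p' = -dH/dq = -k*q
# for a harmonic oscillator H = p^2/(2m) + k q^2/2, with the
# semi-implicit (symplectic) Euler scheme.
def integrate(q, p, m, k, dt, steps):
    for _ in range(steps):
        p -= k * q * dt       # p' = -dH/dq
        q += (p / m) * dt     # q' =  dH/dp, using the updated p
    return q, p

m, k = 1.0, 1.0
q0, p0 = 1.0, 0.0
E0 = p0 * p0 / (2 * m) + k * q0 * q0 / 2
q, p = integrate(q0, p0, m, k, dt=1e-3, steps=10_000)
E = p * p / (2 * m) + k * q * q / 2
# E remains close to E0: symplectic schemes bound the energy error
```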

13. Euler's Equations

Euler's equations describe the rotation of a rigid body in a frame of reference fixed in the rotating body and having its axes parallel to the body's principal axes of inertia.
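Since the notes omit the formulas, here is the standard form: with principal moments of inertia I_1, I_2, I_3, body-frame angular velocity components ω_1, ω_2, ω_3, and applied torque components M_1, M_2, M_3,

```latex
\begin{aligned}
I_1\,\dot\omega_1 + (I_3 - I_2)\,\omega_2\omega_3 &= M_1,\\
I_2\,\dot\omega_2 + (I_1 - I_3)\,\omega_3\omega_1 &= M_2,\\
I_3\,\dot\omega_3 + (I_2 - I_1)\,\omega_1\omega_2 &= M_3.
\end{aligned}
```

For torque-free motion (M_1 = M_2 = M_3 = 0) these equations conserve both the kinetic energy and the magnitude of the angular momentum.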

14. Triatomic molecules

Triatomic molecules are formed by three atoms. Examples include H2O, CO2, HCN, etc. A symmetric linear molecule ABA can perform:

Antisymmetric longitudinal vibrations with frequency ω_a = sqrt(k_1 M / (m_A m_B))

Symmetric longitudinal vibrations with frequency ω_s = sqrt(k_1 / m_A)

Symmetric transverse vibrations with frequency ω_t = sqrt(2 k_2 M / (m_A m_B))

In the previous formulas, M = 2m_A + m_B is the total mass of the molecule, m_A and m_B are the masses of the elements A and B, and k_1 and k_2 are the spring constants of the molecule along its axis and perpendicular to it.

15. Partition function

Partition functions describe the statistical properties of a system in thermodynamic equilibrium. The partition function is a function of temperature and other parameters, such as the volume enclosing a gas. Most of the aggregate thermodynamic variables of the system, such as the total energy, free energy, entropy, and pressure, can be expressed in terms of the partition function or its derivatives.

There are actually several different types of partition function, each corresponding to a different type of statistical ensemble (or, equivalently, a different type of free energy). The canonical partition function applies to a canonical ensemble, in which the system is allowed to exchange heat with the environment at fixed temperature, volume, and number of particles. The grand canonical partition function applies to a grand canonical ensemble, in which the system can exchange both heat and particles with the environment, at fixed temperature, volume, and chemical potential.
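As an illustration of expressing thermodynamic variables through the partition function, here is a minimal sketch (an assumed two-level system, not from the notes) computing the canonical Z and checking that the average energy equals −d(ln Z)/dβ:

```python
import math

# A minimal sketch: canonical partition function of a two-level system
# with energies 0 and eps,  Z = 1 + exp(-beta*eps), and the average
# energy derived from it,  U = -d(ln Z)/d(beta) = eps*exp(-beta*eps)/Z.
def partition_function(eps, beta):
    return 1.0 + math.exp(-beta * eps)

def average_energy(eps, beta):
    Z = partition_function(eps, beta)
    return eps * math.exp(-beta * eps) / Z

eps, beta = 1.0, 2.0  # illustrative values
U = average_energy(eps, beta)

# cross-check U against a central-difference derivative of ln Z
h = 1e-6
U_num = -(math.log(partition_function(eps, beta + h))
          - math.log(partition_function(eps, beta - h))) / (2 * h)
```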

PART C

16. Fermi Distribution

The Fermi-Dirac distribution applies to fermions, particles with half-integer spin which must obey the Pauli exclusion principle. Each type of distribution function has a normalization term multiplying the exponential in the denominator which may be temperature dependent. For the Fermi-Dirac case, the distribution is usually written:

f(E) = 1 / (e^((E − E_F)/kT) + 1)

where E_F is the Fermi energy.

The significance of the Fermi energy is most clearly seen by setting T = 0. At absolute zero, the probability is 1 for energies less than the Fermi energy and zero for energies greater than the Fermi energy. We picture all the levels up to the Fermi energy as filled, but no particle has a greater energy. This is entirely consistent with the Pauli exclusion principle, where each quantum state can hold one and only one particle.
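The T → 0 step behavior can be checked numerically; a minimal sketch (units with k = 1, illustrative energies, not from the notes), written in a numerically stable form so large exponents do not overflow:

```python
import math

# A minimal sketch: the Fermi-Dirac occupation
#   f(E) = 1 / (exp((E - E_F)/kT) + 1)
# in units with k = 1, using a stable branch for large exponents.
def fermi_dirac(E, E_F, T):
    x = (E - E_F) / T
    if x >= 0:
        e = math.exp(-x)
        return e / (1.0 + e)
    return 1.0 / (1.0 + math.exp(x))

E_F = 1.0
T = 1e-3  # near absolute zero: f approaches a step function
below = fermi_dirac(0.9, E_F, T)   # ~1: states below E_F are filled
above = fermi_dirac(1.1, E_F, T)   # ~0: states above E_F are empty
at = fermi_dirac(E_F, E_F, T)      # exactly 1/2 at E = E_F
```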


17. Microcanonical ensemble: the classical virial theorem

Consider a system with Hamiltonian H(x), and let x_i and x_j be specific components of the phase space vector x. The classical virial theorem states that

⟨x_i ∂H/∂x_j⟩ = kT δ_ij

where the average is taken with respect to a microcanonical ensemble. To prove the theorem, start with the definition of the average:

⟨x_i ∂H/∂x_j⟩ = (1/Ω(E)) ∫ dx x_i (∂H/∂x_j) δ(E − H(x)) = (1/Ω(E)) ∂/∂E ∫_{H(x)<E} dx x_i ∂H/∂x_j

where the fact that δ(E − H) = ∂θ(E − H)/∂E has been used. Also, the N and V dependence of the partition function Ω have been suppressed. Note that the above average can be written as

⟨x_i ∂H/∂x_j⟩ = (1/Ω) ∂/∂E ∫_{H(x)<E} dx x_i ∂(H − E)/∂x_j

However, writing

x_i ∂(H − E)/∂x_j = ∂/∂x_j [x_i (H − E)] − δ_ij (H − E)

allows the average to be expressed as

⟨x_i ∂H/∂x_j⟩ = (1/Ω) ∂/∂E [ ∫_{H(x)<E} dx ∂/∂x_j (x_i (H − E)) + δ_ij ∫_{H(x)<E} dx (E − H) ]

The first integral in the brackets is obtained by integrating the total derivative with respect to x_j over the phase space variable x_j. This leaves an integral that must be performed over all other variables at the boundary of phase space where H = E, as indicated by the surface element dS_j. But the integrand involves the factor (H − E), so this surface integral will vanish. This leaves

⟨x_i ∂H/∂x_j⟩ = (δ_ij/Ω) ∂/∂E ∫_{H(x)<E} dx (E − H) = (δ_ij/Ω) ∫_{H(x)<E} dx = δ_ij Σ(E)/Ω(E)

where Σ(E) is the partition function of the uniform ensemble. Recalling that Ω(E) = ∂Σ(E)/∂E, and that the entropy may equally well be computed from the uniform ensemble as S(E) = k ln Σ(E), we obtain

⟨x_i ∂H/∂x_j⟩ = δ_ij Σ(E)/(∂Σ/∂E) = δ_ij k (∂S/∂E)^(-1) = kT δ_ij

which proves the theorem.
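The special case x_i = x_j = p, with H containing the kinetic term p²/(2m), gives the equipartition result ⟨p²/m⟩ = kT. A minimal numerical sketch (an assumed single-momentum example; it samples the canonical Maxwell-Boltzmann distribution, whose averages agree with the microcanonical ones in the thermodynamic limit; units with k = 1):

```python
import random
import statistics

# A minimal sketch: check <p dH/dp> = kT for H = p^2/(2m) by sampling
# momenta from the Maxwell-Boltzmann distribution p ~ N(0, sqrt(m*k*T)).
# Units with k = 1; m, T, n are arbitrary illustrative choices.
random.seed(0)
m, T, n = 2.0, 1.5, 200_000
samples = [random.gauss(0.0, (m * T) ** 0.5) for _ in range(n)]

# p * dH/dp = p^2 / m for each sample; its average should be close to kT
virial = statistics.fmean(p * p / m for p in samples)
```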
