
CHM4M4 Simulation : Lecture Note

Koji Ando
Last updated : March 19, 2003
Contents

1 Molecular Hamiltonian
2 Adiabatic Approximation (Born-Oppenheimer)
3 Non-adiabatic couplings ∗
  3.1 Mixed Quantum-classical Simulation
4 Electronic part (Quantum chemistry)
  4.1 Hartree-Fock Method
  4.2 Electron Correlation Problem
    4.2.1 Static and Dynamic Correlations
    4.2.2 Various Methods
  4.3 Density Functional Theory
    4.3.1 Kohn-Sham theory
    4.3.2 Hybrid Hartree-Fock / DFT
  4.4 Other Methods
5 Potential Energy Surfaces
  5.1 Empirical Force-Field
  5.2 Hybrid Methods ∗
    5.2.1 TDSCF MD
    5.2.2 Born-Oppenheimer MD
    5.2.3 Hartree-Fock BO MD
    5.2.4 Car-Parrinello MD
6 Molecular Dynamics Simulation
  6.1 Summary of Classical Mechanics
  6.2 Integration
    6.2.1 Verlet algorithm
    6.2.2 Leap-frog algorithm
    6.2.3 Velocity-Verlet algorithm
  6.3 Treatment of Molecules
    6.3.1 Multiple time step method (r-RESPA) ∗
  6.4 Constant Temperature and Pressure Methods
    6.4.1 Common Statistical Ensembles
    6.4.2 Temperature and Pressure from MD
    6.4.3 Constant NVT MD ∗
    6.4.4 Constant Pressure MD ∗
7 Data Analysis
  7.1 Static Properties
  7.2 Radial distribution function
  7.3 Time correlation function
    7.3.1 Relaxation Phenomena
    7.3.2 Various Applications
8 Monte Carlo Simulation
  8.1 Standard MC (Metropolis algorithm)
  8.2 Umbrella Sampling Technique
9 Free Energy Surfaces

(∗ = advanced topics)
1 Molecular Hamiltonian
Molecules = electrons + nuclei
H = T_N + T_e + V_eN + V_ee + V_NN

e.g.,  T_N = −∑_I (ħ²/2M_I) ∇_I² ,   V_eN = −∑_{i,I} Z_I / |R_I − r_i|
Ideally, we wish to solve the whole problem,

H Ψ(R, r) = E Ψ(R, r)

But this is too difficult for most chemically interesting (complex) systems.
→ It is useful to separate the electronic and nuclear problems.

Electronic Hamiltonian:

H_e = H − T_N = T_e + V_eN + V_ee + V_NN
2 Adiabatic Approximation (Born-Oppenheimer)
Stage 1 : Fix R and solve the electronic problem
H_e φ_n(r; R) = W_n(R) φ_n(r; R)

φ_n(r; R) : electronic wavefunction (parametrically dependent on R)
W_n(R) : electronic energy levels at R

Stage 2 : Repeat at various R
→ Potential energy surface W_n(R)

Diagram : Potential energy curves W_1 and W_2 (diatomic)
Stages 1-2 = Quantum chemistry (narrow meaning)
Stage 3 : Examine nuclear dynamics on the PES W_n(R)

quantum energy levels :  H_n^(N) ≡ T_N + W_n(R)   (nuclear Hamiltonian on the n-th PES)

H_n^(N) χ_v(R) = E_{n,v} χ_v(R)

quantum dynamics : wavepacket simulations,  iħ ∂χ(R, t)/∂t = H_n^(N) χ(R, t)

classical dynamics : molecular dynamics (MD) simulations,  M_I R̈_I = −∂W_n(R)/∂R_I
statistical mechanics : Monte Carlo simulations (quantum or classical)
Stage 4 : Analysis of simulation results (statistical or dynamical)
3 Non-adiabatic couplings ∗
Expand the total wavefunction in terms of the electronic wavefunctions φ_n(r; R):

Ψ(r, R) = ∑_n χ_n(R) φ_n(r; R)        (1)
Schrödinger equation with the total Hamiltonian H = T_N + H_e :

[T_N + H_e] Ψ(r, R) = E Ψ(r, R)        (2)

Put (1) into (2), multiply by φ_k* and integrate over the electronic coordinates (i.e., apply ⟨φ_k| ).
Use H_e φ_n = W_n φ_n and ⟨φ_k|φ_n⟩ = δ_kn :

∑_n ⟨φ_k(r; R)|T_N|φ_n(r; R)⟩ χ_n(R) + W_k(R) χ_k(R) = E χ_k(R)        (3)
Noting that T_N = −∑_I (ħ²/2M_I) ∇_I² operates on both φ(r; R) and χ(R),

∑_n [ ⟨φ_k|T_N|φ_n⟩ − ∑_I (ħ²/M_I) ⟨φ_k|∇_I|φ_n⟩ · ∇_I ] χ_n(R)
      + T_N χ_k(R) + W_k(R) χ_k(R) = E χ_k(R)        (4)

[Here, T_N and ∇_I within ⟨···|···|···⟩ do not operate to the further right]
Adiabatic approximation: Neglect the 1st line of (4)
[T_N + W_k(R)] χ_k(R) = E χ_k(R)   ( or  = iħ ∂χ_k/∂t )        (5)

→ Schrödinger eq for the nuclear wavefunction χ_k(R) on a (single adiabatic) PES W_k(R)

neglected terms = Non-adiabatic couplings
→ induce mixing/transitions among different electronic states k ≠ n
1st order NA coupling :  −(ħ²/M_I) ⟨φ_k|∇_I|φ_n⟩ · ∇_I

2nd order NA coupling :  −(ħ²/2M_I) ⟨φ_k|∇_I²|φ_n⟩
3.1 Mixed Quantum-classical Simulation
Assume: Nuclear motions follow classical trajectories R(t)
Ψ(r, R, t) = ∑_n c_n(t) φ_n(r; R(t))        (6)

|c_n(t)|² = probability of finding the system in the n-th electronic state.
How to determine R(t) ? → not a trivial task
Most conveniently, classical trajectories on the adiabatic PES W_n(R)
Then, switch among PESs via non-adiabatic transitions (surface-hopping)
Diagram : Curve crossing
Put (6) into the time-dependent Schrödinger eq  iħ ∂Ψ/∂t = [T_N + H_e] Ψ :

iħ ∑_n ( (dc_n/dt) φ_n + c_n ∂φ_n/∂t ) = ∑_n c_n [T_N + W_n(R)] φ_n
Multiply by φ_k* and integrate over the electronic coordinates (i.e., apply ⟨φ_k| ).
Note the orthonormality ⟨φ_k|φ_n⟩ = δ_kn :

iħ dc_k/dt + iħ ∑_n c_n ⟨φ_k| ∂φ_n/∂t ⟩ = W_k(R) c_k + ∑_n c_n ⟨φ_k|T_N|φ_n⟩
Neglect the 2nd order NA couplings ⟨φ_k|T_N|φ_n⟩.

Use  ⟨φ_k| ∂φ_n(r; R(t))/∂t ⟩ = ⟨φ_k|∇_R|φ_n⟩ · Ṙ(t) ≡ d_kn(R) · Ṙ(t)

→  iħ ċ_k(t) = W_k(R) c_k(t) − iħ ∑_n d_kn(R) · Ṙ(t) c_n(t)

(coupled equations for the probability amplitudes c_n(t))
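As a hedged numerical sketch (not part of the original notes), these coupled amplitude equations can be integrated along a prescribed classical trajectory R(t). Below, the two-state energies W_n(R), the coupling d_12(R) and the trajectory are arbitrary model choices used only to illustrate the propagation of c_n(t):

import numpy as np

hbar = 1.0                                     # model units

def W(R):                                      # model adiabatic energies W_1(R), W_2(R)
    return np.array([0.5 * R**2, 0.5 * R**2 + 1.0])

def d12(R):                                    # model 1st-order NA coupling d_12(R)
    return 0.1 * np.exp(-R**2)

def trajectory(t):                             # prescribed classical R(t), Rdot(t)
    return np.cos(t), -np.sin(t)

def cdot(t, c):                                # dc_k/dt = -(i/hbar) W_k c_k - sum_n d_kn.Rdot c_n
    R, Rdot = trajectory(t)
    d = d12(R) * Rdot
    D = np.array([[0.0, d], [-d, 0.0]])        # d_kn is antisymmetric (d_kk = 0)
    return (-1j / hbar) * W(R) * c - D @ c

c = np.array([1.0 + 0j, 0.0 + 0j])             # start in state 1
dt = 0.01
for i in range(2000):                          # 4th-order Runge-Kutta steps
    t = i * dt
    k1 = cdot(t, c)
    k2 = cdot(t + 0.5*dt, c + 0.5*dt*k1)
    k3 = cdot(t + 0.5*dt, c + 0.5*dt*k2)
    k4 = cdot(t + dt, c + dt*k3)
    c = c + (dt/6.0) * (k1 + 2*k2 + 2*k3 + k4)

print("populations |c_n|^2 :", np.abs(c)**2)   # should sum to ~1

In a surface-hopping simulation, populations of this kind would then feed the switching probabilities between the PESs.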
4 Electronic part (Quantum chemistry)
4.1 Hartree-Fock Method
Electronic Hamiltonian

H_e = T_e + V_eN + V_ee + V_NN
    = ∑_i [ −(ħ²/2m) ∇_i² − ∑_I Z_I/R_Ii ] + ∑_{i<j} 1/r_ij + ∑_{I<J} Z_I Z_J / R_IJ
    = ∑_i H_core(i) + V_ee + V_NN        (7)
V_NN is just a constant in the electronic problem (adiabatic approximation), so we hereafter consider h_e ≡ H_e − V_NN.

Two-electron molecules (e.g., H_2) :

h_e = H_core(1) + H_core(2) + 1/r_12
Slater determinant :

Φ(1, 2) = (1/√2) det[ χ_1(1) χ_1(2) ; χ_2(1) χ_2(2) ]
        = (1/√2) ( χ_1(1) χ_2(2) − χ_2(1) χ_1(2) )  ≡  (1/√2) |χ_1 χ_2|

Spin-orbitals :
χ_1(1) = φ_1(1) α(1) ≡ φ_1(1)
χ_2(1) = φ_1(1) β(1) ≡ φ̄_1(1)
( φ : space orbital,  α, β : spin functions )
Diagram : orbital diagrams of the configurations φ_1 φ̄_1 , φ_1 φ̄_2 , φ_1 φ_2
Slater determinants satisfy the Pauli principle of many-electron systems:
Anti-symmetry :  Φ(2, 1) = −Φ(1, 2)
Exclusion principle :  Φ(1, 2) = 0  if  χ_1 = χ_2  (space and spin)
Energy :  E = ⟨Φ(1, 2)|h_e|Φ(1, 2)⟩ = ∫∫ Φ*(1, 2) h_e Φ(1, 2) dτ_1 dτ_2

= (1/2) ( ⟨φ_1(1) φ̄_1(2)|h_e|φ_1(1) φ̄_1(2)⟩ − ⟨φ_1 φ̄_1|h_e|φ̄_1 φ_1⟩
          − ⟨φ̄_1 φ_1|h_e|φ_1 φ̄_1⟩ + ⟨φ̄_1 φ_1|h_e|φ̄_1 φ_1⟩ )
The 1st term = ⟨φ_1(1) φ̄_1(2)| H_core(1) + H_core(2) + 1/r_12 |φ_1(1) φ̄_1(2)⟩
= ⟨φ_1(1)|H_core(1)|φ_1(1)⟩ ⟨φ̄_1(2)|φ̄_1(2)⟩ + ⟨φ_1(1)|φ_1(1)⟩ ⟨φ̄_1(2)|H_core(2)|φ̄_1(2)⟩
  + ⟨φ_1 φ̄_1| 1/r_12 |φ_1 φ̄_1⟩
≡ 2 H^core_11 + J_11

The 4th term gives the same result :  2 H^core_11 + J_11
The 2nd term = ⟨φ_1|H_core|φ̄_1⟩ ⟨φ̄_1|φ_1⟩ + ⟨φ_1|φ̄_1⟩ ⟨φ̄_1|H_core|φ_1⟩ + ⟨φ_1 φ̄_1| 1/r_12 |φ̄_1 φ_1⟩
= 0  because ⟨α|β⟩ = 0

The 3rd term is also zero.

Thus,  E = 2 H^core_11 + J_11
Let us next consider an open-shell singlet configuration |φ_1 φ̄_2| ( α electron in φ_1 and β in φ_2 ).
In the similar way, we get

E = H^core_11 + H^core_22 + J_12

where  J_12 ≡ ⟨φ_1 φ_2| 1/r_12 |φ_1 φ_2⟩ ≡ ⟨12|12⟩ = J_21 = ⟨φ_2 φ_1| 1/r_12 |φ_2 φ_1⟩ ≡ ⟨21|21⟩
Triplet configuration |φ_1 φ_2| ( α electrons in both φ_1 and φ_2 ) gives

E = H^core_11 + H^core_22 + J_12 − K_12

where  K_12 ≡ ⟨12|21⟩ = K_21

J_ij and K_ij are called Coulomb and Exchange integrals, respectively.
These examples suggest the following rules to write down energies of electron configurations:
Each electron in orbital φ_i contributes H^core_ii
Each electron pair in orbitals φ_i and φ_j contributes J_ij (regardless of the spin)
Each electron pair of the same spin in orbitals φ_i and φ_j contributes −K_ij
Exercise : Write down the energy expression for the following electron configurations.
1. an (α, β) electron pair in φ_1 and an electron in φ_2 (open-shell doublet)
2. paired (α and β) electrons in both φ_1 and φ_2 (closed-shell)
Answer :
1. E = 2 H^core_11 + H^core_22 + J_11 + 2 J_12 − K_12
2. E = 2 H^core_11 + 2 H^core_22 + J_11 + J_22 + 4 J_12 − 2 K_12
In general, for closed-shell N-electron systems,

E = 2 ∑_{i=1}^{N/2} H^core_ii + ∑_{i=1}^{N/2} J_ii + ∑_{i=1}^{N/2} ∑_{j=i+1}^{N/2} ( 4 J_ij − 2 K_ij )

This can be simplified by noting J_ii = K_ii :

E = 2 ∑_{i=1}^{N/2} H^core_ii + ∑_{i=1}^{N/2} ∑_{j=1}^{N/2} ( 2 J_ij − K_ij )
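As a quick numerical check of this closed-shell expression (not part of the original notes), the sketch below evaluates E from hypothetical integral matrices over the N/2 occupied MOs; the numbers are arbitrary placeholders and only illustrate the bookkeeping:

import numpy as np

def closed_shell_energy(h, J, K):
    """E = 2*sum_i H^core_ii + sum_ij (2 J_ij - K_ij) over occupied MOs."""
    return 2.0 * np.trace(h) + np.sum(2.0 * J - K)

# arbitrary 2-orbital example (placeholder numbers, not real integrals)
h = np.diag([-1.25, -0.70])
J = np.array([[0.67, 0.56],
              [0.56, 0.62]])
K = np.array([[0.67, 0.18],
              [0.18, 0.62]])                   # note J_ii = K_ii
print(closed_shell_energy(h, J, K))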
Hartree-Fock Method
= find variationally the best MOs φ_i under the orthonormality condition ⟨φ_i|φ_j⟩ = δ_ij

→ minimize  L ≡ E + ∑_{i,j} ε_ij ( δ_ij − ⟨φ_i|φ_j⟩ )  w.r.t. the variation of the MOs  φ_i → φ_i + δφ_i
  ( where ε_ij are the Lagrange multipliers )

Hartree-Fock equation :

F̂ φ_i = ∑_j ε_ij φ_j
where the Fock operator is defined as

F̂ ≡ H_core + ∑_{j=1}^{N/2} ( 2 Ĵ_j − K̂_j )   [for closed-shell systems]
The Coulomb and Exchange operators Ĵ_j and K̂_j are defined by

Ĵ_j(1) φ_i(1) = ⟨φ_j(2)| 1/r_12 |φ_j(2)⟩ φ_i(1)
K̂_j(1) φ_i(1) = ⟨φ_j(2)| 1/r_12 |φ_i(2)⟩ φ_j(1)
Comment : The bottleneck for the first learner would be the abstractness of the functional variation of the MOs φ_i → φ_i + δφ_i. In practical calculations, however, we employ the LCAO-MO expansion of the MOs, φ_i = ∑_μ c_μi χ_μ, where the χ_μ are atomic orbitals (AO), and optimize the coefficients c_μi. This reduces the problem to a (non-linear) matrix eigenvalue problem which is much more handy for computer implementations. (Hartree-Fock-Roothaan-Hall method)
Summary of Hartree-Fock method
Assumes a single Slater determinant for the electronic wavefunction, which satisfies the Pauli principle of many-electron systems,
and variationally optimizes the MOs by minimizing the exact energy expression for the Slater-determinant wavefunction under the orthonormality condition of the MOs.
The electronic energy is expressed by the one-electron integrals H^core_ii and the Coulomb and Exchange two-electron integrals J_ij and K_ij.
4.2 Electron Correlation Problem
4.2.1 Static and Dynamic Correlations
4.2.2 Various Methods
CI, MPn, MCSCF, MRCI, CC, MRMP, etc.
4.3 Density Functional Theory
4.3.1 Kohn-Sham theory
4.3.2 Hybrid Hartree-Fock / DFT
4.4 Other Methods
Valence-Bond Method
non-orthogonal orbitals
chemically intuitive resonance structures
Semi-empirical MO Methods
Approximations: neglect of differential overlaps, empirical parameters
CNDO, INDO, MINDO, PM3, AM1, etc. etc.
5 Potential Energy Surfaces
Functional fitting
Choice of the functional form (physically adequate asymptotic behavior, symmetry
etc.)
Empirical parametrization using e.g. spectroscopic data
Ab initio parametrization using quantum chemical calculations
Dimensionality problem : when the system has f degrees of freedom (f = 3N − 6 for non-linear N-atom molecules), and if M data points are needed per degree of freedom for the functional fitting, the total number of data points required, M^f, may be prohibitively huge for realistic systems. (e.g., M ≈ 10 and N = 6 requires 10^12 points.)

On-the-fly evaluation of the potential W_n(R) and gradient ∂W_n/∂R
→ Ab initio MD, Car-Parrinello MD
Still computationally expensive, but becoming feasible along with the increase of the
computer power
5.1 Empirical Force-Field
Standard FF for MD simulations of biomolecules
W = ∑_bonds K_R (R − R_e)² + ∑_angles K_θ (θ − θ_e)² + ∑_torsions V_n [1 + cos(nφ − γ)]
  + ∑_{atoms, i<j} Q_i Q_j / r_ij + ∑_{atoms, i<j} 4 ε_ij [ (σ_ij/r_ij)^12 − (σ_ij/r_ij)^6 ]
First three terms = bonding potential
Last two terms = non-bonding interaction (Electrostatic + Short-range repulsion)
Inadequate for bond-breaking and -forming processes
Lack of electronic polarization effects, charge-transfer interaction
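The non-bonded part of this energy maps almost directly onto code. The following minimal Python sketch (not from the notes; all parameter values are arbitrary placeholders rather than real force-field parameters) evaluates the electrostatic plus Lennard-Jones 12-6 contribution for a single atom pair:

def nonbonded_pair(r, Qi, Qj, eps, sigma):
    """Electrostatic + Lennard-Jones 12-6 energy for one atom pair (model units)."""
    coulomb = Qi * Qj / r                     # Q_i Q_j / r_ij
    sr6 = (sigma / r) ** 6
    lj = 4.0 * eps * (sr6**2 - sr6)           # 4 eps [ (sigma/r)^12 - (sigma/r)^6 ]
    return coulomb + lj

# arbitrary placeholder parameters for illustration only
print(nonbonded_pair(r=3.5, Qi=-0.8, Qj=0.4, eps=0.15, sigma=3.2))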
5.2 Hybrid Methods ∗

5.2.0 Classical Mechanics

M_I R̈_I(t) = −(∂/∂R_I) W_e^model(R(t))
5.2.1 TDSCF MD
On-the-fly evaluation of the local potential  W(R) = ⟨Φ_0(r; R)|H_e(r; R)|Φ_0(r; R)⟩_e
Time-dependent propagation of the (ground state) electronic wavefunction Φ_0(r; R) :

M_I R̈_I(t) = −(∂/∂R_I) ⟨Φ_0|H_e|Φ_0⟩
iħ ∂Φ_0/∂t = H_e Φ_0

A simple model of Φ_0 is often employed (rather than carrying out quant. chem. calculations), e.g., a basis-function expansion  Φ_0(t) = ∑_i c_i(t) F_i(r; R).
5.2.2 Born-Oppenheimer MD
M_I R̈_I(t) = −(∂/∂R_I) min ⟨Φ_0|H_e|Φ_0⟩
H_e Φ_0 = E_0 Φ_0

Optimize the electronic wavefunction Φ_0 at each nuclear configuration R (rather than propagating it as in TDSCF MD).
5.2.3 Hartree-Fock BO MD
If we employ the Hartree-Fock wavefunction  Φ_0^HF = det{φ_i},  where φ_i = HF orbitals (1-electron, orthonormal),

min_{Φ_0} ⟨Φ_0|H_e|Φ_0⟩  →  min_{ {φ_i} } ⟨Φ_0^HF|H_e|Φ_0^HF⟩   with  ⟨φ_i|φ_j⟩ = δ_ij

i.e., minimization within the φ_i-space under the constraint ⟨φ_i|φ_j⟩ = δ_ij.

HF Lagrangian :  L_e^HF = ⟨Φ_0^HF|H_e|Φ_0^HF⟩ − ∑_{i,j} ε_ij ( ⟨φ_i|φ_j⟩ − δ_ij ) ,   ε_ij = Lagrange multipliers

Variational (stationary) condition :  δL_e^HF/δφ_i = δL_e^HF/δφ_i* = 0

→  F̂ φ_i = ∑_j ε_ij φ_j   (HF equation)
Fock operator :  F̂ ≡ H_core + ∑_{j=1}^{N/2} ( 2 Ĵ_j − K̂_j )   [for closed-shell systems]

canonical (diagonal) form :  F̂ φ_i = ε_i φ_i   ( ε_i = orbital energy )

may also use the Kohn-Sham (DFT) operator F̂_KS and KS orbitals

HF BO MD :

M_I R̈_I(t) = −(∂/∂R_I) ⟨Φ_0^HF|H_e|Φ_0^HF⟩   ( Φ_0^HF = det{φ_i} )
0 = −F̂ φ_i + ∑_j ε_ij φ_j
This set of equations can be derived from an Extended Lagrangian :

L_BO = ∑_I (1/2) M_I Ṙ_I² − ⟨Φ_0^HF|H_e|Φ_0^HF⟩ + ∑_{i,j} ε_ij ( ⟨φ_i|φ_j⟩ − δ_ij )

by assuming that the Euler-Lagrange equation of classical mechanics applies for both the nuclear and the electronic (orbital) degrees of freedom

Euler-Lagrange eq :  (d/dt)(∂L_BO/∂q̇) − ∂L_BO/∂q = 0   for q = R_I, φ_i, φ_i*

(Note : functional derivatives for φ_i and φ_i*)
5.2.4 Car-Parrinello MD
Introduce : fictitious mass and kinetic energy for the electronic (orbital) degrees of freedom
Extended Lagrangian :

L_CP = ∑_I (1/2) M_I Ṙ_I² + ∑_i (1/2) μ ⟨φ̇_i|φ̇_i⟩ − ⟨Φ_0^HF|H_e|Φ_0^HF⟩ + constraints

e.g., constraints = MO orthonormality = ∑_{i,j} ε_ij ( ⟨φ_i|φ_j⟩ − δ_ij )
Euler-Lagrange eq →

M_I R̈_I(t) = −(∂/∂R_I) ⟨Φ_0^HF|H_e|Φ_0^HF⟩ + (∂/∂R_I) constraints
μ φ̈_i(t) = −(δ/δφ_i*) ⟨Φ_0^HF|H_e|Φ_0^HF⟩ + (δ/δφ_i*) constraints
Car-Parrinello HF MD

M_I R̈_I(t) = −(∂/∂R_I) ⟨Φ_0^HF|H_e|Φ_0^HF⟩   ( Φ_0^HF = det{φ_i} )
μ φ̈_i(t) = −F̂ φ_i + ∑_j ε_ij φ_j   ( may also use the Kohn-Sham Fock operator F̂_KS )
Electronic (orbital) degrees of freedom (El-DoF) are treated as dynamical variables
No (strict) minimization in the MO φ_i-space
Deviates from BO-MD due to thermal fluctuations of the El-DoF
The dynamics of the El-DoF must be kept cool (e.g. using constraints)

Diagram : CP-MD trajectory in the coordinate and orbital space
6 Molecular Dynamics Simulation
6.1 Summary of Classical Mechanics
Newtonian EOM
m ẍ = F = −∂V(x)/∂x
Principle of Least Action
Action :  I ≡ ∫_{t_1}^{t_2} L(x, ẋ) dt

The classical trajectory x(t_1) → x(t_2) minimizes the action I against small variations δx(t) (with fixed ends δx(t_1) = δx(t_2) = 0).
δI = ∫_{t_1}^{t_2} dt δL = ∫_{t_1}^{t_2} dt [ (∂L/∂x) δx + (∂L/∂ẋ) δẋ ]
   = ∫_{t_1}^{t_2} dt [ ∂L/∂x − (d/dt)(∂L/∂ẋ) ] δx + [ (∂L/∂ẋ) δx ]_{t_1}^{t_2}

Stationary condition δI = 0 for arbitrary δx(t) →

Euler-Lagrange eq :  ∂L/∂x − (d/dt)(∂L/∂ẋ) = 0

This is easily seen to give the Newtonian EOM for  L = T − V = (m/2) ẋ² − V(x).
Hamiltonian EOM : Momentum and Hamiltonian are defined by

p ≡ ∂L/∂ẋ   and   H ≡ p ẋ − L

For Cartesian coordinates, we get p = m ẋ and H = p²/2m + V(x), and it is easy to see that the following Hamiltonian EOM is equivalent to the Newtonian EOM:

ẋ = ∂H/∂p   and   ṗ = −∂H/∂x

Lagrange and Hamilton theories are more flexible and convenient when dealing with general coordinate systems other than the Cartesian.
6.2 Integration
6.2.1 Verlet algorithm
r(t + Δt) = r(t) + ṙ(t) Δt + (1/2) r̈(t) Δt² + ···
          = r(t) + v(t) Δt + (1/2) a(t) Δt² + ···
r(t − Δt) = r(t) − v(t) Δt + (1/2) a(t) Δt² − ···

→  r(t + Δt) = 2 r(t) − r(t − Δt) + a(t) Δt²

error is of order O(Δt⁴)
time-reversible
requires storage of the previous position r(t − Δt)
the small term a(t) Δt² is added to a difference of large terms 2r(t) − r(t − Δt)
→ numerical round-off imprecision
velocities are unnecessary to evolve the trajectory, but are needed when calculating the kinetic energy:

v(t) = [ r(t + Δt) − r(t − Δt) ] / 2Δt

error is of order O(Δt²)
a small difference is divided by the small timestep → numerical imprecision
6.2.2 Leap-frog algorithm
Propagate position and mid-step velocity:

r(t + Δt) = r(t) + v(t + Δt/2) Δt
v(t + Δt/2) = v(t − Δt/2) + a(t) Δt

mathematically equivalent to the Verlet method (easily verified by eliminating the velocities)
velocities (& kinetic energy) at time t :

v(t) = [ v(t + Δt/2) + v(t − Δt/2) ] / 2

advantages over the original Verlet :
less problematic numerical round-off, since no difference of large terms is taken
explicit appearance of velocities
criticism : the treatment of velocities is still not very satisfactory
6.2.3 Velocity-Verlet algorithm
r(t + Δt) = r(t) + v(t) Δt + (1/2) a(t) Δt²
v(t + Δt/2) = v(t) + a(t) Δt/2
v(t + Δt) = v(t + Δt/2) + a(t + Δt) Δt/2
mathematically equivalent to the previous two
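A hedged, runnable transcription of these three update equations for a 1-D harmonic oscillator (V = kx²/2; units and parameters are arbitrary choices for the example):

k, m, dt = 1.0, 1.0, 0.01

def accel(x):
    return -k * x / m                          # a = F/m = -(dV/dx)/m

x, v = 1.0, 0.0
a = accel(x)
for _ in range(1000):
    x = x + v*dt + 0.5*a*dt**2                 # r(t+dt)
    v_half = v + 0.5*a*dt                      # v(t+dt/2)
    a = accel(x)                               # a(t+dt) from the updated position
    v = v_half + 0.5*a*dt                      # v(t+dt)

print(x, v, 0.5*m*v**2 + 0.5*k*x**2)           # total energy should stay close to 0.5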
6.3 Treatment of Molecules
Time scales :
bond stretch < bend < torsion < collective motions < rotation < translation
Fixing bond lengths (to save CPU)
1. treat as a rigid body (small molecules such as H_2O)
2. transform the EOM to internal coordinates (e.g., GF-matrix method)
3. introduce bond constraint conditions to the Lagrangian
→ constrained EOM (SHAKE and RATTLE methods)
Multiple time step method(s)
small time step for fast motions
frequent update of short-range interactions
6.3.1 Multiple time step method (r-RESPA) ∗

r-RESPA = reversible REference System Propagator Algorithm
Classical Liouville operator :

iL ≡ { ···, H } = ∑_{i=1}^{f} [ (∂H/∂p_i)(∂/∂q_i) − (∂H/∂q_i)(∂/∂p_i) ]

Poisson bracket :  {A, B} = ∑_{i=1}^{f} [ (∂A/∂q_i)(∂B/∂p_i) − (∂A/∂p_i)(∂B/∂q_i) ]

Propagation of the phase-space point Γ = { q_i(t), p_i(t) } :

Γ(t + Δt) = e^{iLΔt} Γ(t)
In Cartesian coordinates with H = ∑_i p_i²/2m_i + V(x), p_i = m_i v_i, and F_i = −∂V/∂x_i (force),

iL = ∑_i [ v_i (∂/∂x_i) + (F_i/m_i)(∂/∂v_i) ]

Using e^{c ∂/∂y} f(y) = f(y + c), we find :

e^{Δt v ∂/∂x} propagates x to x + vΔt
e^{Δt (F/m) ∂/∂v} propagates v to v + (F/m)Δt
Trotter decomposition

e^{i(L_1 + L_2)Δt} = e^{iL_1 Δt/2} e^{iL_2 Δt} e^{iL_1 Δt/2} + O(Δt³)

If we choose  iL_1 = (F/m)(∂/∂v),  iL_2 = v (∂/∂x) :

Γ(t + Δt) = e^{(Δt/2)(F/m)∂/∂v} e^{Δt v ∂/∂x} e^{(Δt/2)(F/m)∂/∂v} Γ(t) + O(Δt³)
From right to left :
e^{(Δt/2)(F/m)∂/∂v} propagates v to v + (F/m)(Δt/2)
e^{Δt v ∂/∂x} propagates x to x + vΔt
e^{(Δt/2)(F/m)∂/∂v} propagates v to v + (F/m)(Δt/2)   (with the updated force F(x))

This is exactly the Velocity-Verlet algorithm.
Decomposition of forces (fast/slow, tight/soft, short/long-range etc.) :

F = F_fast + F_slow

Accordingly,  iL_1 = (F_slow/m)(∂/∂v),  iL_2 = v(∂/∂x) + (F_fast/m)(∂/∂v).

Then, the propagator will be

e^{(Δt/2)(F_slow/m)∂/∂v} e^{Δt [ v ∂/∂x + (F_fast/m)∂/∂v ]} e^{(Δt/2)(F_slow/m)∂/∂v}
We further decompose the propagator in the middle into n micro-steps with δt ≡ Δt/n :

e^{(Δt/2)(F_slow/m)∂/∂v} [ e^{(δt/2)(F_fast/m)∂/∂v} e^{δt v ∂/∂x} e^{(δt/2)(F_fast/m)∂/∂v} ]^n e^{(Δt/2)(F_slow/m)∂/∂v}
Implementation : It might be easier to see the code :

δt = Δt/n ! micro-timestep (for fast motions)
do istep = 1, nstep ! overall simulation steps
  v = v + (Δt/2) (F_slow/m)
  do j = 1, n ! inner loop for fast motions
    v = v + (δt/2) (F_fast/m)
    x = x + δt v
    call calculate_force(F_fast)
    v = v + (δt/2) (F_fast/m)
  end do
  call calculate_force(F_slow)
  v = v + (Δt/2) (F_slow/m)
end do
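A hedged Python transcription of the same scheme for a single particle feeling a stiff "fast" spring and a soft "slow" spring (both hypothetical model forces; parameters are arbitrary):

m, k_fast, k_slow = 1.0, 100.0, 1.0
Dt, n = 0.05, 10                               # outer time step, number of micro-steps
dt = Dt / n                                    # micro-timestep for the fast motion

f_fast = lambda x: -k_fast * x
f_slow = lambda x: -k_slow * x

x, v = 1.0, 0.0
for istep in range(1000):                      # overall simulation steps
    v += 0.5 * Dt * f_slow(x) / m              # slow-force half kick
    for _ in range(n):                         # inner loop for fast motions
        v += 0.5 * dt * f_fast(x) / m
        x += dt * v
        v += 0.5 * dt * f_fast(x) / m
    v += 0.5 * Dt * f_slow(x) / m              # slow-force half kick with updated x

print(x, v)

The outer loop is only entered once per expensive (slow-force) evaluation, which is the point of the method.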
6.4 Constant Temperature and Pressure Methods
6.4.1 Common Statistical Ensembles
constant NVE (microcanonical)
constant NVT (canonical)
constant NPT (isothermal-isobaric)
constant μVT (grand canonical)   [ μ = chemical potential ]

Straightforward / standard use of :
MD → microcanonical (energy conservation of classical mechanics)
MC → canonical (Metropolis algorithm)
Statistical average (constant temperature)

⟨A(q, p)⟩ = (1/Q) ∫∫ dp dq A(q, p) e^{−H(q,p)/k_B T}

where Q is the partition function :  Q ≡ ∫∫ dp dq e^{−H(q,p)/k_B T}
6.4.2 Temperature and Pressure from MD
Equipartition theorem :

⟨ p_{iα}² / 2m_i ⟩ = k_B T / 2   ( α = x, y, z )

i.e. an average kinetic energy of k_B T/2 for each degree of freedom.

The kinetic temperature is thus computed in classical MD by

T = (1/3Nk_B) ⟨ ∑_i p_i²/m_i ⟩

Pressure from MD :  PV = N k_B T + ⟨W⟩

Virial :  W ≡ (1/3) ∑_{i=1}^{N} r_i · f_i^int   ( f_i^int = −∂V/∂r_i )
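A hedged sketch of these two estimators for a single MD configuration, transcribing the formulas directly; the momentum, position and internal-force arrays are hypothetical inputs, and units with k_B = 1 are assumed:

import numpy as np

kB = 1.0                                       # assume units with k_B = 1

def kinetic_temperature(p, m):
    """T = (1/3Nk_B) sum_i p_i^2/m_i ; p: (N,3) momenta, m: (N,) masses."""
    N = len(m)
    return np.sum(p**2 / m[:, None]) / (3.0 * N * kB)

def pressure(p, m, r, f_int, volume):
    """PV = N k_B T + W, with virial W = (1/3) sum_i r_i . f_i^int."""
    N = len(m)
    W = np.sum(r * f_int) / 3.0
    return (N * kB * kinetic_temperature(p, m) + W) / volume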
6.4.3 Constant NVT MD ∗

Extended system method (Nose-Hoover)

Couple with an external heat bath → friction parameter ζ (fictitious mass Q)

ṙ_i = p_i / m_i ,    ṗ_i = f_i − (p_ζ/Q) p_i
ζ̇ = p_ζ / Q ,    ṗ_ζ = ∑_{i=1}^{N} p_i²/m_i − N_f k_B T

( N_f = number of (unconstrained) degrees of freedom = 3N − N_c )
proven to generate canonical ensemble
Conserved quantity (for coding checks) :

E = ∑_i p_i²/2m_i + V(r) + p_ζ²/2Q + N_f k_B T ζ
6.4.4 Constant Pressure MD ∗

Constant pressure P in simulation → Volume control (V as a dynamical variable)

coordinate scaling :  r_i = V^{1/3} s_i

extended Lagrangian :  ( Q = fictitious mass for V )

L = (1/2) ∑_i m_i v_i² − V(r) + (1/2) Q V̇² − PV
Euler-Lagrange eq :  (d/dt)(∂L/∂q̇) − ∂L/∂q = 0   for q = s_i, V

s̈_i = (1/V^{1/3}) f_i/m_i − (2/3V) ṡ_i V̇ ,    V̈ = (1/Q) ( P_int − P )

P_int ≡ (1/3V) ( ∑_i m_i v_i² + W )   ( W = virial, P_int = instantaneous internal pressure )
7 Data Analysis
7.1 Static Properties
Ergodic hypothesis : Statistical ensemble average = average over long time
⟨A⟩ = (1/τ_run) ∫_0^{τ_run} A(t) dt = (1/N_run) ∑_{k=1}^{N_run} A_k

where τ_run = N_run Δt ( N_run steps, A_k = A(kΔt) )

RMS deviation = √⟨δA²⟩   ( δA ≡ A − ⟨A⟩ )

Practical
Method 1: save the whole sequence of A(t) → analyse after the simulation
Method 2: on-the-fly summing up.
For the RMS, we need to sum up δA = A − ⟨A⟩. However, we don't know ⟨A⟩ until the end of the simulation. The following conversion is then useful:

⟨δA²⟩ = ⟨(A − ⟨A⟩)²⟩ = ··· = ⟨A²⟩ − ⟨A⟩²
sum = 0 ; sum2 = 0
do istep = 1, nstep ! overall simulation steps
  ...
  call calculate_quantities(A)
  sum = sum + A
  sum2 = sum2 + A**2
  ...
end do
average = sum/nstep
variance = sum2/nstep - average**2
RMS = sqrt(variance)
7.2 Radial distribution function
Example : Ion pair in water
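As a hedged illustration (not from the notes), g(r) for one configuration of N identical particles in a cubic periodic box can be estimated by a pair-distance histogram normalized by the ideal-gas shell population; pos is a hypothetical (N, 3) coordinate array:

import numpy as np

def radial_distribution(pos, box, nbins=100, rmax=None):
    N = len(pos)
    rho = N / box**3                       # number density
    rmax = rmax or box / 2.0
    dr = rmax / nbins
    hist = np.zeros(nbins)
    for i in range(N - 1):
        d = pos[i+1:] - pos[i]
        d -= box * np.round(d / box)       # minimum-image convention
        r = np.linalg.norm(d, axis=1)
        hist += np.histogram(r, bins=nbins, range=(0.0, rmax))[0]
    r_mid = (np.arange(nbins) + 0.5) * dr
    shell = 4.0 * np.pi * r_mid**2 * dr    # ideal-gas shell volume
    # each pair counted once -> normalize per N/2 reference particles
    return r_mid, hist / (shell * rho * N / 2.0)

In practice one averages the histogram over many configurations before normalizing.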
7.3 Time correlation function
correlation between points separated by a time interval τ

C_AA(τ) = ⟨A(0) A(τ)⟩ = (1/(t_run − τ)) ∫_0^{t_run − τ} A(t) A(t + τ) dt

C_AA(0) = 1   ( when normalized by ⟨A²⟩ )
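A hedged sketch of a direct (normalized) time-correlation-function estimate from a stored trajectory, e.g. the velocity autocorrelation function; vel is a hypothetical array of shape (nsteps, N, 3) sampled every Δt:

import numpy as np

def autocorrelation(vel, max_lag):
    nsteps = vel.shape[0]
    c = np.zeros(max_lag)
    for lag in range(max_lag):
        # average over time origins and over particles (dot product of 3-vectors)
        c[lag] = np.mean(np.sum(vel[:nsteps-lag] * vel[lag:nsteps], axis=-1))
    return c / c[0]                        # normalize so that C(0) = 1

The same routine applies to any recorded quantity A(t) (dipole moment, energy gap, ...).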
For random motions (e.g. in liquids)
C_AA(t) → 0 as t → +∞ (loss of correlation)

For regular motions (e.g. harmonic oscillators / phonons in perfect solids)
⟨x(0) x(t)⟩ = ⟨x(0)²⟩ cos ωt

For a (slightly) disordered set of oscillators → dephasing

x(t) = ∑_i c_i x_i(t) = ∑_i c_i x_i(0) cos ω_i t
Example : Dielectric response of water
7.3.1 Relaxation Phenomena
Onsager's Regression Hypothesis

[ A(t) − ⟨A⟩ ] / [ A(0) − ⟨A⟩ ] = ⟨δA(t) δA(0)⟩ / ⟨δA²⟩

non-equil. response = equil. correlation :
the decay of an (experimentally) prepared non-equilibrium state equals the decay of spontaneous fluctuations in equilibrium.

Microscopically proven by the Fluctuation-Dissipation theorem in the Linear-Response limit.
7.3.2 Various Applications
Transport properties (Diffusion constant) ← velocity TCF
Microwave & IR spectra ← dipole TCF
Electronic spectra ← transition dipole TCF
Electron / excitation transfer rates ← TCF of the energy gap (Fermi Golden Rule)
Chemical reaction rates ← flux-flux TCF
cf. G. C. Schatz & M. A. Ratner, Quantum Mechanics in Chemistry (Prentice Hall, 1993)

Diffusion constant ← mean-square displacement (Einstein relation)

⟨ |r(t) − r(0)|² ⟩ = 6Dt
Relation to velocity TCF :

r(t) − r(0) = ∫_0^t v(τ) dτ   →   6Dt = ∫_0^t dτ_1 ∫_0^t dτ_2 ⟨ v(τ_1)·v(τ_2) ⟩

∂/∂t of both sides :

6D = 2 ∫_0^t dτ ⟨ v(τ)·v(t) ⟩ = 2 ∫_0^t dτ ⟨ v(0)·v(t − τ) ⟩

( the TCF depends only on the time interval (in stationary equilibrium) )

Changing the integration variable from τ to τ′ ≡ t − τ,

D = (1/3) ∫_0^t dτ′ ⟨ v(0)·v(τ′) ⟩
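A hedged sketch of both routes to D from a stored trajectory (positions are assumed already unwrapped across periodic boundaries; pos and vel are hypothetical arrays of shape (nsteps, N, 3) sampled every dt):

import numpy as np

def D_from_msd(pos, dt, t_index):
    """Einstein relation <|r(t)-r(0)|^2> = 6 D t, evaluated at one time t (single origin)."""
    disp = pos[t_index] - pos[0]
    msd = np.mean(np.sum(disp**2, axis=-1))
    return msd / (6.0 * t_index * dt)

def D_from_vacf(vel, dt):
    """Green-Kubo form: D = (1/3) * integral of <v(0).v(tau)> dtau (simple sum quadrature)."""
    nsteps = vel.shape[0]
    vacf = np.array([np.mean(np.sum(vel[:nsteps-l] * vel[l:], axis=-1))
                     for l in range(nsteps // 2)])
    return dt * np.sum(vacf) / 3.0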
8 Monte Carlo Simulation
Monte Carlo integration ← random sampling

I = ∫_0^1 f(x) dx ≈ (1/N_sample) ∑_{i=1}^{N_sample} f(x_i)

x_i = uniform random numbers in [0, 1]
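A hedged one-off illustration of this estimate, using the arbitrary test function f(x) = x² (exact value 1/3):

import numpy as np

rng = np.random.default_rng(0)
f = lambda x: x**2
n_sample = 100_000
x = rng.random(n_sample)            # uniform random numbers in [0, 1]
print(np.mean(f(x)))                # ~0.333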
Statistical average (constant temperature) ← Phase-space integration

⟨A(q, p)⟩ = (1/Q) ∫∫ dp dq A(q, p) e^{−H(q,p)/k_B T}

where Q is the partition function :  Q ≡ ∫∫ dp dq e^{−H(q,p)/k_B T}

For momentum-independent quantities A(q) ( with H = ∑_i p_i²/2m_i + V(q) )
→ Configuration-space integration

⟨A⟩ = (1/Z) ∫ dq A(q) e^{−V(q)/k_B T}   where   Z = ∫ dq e^{−V(q)/k_B T}
8.1 Standard MC (Metropolis algorithm)
Metropolis MC
generates configurations R in the canonical ensemble
Core algorithm :
do istep = 1, nstep ! overall Monte Carlo steps
  ...
  R_new = R_old + ΔR ! trial move
  call calculate_potential(V(R_new))
  ΔV = V(R_new) − V(R_old)
  ! Metropolis test
  if ( ΔV < 0 ) then ! R_new is more stable
    accept R_new
  else if ( exp(−ΔV/k_B T) > random number ) then ! R_new is less stable but thermally acceptable
    accept R_new
  else
    reject R_new ! stay at R_old
  end if
  ...
end do
Statistical average is calculated by:

⟨A⟩ = (1/N_step) ∑_{i=1}^{N_step} A(R_i)
MC vs MD
direct generation of the canonical ensemble (straightforward MD → microcanonical)
phase-space / configuration-space average (MD analysis → time average assuming the ergodic hypothesis)
no need to evaluate forces
no time evolution
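A hedged, runnable transcription of the Metropolis core loop above for a single coordinate in a 1-D model potential (an arbitrary double well V(x) = x⁴ − 2x², with k_B T = 0.5 in reduced units):

import numpy as np

rng = np.random.default_rng(1)
V = lambda x: x**4 - 2.0 * x**2
kT, step_size, nstep = 0.5, 0.5, 100_000

x_old, V_old = 0.0, V(0.0)
samples = []
for _ in range(nstep):
    x_new = x_old + step_size * (rng.random() - 0.5)   # trial move
    V_new = V(x_new)
    dV = V_new - V_old
    # Metropolis test: always accept downhill, accept uphill with prob e^{-dV/kT}
    if dV < 0.0 or np.exp(-dV / kT) > rng.random():
        x_old, V_old = x_new, V_new
    samples.append(x_old)                               # stay at old x if rejected

print("<x^2> =", np.mean(np.array(samples)**2))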
8.2 Umbrella Sampling Technique
Finite length of MC sampling
→ the system may be trapped in local potential minima.

In order to extend the sampling to high-potential (unstable) configurations, we add a bias (weight / window / umbrella) potential W(q) to the original (unbiased) potential V(q). The statistical average obtained from this biased simulation is

⟨A(q)⟩_w = (1/Q_w) ∫ dq A(q) e^{−β(V(q)+W(q))}   [ Q_w = ∫ dq e^{−β(V(q)+W(q))} ,  β ≡ 1/k_B T ]
The statistics of the original (unbiased) system are reproduced by

⟨A(q)⟩_0 = (1/Q_0) ∫ dq A(q) e^{−βV(q)} × (Q_w/Q_w)
         = (Q_w/Q_0) (1/Q_w) ∫ dq A(q) e^{+βW(q)} e^{−β(V(q)+W(q))}
         = (Q_w/Q_0) ⟨ A(q) e^{+βW(q)} ⟩_w
         = ⟨ A(q) e^{+βW(q)} ⟩_w / ⟨ e^{+βW(q)} ⟩_w
         = ⟨ e^{−βW(q)} ⟩_0 ⟨ A(q) e^{+βW(q)} ⟩_w
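A hedged sketch of this reweighting step applied to data recorded along a biased run; A_samples and W_samples are hypothetical arrays of A(q_i) and the bias W(q_i) at each sampled configuration:

import numpy as np

def unbias(A_samples, W_samples, kT):
    """<A>_0 = <A e^{+W/kT}>_w / <e^{+W/kT}>_w from biased samples."""
    w = np.exp(W_samples / kT)          # e^{+beta W(q)} for each sample
    return np.sum(A_samples * w) / np.sum(w)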
9 Free Energy Surfaces
Remember that the Gibbs free energy is related to the equilibrium constant and thus to the probability distributions of the reactant and product species. For example, for A ⇌ B,

e^{−ΔG/k_B T} = K = [B]/[A] = Prob(B) / Prob(A)

This would suggest the following generalization to more general states of the system:

ΔG ≡ G_2 − G_1 = −k_B T ln [ Prob(State 2) / Prob(State 1) ]
Now, let X be some coordinate(s) of the system. This may be a position coordinate itself or a function of positions. The free energy curves or surfaces along X can be defined and calculated from the probability distribution P(X):

G(X) = −k_B T ln P(X)   or   G(X_2) − G(X_1) = −k_B T ln [ P(X_2) / P(X_1) ]

For example, when P(X) is a Gaussian distribution,

P(X) ∝ e^{−aX²}

then the free energy curve G(X) is a harmonic potential,

G(X) = −k_B T ln e^{−aX²} + C = k_B T a X² + C

where C is just a constant coming from the normalization factor of P(X). It is very straightforward to calculate P(X) from simulations. The umbrella sampling method discussed in the previous section can be employed for high-energy (less probable) regions along X.
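A hedged sketch of this last step, turning samples of X collected during a simulation into a free energy curve via a histogram estimate of P(X); X_samples is a hypothetical 1-D array and kT is in the same energy units as the desired G:

import numpy as np

def free_energy_curve(X_samples, kT, nbins=50):
    P, edges = np.histogram(X_samples, bins=nbins, density=True)
    X_mid = 0.5 * (edges[:-1] + edges[1:])
    mask = P > 0                         # avoid log(0) in empty bins
    G = -kT * np.log(P[mask])
    return X_mid[mask], G - G.min()      # shift so that min(G) = 0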