Wang-Landau Monte Carlo simulation -...
Outline
1. Statistical thermodynamics, ensembles
2. Numerical evaluation of integrals, crude Monte Carlo (MC)
3. MC methods for statistical thermodynamics, Markov chains
4. Variants of MC methods
Statistical thermodynamics
microscopic description
(positions, momenta, interactions)
macroscopic description
(pressure, density, temperature)
Statistical thermodynamics
time averages → Molecular Dynamics methods

$$ B = \lim_{t \to \infty} \frac{1}{t} \int_0^t b\big(r^K(\tau), p^K(\tau)\big)\, \mathrm{d}\tau $$
Statistical thermodynamics
ensemble averages → Monte Carlo methods

$$ B(A_1, \ldots, A_n) = \frac{\int b(r^K, p^K)\, \rho(r^K, p^K; A_1, \ldots, A_n)\, \mathrm{d}\Gamma}{\int \rho(r^K, p^K; A_1, \ldots, A_n)\, \mathrm{d}\Gamma} $$
Statistical thermodynamics
Statistical (Gibbs) ensembles
microstate (m-state): positions and momenta
macrostate (M-state): macroscopic variables $(A_1, \ldots, A_n)$
1 M-state = many m-states → Gibbs ensemble
probability distribution in the system's phase space, $\rho(r^K, p^K; A_1, \ldots, A_n)$
Statistical thermodynamics
Examples of statistical (Gibbs) ensembles
micro-canonical: $N, V, E$
canonical: $N, V, T$
grand-canonical: $\mu, V, T$
isobaric-isothermal: $N, P, T$
etc.
Statistical thermodynamics
Canonical Gibbs ensemble

$$ \rho(r^K, p^K; N, V, T) \propto \exp\left(-\frac{H_N(r^K, p^K; V)}{k_\mathrm{B} T}\right) $$

$$ H_N(r^K, p^K; V) = \sum_{K=1}^{N} \frac{p_K^2}{2 M_K} + W(r^K; V) $$
Statistical thermodynamics as a mathematical problem
$$ \mathrm{d}\Gamma = \frac{1}{N!\, h^{3N}}\, \mathrm{d}^3 r_1\, \mathrm{d}^3 p_1 \cdots \mathrm{d}^3 r_N\, \mathrm{d}^3 p_N $$
Statistical thermodynamics
$$ B(A_1, \ldots, A_n) = B_\mathrm{kin} + \frac{\int b_\mathrm{con}(r^K)\, \rho_\mathrm{con}(r^K; A_1, \ldots, A_n)\, \mathrm{d}r_1 \cdots \mathrm{d}r_N}{\int \rho_\mathrm{con}(r^K; A_1, \ldots, A_n)\, \mathrm{d}r_1 \cdots \mathrm{d}r_N} $$
Statistical thermodynamics as a mathematical problem
Canonical ensemble
$$ \rho_\mathrm{con}(r^K; N, V, T) \propto \exp\left(-\frac{W_N(r^K; V)}{k_\mathrm{B} T}\right) $$
Numerical evaluation of integrals
$$ \int_a^b f(x)\,\mathrm{d}x \approx \frac{b-a}{n} \sum_{j=1}^{n} f\!\left(a + j\,\frac{b-a}{n}\right) = \frac{b-a}{n} \sum_{j=1}^{n} f(x_j) $$

1D – rectangular, trapezoidal, Simpson's, …
Numerical evaluation of integrals
1 1
( ) d ( ) ( )b n n
jj ja
b a b a b af x x f a j f x
n n n
1D – rectangular, trapezoid, Simpsonian, …
1D – crude Monte Carlo

$$ \int_a^b f(x)\,\mathrm{d}x \approx \frac{b-a}{n} \sum_{j=1}^{n} f(x_j) $$

$x_j$ randomly (uniformly) distributed over $[a, b]$
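As an illustrative sketch (Python, not from the original slides), crude MC estimates the integral as the interval length times the average of f over uniformly random points:

```python
import random

def crude_mc(f, a, b, n, seed=0):
    """Crude Monte Carlo: (b - a) times the average of f(x_j)
    over n points x_j drawn uniformly from [a, b]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        total += f(rng.uniform(a, b))
    return (b - a) * total / n

# Example: integral of x^2 over [0, 1] (exact value 1/3)
estimate = crude_mc(lambda x: x * x, 0.0, 1.0, 100_000)
```

The statistical error of the estimate decreases as $1/\sqrt{n}$ regardless of dimension, which is why MC outperforms grid quadrature for high-dimensional configuration integrals.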
Numerical evaluation of integrals
Localized functions

$$ \int_a^b f(x)\, \rho(x)\, \mathrm{d}x $$

Solution: sampling from $\rho(x)$.
How? Markov chains.
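When $\rho(x)$ happens to be directly sampleable, the integral $\int f(x)\,\rho(x)\,\mathrm{d}x$ reduces to a simple average of $f$ over samples from $\rho$; Markov chains extend this idea to densities that cannot be sampled directly. A minimal sketch in Python, using a hypothetical Gaussian $\rho$ (sampled with `random.gauss`, an assumption not in the slides):

```python
import random

def sample_mean(f, sampler, n, seed=0):
    """Estimate integral of f(x) * rho(x) dx as the average of f(x_j),
    with x_j drawn from the (normalized) density rho."""
    rng = random.Random(seed)
    return sum(f(sampler(rng)) for _ in range(n)) / n

# Hypothetical rho: standard normal density; then
# integral of x^2 * rho(x) dx is the variance, i.e. 1
estimate = sample_mean(lambda x: x * x,
                       lambda rng: rng.gauss(0.0, 1.0),
                       100_000)
```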
Markov chains
Simplified example – finite number of microstates
microstates: $s_1, \ldots, s_n$
probability distribution: $\pi_1 = \pi(s_1), \ldots, \pi_n = \pi(s_n)$
equilibrium PD: $\pi_1^{(\mathrm{e})}, \ldots, \pi_n^{(\mathrm{e})}$
Markov chains
Simplified example – finite number of microstates
Markov chain: $s^{(1)} \to s^{(2)} \to \cdots \to s^{(N)} \to \cdots$, where each $s^{(K)}$ is from $\{s_1, \ldots, s_n\}$
transition probabilities: $\pi_{ik}$ (stochastic matrix)

$$ 0 \le \pi_{ik} \le 1, \qquad \sum_{k=1}^{n} \pi_{ik} = 1 $$
Markov chains
Simplified example – finite number of microstates
PD function evolution:

$$ \pi_k^{(N+1)} = \sum_{i=1}^{n} \pi_i^{(N)}\, \pi_{ik}, \qquad \text{i.e.} \quad \pi^{(N+1)} = \pi^{(N)}\, \Pi $$

equilibrium PD function:

$$ \pi_k^{(\mathrm{e})} = \sum_{i=1}^{n} \pi_i^{(\mathrm{e})}\, \pi_{ik}, \qquad \text{i.e.} \quad \pi^{(\mathrm{e})} = \pi^{(\mathrm{e})}\, \Pi $$
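The evolution rule above can be illustrated numerically in a few lines of Python (the 3-state stochastic matrix below is a hypothetical example, not from the slides): iterating $\pi^{(N+1)} = \pi^{(N)}\,\Pi$ drives any initial PD toward the equilibrium PD of the matrix.

```python
def evolve(pi, P, steps):
    """Iterate pi_k <- sum_i pi_i * P[i][k] (i.e. pi <- pi * Pi)."""
    n = len(pi)
    for _ in range(steps):
        pi = [sum(pi[i] * P[i][k] for i in range(n)) for k in range(n)]
    return pi

# A hypothetical 3-state stochastic matrix (each row sums to 1)
P = [[0.5, 0.3, 0.2],
     [0.2, 0.6, 0.2],
     [0.1, 0.3, 0.6]]

pi = evolve([1.0, 0.0, 0.0], P, 200)   # start from a pure state
```

After enough steps, `pi` is (numerically) stationary: applying one more step leaves it unchanged.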
Markov chains
Simplified example – finite number of microstates
Theorem:
Let s(1), s(2), …, s(N), … be an infinite Markov chain of microstates generated using a stochastic matrix $\pi_{ik}$. Then the generated microstates are distributed over the state space of the system according to the equilibrium PD function of that stochastic matrix.
Remarks:
The Markov chain does not represent any physical time evolution.
From the statistical-thermodynamics point of view, only the equilibrium PD function is physically relevant (not, e.g., the stochastic matrix itself).
Markov chains
Construction of an appropriate stochastic matrix
$$ \pi_k^{(\mathrm{e})} = \sum_{j=1}^{n} \pi_j^{(\mathrm{e})}\, \pi_{jk}, \qquad \sum_{k=1}^{n} \pi_{jk} = 1 $$

Mathematically:
$2n$ linear algebraic equations for $n^2$ variables
many (infinitely many) solutions
Detailed balance

$$ \pi_k^{(\mathrm{e})}\, \pi_{kj} = \pi_j^{(\mathrm{e})}\, \pi_{jk} $$
Markov chains
Decomposition of the stochastic matrix
$$ \pi_{ik} = p_{ik}\, a_{ik} $$

$p_{ik}$ … probability of proposing a particular transition
$a_{ik}$ … probability of accepting the proposed transition

Metropolis solution [1]: detailed balance with this decomposition, $\pi_k^{(\mathrm{e})}\, p_{ki}\, a_{ki} = \pi_i^{(\mathrm{e})}\, p_{ik}\, a_{ik}$, is satisfied by

$$ a_{ik} = \min\left(1, \frac{\pi_k^{(\mathrm{e})}\, p_{ki}}{\pi_i^{(\mathrm{e})}\, p_{ik}}\right), \qquad a_{ki} = \min\left(1, \frac{\pi_i^{(\mathrm{e})}\, p_{ik}}{\pi_k^{(\mathrm{e})}\, p_{ki}}\right) $$
[1] N. Metropolis, A. Rosenbluth, M. Rosenbluth, A. Teller, E. Teller, J. Chem. Phys. 21 (1953) 1087.
Notice that the PD function need not be normalized!
Metropolis algorithm
… for evaluating configuration integrals of statistical thermodynamics (canonical ensemble)
$$ B = \frac{\int b_\mathrm{con}(r^K)\, \mathrm{e}^{-W(r^K)/k_\mathrm{B} T}\, \mathrm{d}r_1 \cdots \mathrm{d}r_N}{\int \mathrm{e}^{-W(r^K)/k_\mathrm{B} T}\, \mathrm{d}r_1 \cdots \mathrm{d}r_N} + B_\mathrm{kin} $$
and, consequently,
$$ \frac{\pi_k^{(\mathrm{e})}}{\pi_i^{(\mathrm{e})}} = \frac{\mathrm{e}^{-W(r_k^K)/k_\mathrm{B} T}}{\mathrm{e}^{-W(r_i^K)/k_\mathrm{B} T}} = \mathrm{e}^{-\left(W(r_k^K) - W(r_i^K)\right)/k_\mathrm{B} T} $$
Metropolis algorithm
from the configuration generated in the preceding step, $r^{(i)}$, generate a new configuration $r^{(\mathrm{new})}$ using an algorithm for which the probability of proposing old → new is the same as that of new → old (usually isotropic translation and/or rotation)
if $W^{(\mathrm{new})} \le W^{(i)}$, then $r^{(i+1)} = r^{(\mathrm{new})}$
if $W^{(\mathrm{new})} > W^{(i)}$, then
$r^{(i+1)} = r^{(\mathrm{new})}$ with probability $\mathrm{e}^{-(W^{(\mathrm{new})} - W^{(i)})/k_\mathrm{B} T}$
$r^{(i+1)} = r^{(i)}$ with probability $1 - \mathrm{e}^{-(W^{(\mathrm{new})} - W^{(i)})/k_\mathrm{B} T}$
repeat until “infinity” (convergence)
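A minimal sketch of these steps in Python, for a hypothetical single degree of freedom in a harmonic potential $W(x) = x^2$ (the potential, step size, and units with $k_\mathrm{B}T = 1$ are illustrative assumptions):

```python
import math
import random

def metropolis_chain(W, x0, n_steps, step=0.5, kT=1.0, seed=0):
    """Generate a Markov chain sampling exp(-W(x)/kT) with the
    Metropolis algorithm: symmetric proposal + accept/reject."""
    rng = random.Random(seed)
    x = x0
    chain = []
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)   # symmetric proposal
        dW = W(x_new) - W(x)
        # downhill moves always accepted; uphill with prob e^{-dW/kT}
        if dW <= 0.0 or rng.random() < math.exp(-dW / kT):
            x = x_new
        chain.append(x)   # on rejection, the old state is counted again
    return chain

# W(x) = x^2 with kT = 1 samples a Gaussian with <x^2> = 1/2
chain = metropolis_chain(lambda x: x * x, 0.0, 200_000)
mean_x2 = sum(x * x for x in chain) / len(chain)
```

Note that a rejected move still contributes the old configuration to the averages; dropping rejected steps would bias the sampling.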
Metropolis algorithm
Then

$$ \frac{\int b_\mathrm{con}(r^K)\, \mathrm{e}^{-W(r^K)/k_\mathrm{B} T}\, \mathrm{d}r_1 \cdots \mathrm{d}r_N}{\int \mathrm{e}^{-W(r^K)/k_\mathrm{B} T}\, \mathrm{d}r_1 \cdots \mathrm{d}r_N} = \lim_{n \to \infty} \frac{1}{n} \sum_{i=1}^{n} b_\mathrm{con}\big(r^{K,(i)}\big) $$
Canonical (or NVT or isochoric-isothermal) Monte Carlo
A general sketch of an MC simulation
propose an initial configuration
employ the Metropolis algorithm to generate as many further configurations as possible
collect necessary data along the MC path to evaluate ensemble averages
calculate the averages as simple arithmetic means of the collected data
Literature
M. Lewerenz, Monte Carlo Methods: Overview and Basics, in NIC Series Volume 10: Quantum Simulations of Complex Many-Body Systems: From Theory to Algorithms, NIC Juelich, 2002 (http://www.fz-juelich.de/nic-series/volume10/volume10.html).
M. P. Allen, D. J. Tildesley, Computer Simulation of Liquids, Oxford University Press, Oxford, 1987.
D. P. Landau, K. Binder, A Guide to Monte Carlo Simulations in Statistical Physics, Cambridge University Press, Cambridge, 2000 and 2005.
Appendix
Simulated annealing [1] and basin hopping [2]
MC simulation at a high temperature
equilibrium structure for T = 0 K
[Figure: schematic of paths from an MC simulation at high temperature T to the equilibrium structure at T = 0 K, via simulated annealing, gradient optimization, and basin hopping]
[1] S. Kirkpatrick, C. D. Gelatt, M. P. Vecchi, Science 220 (1983) 671; [2] D. J. Wales, H. A. Scheraga, Science 285 (1999) 1368.
Problem: low temperatures
Simulation at low temperatures – a combination of integration and optimization
Molecular clusters, biomolecules:
complicated energy landscapes
many minima separated by large energy barriers
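The simulated-annealing idea from the appendix can be sketched as a Metropolis walk at a gradually lowered temperature (Python; the double-well potential, cooling schedule, and all parameters below are illustrative assumptions, not from the slides):

```python
import math
import random

def simulated_annealing(W, x0, T0=5.0, T_final=1e-3, cooling=0.95,
                        steps_per_T=200, step=0.5, seed=0):
    """Minimize W by Metropolis sampling while lowering T geometrically."""
    rng = random.Random(seed)
    x, T = x0, T0
    best_x, best_W = x0, W(x0)
    while T > T_final:
        for _ in range(steps_per_T):
            x_new = x + rng.uniform(-step, step)
            dW = W(x_new) - W(x)
            if dW <= 0.0 or rng.random() < math.exp(-dW / T):
                x = x_new
                if W(x) < best_W:
                    best_x, best_W = x, W(x)
        T *= cooling   # cooling schedule
    return best_x, best_W

# Hypothetical tilted double well: global minimum near x = +1,
# local minimum near x = -1, barrier at x = 0
W = lambda x: (x * x - 1.0) ** 2 - 0.2 * x
x_min, W_min = simulated_annealing(W, x0=-1.0)
```

Starting in the wrong (local) well at x = -1, the high-temperature phase lets the walker cross the barrier, and the slow cooling settles it into the global well.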
$$ B(A_1, \ldots, A_n) = B_\mathrm{kin} + \frac{\int b_\mathrm{con}(r^K)\, \rho_\mathrm{con}(r^K; A_1, \ldots, A_n)\, \mathrm{d}r_1 \cdots \mathrm{d}r_N}{\int \rho_\mathrm{con}(r^K; A_1, \ldots, A_n)\, \mathrm{d}r_1 \cdots \mathrm{d}r_N} $$

If $b_\mathrm{con}(r^K) = b\big(W(r^K)\big)$, then the mean value of B can be expressed as a 1D integral in which $\Omega(W)$ is the classical density of states. How to calculate $\Omega(W)$? The Wang-Landau algorithm.
Mean value of variable B
$$ B(A_1, \ldots, A_n) = B_\mathrm{kin} + \frac{\int_{W_\mathrm{min}}^{\infty} b(W)\, \rho_\mathrm{con}(W; A_1, \ldots, A_n)\, \Omega(W)\, \mathrm{d}W}{\int_{W_\mathrm{min}}^{\infty} \rho_\mathrm{con}(W; A_1, \ldots, A_n)\, \Omega(W)\, \mathrm{d}W} $$
$$ \Omega(W) = \lim_{\Delta W \to 0} \frac{1}{\Delta W} \int_{\{(r_1, \ldots, r_N)\,:\; W \,\le\, W(r_1, \ldots, r_N) \,\le\, W + \Delta W\}} \mathrm{d}r_1 \cdots \mathrm{d}r_N $$
Calculation of Ω(W): Wang-Landau algorithm – basic idea
- A completely random walk in configuration space yields an energy histogram $h(W) \sim \Omega(W)$, but low-energy configurations will not be visited.
- Metropolis algorithm: configurations are sampled with $\rho(W(r))$, so the energy histogram is $h(W) \sim \Omega(W)\, \rho(W)$.
- If we use $\rho(W(r)) = 1/\Omega(W(r))$, the energy histogram becomes a constant (flat) function.
Wang-Landau algorithm – calculation of density of states
- Start of the simulation: $\Omega(W)$ is unknown, so we start with a constant $\Omega(W) = 1$ and a random initial configuration $r_1$.
- The following configuration $r_2$, created by a small random change of the preceding configuration $r_1$, is accepted with probability
  $$ \mathrm{acc}(r_1 \to r_2) = \min\left(1, \frac{\Omega\big(W(r_1)\big)}{\Omega\big(W(r_2)\big)}\right) $$
- On each visit of energy $W_i$: $\Omega(W_i) \to \Omega(W_i) \times k_0$, where the modification factor $k_0 > 1$ (e.g., $k_0 = 2$); a large $k$ gives large statistical errors, a small $k$ gives a long simulation.
- After a flat energy histogram $h(W)$ has been reached, the modification factor is reduced, e.g., by the rule $k_{j+1} = \sqrt{k_j}$, the energy histogram is reset (the density of states is not!), and a new iteration is started.
- The computation is finished when $k$ reaches a predetermined value close to 1 (e.g., 1.000001).
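These steps can be sketched for a single degree of freedom (Python; the potential $W(x) = x^2$, the energy binning, and all parameters are illustrative assumptions, not from the slides). Working with $\ln \Omega$ avoids overflow, so multiplying $\Omega$ by $k$ becomes adding $\ln k$, and $k_{j+1} = \sqrt{k_j}$ becomes halving $\ln k$:

```python
import math
import random

def wang_landau(energy, propose, w_min, w_max, n_bins,
                ln_k_final=1e-6, flatness=0.8, x0=0.0, seed=1):
    """Estimate ln Omega(W) on a grid of energy bins with the
    Wang-Landau algorithm (random walk made flat in energy)."""
    random.seed(seed)
    dw = (w_max - w_min) / n_bins
    ln_g = [0.0] * n_bins            # ln Omega(W), starts at Omega = 1
    ln_k = math.log(2.0)             # initial modification factor k0 = 2
    bin_of = lambda w: min(n_bins - 1, max(0, int((w - w_min) / dw)))
    x = x0
    i = bin_of(energy(x))
    while ln_k > ln_k_final:         # stop when k is close enough to 1
        hist = [0] * n_bins
        while True:
            for _ in range(1000):
                x_new = propose(x)
                w_new = energy(x_new)
                j = bin_of(w_new)
                # accept with prob min(1, Omega(W_old)/Omega(W_new))
                if w_min <= w_new <= w_max and \
                   random.random() < math.exp(min(0.0, ln_g[i] - ln_g[j])):
                    x, i = x_new, j
                ln_g[i] += ln_k      # Omega(W_i) *= k (also on rejection)
                hist[i] += 1
            # flatness criterion: smallest bin >= flatness * mean
            if min(hist) >= flatness * sum(hist) / n_bins:
                break
        ln_k /= 2.0                  # k_{j+1} = sqrt(k_j)
    return ln_g

# Toy test: W(x) = x^2 on |x| <= 2, so Omega(W) ~ W^{-1/2}
# and ln Omega should decrease with W
ln_g = wang_landau(lambda x: x * x,
                   lambda x: x + random.uniform(-0.4, 0.4),
                   0.0, 4.0, 8, ln_k_final=1e-4)
```

Only ratios of $\Omega$ enter the acceptance rule, so the returned $\ln \Omega$ is determined up to an additive constant, exactly as in the slides' description.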
Wang-Landau algorithm – the accuracy of the results depends on:
- the final value of the modification factor $k_\mathrm{final}$
- the fineness of the energy binning
- the flatness criterion for the histogram
- the complexity of the simulated system
- details of the implemented algorithm
Wang-Landau algorithm: parallelization
- MPI: each core runs its own walker; all cores share the current density of states and the current energy histogram.