Introduction - Trinity College Dublin



3007 Statistical Thermodynamics – Dr. Marc in het Panhuis

Introduction

What do we mean by Statistical Thermodynamics? And what is the connection between Statistical Mechanics and its macroscopic counterpart, Thermodynamics?

Statistical Mechanics: investigates a system at the microscopic level, i.e. looking at atoms, particles, and so on.

Thermodynamics: investigates a system at the macroscopic level, i.e. from N, V and T we can extract P. Thermodynamics does not give numerical results!!

Statistical Thermodynamics: obtaining the thermodynamic properties of a system as a function of its characteristic variables, using the microscopic properties of the system.


I. Review of Thermodynamics

We will start with a short review of thermodynamics. There should not be any new material in this; it should be regarded as a refresher of what was dealt with during course 2006 Kinetic Theory and Thermodynamics in the Senior Freshman year. A macroscopic system has many degrees of freedom, of which we can only measure a few. Thermodynamics is concerned with the relation between the small number of variables which are sufficient to describe the bulk behaviour of the system in question. For example, to describe a liquid we would require the following thermodynamic variables: pressure P, volume V and temperature T. Steady state, equilibrium, state function, intensive (independent of system size) and extensive (proportional to the size of the system) variables. Thermal, mechanical and chemical equilibrium.


In equilibrium the state variables are not all independent; they are connected by equations of state, such as the ideal gas law:

PV − N k_B T = 0 (1.1)

and the van der Waals equation.

Zeroth Law: if system A is in equilibrium with systems B and C, then B is in equilibrium with C.

First Law: the internal energy E is extensive and conserved. It partitions the change in energy of a system into two pieces:

dE = dQ − dW (1.2)

where dE is the change in internal energy of the system, dQ the amount of heat added to the system, and dW the amount of work done by the system during an infinitesimal process. Spontaneous, adiabatic, isothermal, isobaric, reversible, irreversible processes.
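To make the equations of state concrete, here is a minimal numerical sketch (the van der Waals constants a and b below are illustrative placeholders, not fitted values):

```python
# Boltzmann constant in J/K.
K_B = 1.380649e-23

def pressure_ideal(n_particles, volume, temperature):
    """Ideal gas law (1.1): P V - N k_B T = 0, solved for P."""
    return n_particles * K_B * temperature / volume

def pressure_vdw(n_particles, volume, temperature, a, b):
    """van der Waals equation of state with per-particle constants a, b."""
    return (n_particles * K_B * temperature / (volume - n_particles * b)
            - a * n_particles**2 / volume**2)

# One mole of gas in 1 m^3 at 300 K.
N = 6.02214076e23
print(pressure_ideal(N, 1.0, 300.0))  # ~2494 Pa
```

Setting a = b = 0 in the van der Waals form recovers the ideal gas result, which is a quick sanity check on both functions.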


Second Law: there exists an extensive state function, the entropy S, which is a monotonically increasing function of the energy E.

Entropy: this is a very important concept!!! The entropy of a closed system will never decrease, and will increase whenever possible. Number of microstates in a macrostate (elaborate on this). Examples: poker, box of air, fridge.

The second law also states that for an infinitesimal reversible process at temperature T, the heat given to the system is

dQ_reversible = T dS (1.3)


while for an irreversible process

dQ_irreversible ≤ T dS. (1.4)

Another equivalent statement of the second law was given by Kelvin: there exists no thermodynamic process whose sole effect is to extract a quantity of heat from a system and convert it entirely to work. As a consequence of the above, the most efficient engine operating between two reservoirs at temperatures T1 and T2 is the Carnot engine. The Carnot engine is an idealised engine in which all the steps are reversible.

Third Law: the entropy change associated with any isothermal, reversible process of a condensed system approaches zero as T → 0.

Thermodynamic Potentials:
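The Carnot statement above fixes the maximum efficiency between the two reservoirs at η = 1 − T2/T1; a one-line sketch (temperatures are arbitrary example values):

```python
def carnot_efficiency(t_hot, t_cold):
    """Maximum efficiency of an engine between reservoirs at T1 > T2 (Kelvin)."""
    if t_cold <= 0 or t_hot <= t_cold:
        raise ValueError("require T1 > T2 > 0")
    return 1.0 - t_cold / t_hot

# Example: reservoirs at 500 K and 300 K.
print(carnot_efficiency(500.0, 300.0))  # 0.4
```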


There are many thermodynamic potentials; here we will examine those that are most useful from a statistical point of view.

Helmholtz free energy A:

A = E − TS (1.5)

The quantity A is a state function with differential

dA = dE − T dS − S dT (1.6)

Gibbs free energy G:

G = A + PV (1.7)

Similar to the Helmholtz free energy, we can define a differential of the state function G, and perform a general analysis similar to what was done for A.

Gibbs-Duhem and Maxwell relations: The Gibbs-Duhem equation:

E(S, {X_j}, {N_i}) = TS + Σ_j x_j X_j + Σ_i μ_i N_i (1.8)

(with x_j the intensive field conjugate to the extensive variable X_j, and μ_i the chemical potential of species i)


Maxwell relations: for a PVT system it follows from equation (1.6) that

(∂A/∂T)_{N,V} = −S (1.9)

and similar relations. From the theory of partial differentiation it follows that if F is a single-valued function of independent variables x1, x2, …, xn, then

∂/∂x_i (∂F/∂x_j) = ∂/∂x_j (∂F/∂x_i) (1.10)

Applying this to equation (1.9) gives us the Maxwell relations:

(∂S/∂V)_{T,N} = (∂P/∂T)_{V,N} (1.11)

and similar relations. The same can be done for the Gibbs potential equation (1.7).


Response Functions: A lot can be learned about a macroscopic system through its response to various changes in externally controlled parameters. For example the important response functions for a PVT system are the specific heats at constant volume and pressure,

C_V = (dQ/dT)_V = T (∂S/∂T)_{V,N} (1.12)

C_P = (dQ/dT)_P = T (∂S/∂T)_{P,N} (1.13)

and the coefficient of thermal expansion

α = (1/V) (∂V/∂T)_{P,N} (1.14)

Let us now derive a relationship between these response functions using the Maxwell relations (equation (1.11)) and the chain rule

(∂z/∂x)_y (∂x/∂y)_z (∂y/∂z)_x = −1 (1.15)

which is valid for any three variables obeying an equation of state of the form f(x,y,z) = 0, to obtain

C_P − C_V = −T (∂P/∂V)_T (∂V/∂T)_P² = T V α² / κ_T (1.16)

where κ_T is the isothermal compressibility.
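The response-function relation can be checked numerically for the ideal gas, where V(T, P) = N k_B T / P gives α = 1/T and κ_T = 1/P, and hence C_P − C_V = N k_B. A sketch using central finite differences (parameter values arbitrary):

```python
K_B = 1.380649e-23

def volume(n, t, p):
    """Ideal gas V(T, P) at fixed N."""
    return n * K_B * t / p

def alpha(n, t, p, h=1e-6):
    """Thermal expansion (1/V)(dV/dT)_P by central difference."""
    v = volume(n, t, p)
    return (volume(n, t + h, p) - volume(n, t - h, p)) / (2 * h * v)

def kappa_t(n, t, p, h=1e-3):
    """Isothermal compressibility -(1/V)(dV/dP)_T by central difference."""
    v = volume(n, t, p)
    return -(volume(n, t, p + h) - volume(n, t, p - h)) / (2 * h * v)

N, T, P = 1e22, 300.0, 1e5
cp_minus_cv = T * volume(N, T, P) * alpha(N, T, P)**2 / kappa_t(N, T, P)
print(cp_minus_cv / (N * K_B))  # ≈ 1, i.e. C_P - C_V = N k_B
```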

Conditions for Equilibrium and Stability: let us take two systems in contact with each other, between which heat can flow freely and whose volumes are not separately fixed. This arrangement will evolve until the pressure and temperature of both systems become the same. We can easily obtain this conclusion from the principle of maximum entropy. Assume that the two systems have volumes V1 and V2 and energies E1 and E2, and that the number of particles in each system and the combined energy and total volume are fixed. Then in equilibrium the total entropy

S = S1(E1, V1) + S2(E2, V2) (1.17)

must be a maximum (remember the 2nd Law).

Summary:


II. Statistical Mechanical Ensembles

We will now develop the foundations of equilibrium statistical mechanics. This is in effect where the course really takes off! Three ensembles will be treated: the microcanonical, the canonical and the grand canonical. Should we do the statistical method and ensembles first?

Microcanonical Ensemble

Definition: the microcanonical ensemble is the assembly of all states with fixed total energy E and fixed size, usually specified by the number of molecules N and the volume V. To understand this definition we need to look at some basic concepts and ideas of the statistical method and ensembles. Consider a system of 3N degrees of freedom described by (canonical) variables:


(q^{3N}, p^{3N}) = q1, …, qj, …, q3N, p1, …, pj, …, p3N. This collection of variables (q^{3N}, p^{3N}) is called a point in phase space. Points in phase space characterise completely the microscopic (mechanical) state of a classical system, and flow in this space is determined by the time integration of Newton's equation of motion, F = ma, with the initial phase-space point provided by the initial conditions. Thus, once the initial state is specified, the state at all future times is determined by the time integration of Newton's law. The time integration specifies the time evolution (the trajectory) of this many-body system. We can think of this as a line in phase space. To prepare the system for this trajectory, a small number of variables is controlled. These constraints (take NVE) cause the trajectory (time evolution) to move on a "surface" of phase space. A basic concept in statistical mechanics is that if we wait long enough, the system will eventually


flow through (or arbitrarily close to) all the microscopic states consistent with the constraints imposed to control the system. Let us assume this, and imagine that the system is continuously flowing through phase space as we perform a multitude of M independent measurements on the system. Then the observed value from these measurements for some property G is

G_obs = (1/M) Σ_{a=1}^{M} G_a, (2.1)

where G_a is the value during the a-th measurement, whose time duration is so short that the system can be considered to be in only one microscopic state. This sum can then be partitioned as

G_obs = Σ_ν [ (1/M) × (number of times state ν is observed in the M observations) ] G_ν, (2.2)

where G_ν is the expectation value for G when the system is in state ν.

Remember we assumed that after a long enough time, all states are visited. Thus, the term in square brackets (in equation (2.2)) is the probability, or weight, for finding the system during the course of the measurements in state ν. We can then write

G_obs = Σ_ν P_ν G_ν ≡ ⟨G⟩ (2.3)

where P_ν is the probability of, or fraction of time spent in, state ν. ⟨G⟩ is called an ensemble average.

An ensemble is the assembly of all possible microstates – all states consistent with the constraints with which we characterise the macroscopic state of our system. The above is a clear demonstration of Statistical Thermodynamics! And we are now in a position to start to understand the definition of the microcanonical ensemble. What equation (2.3) tells us is that the time average is the same as the ensemble average.


This is one of the primary assumptions of Statistical Mechanics. Dynamical systems that observe this equality are known as ergodic systems, a name which arises from the view that time averages are performed over a long time and thus eventually visit all microscopic states consistent with the imposed constraints. Give example of non-ergodic system! So, what we are saying is that an observation carried out over a long time is actually the same as the average over many independent observations, where "long time" refers to a duration much longer than the relaxation time of the system. A system is chaotic at the molecular level, which leads to the concept that after some period of time – a relaxation time, t_relax – the system will lose all memory of (i.e. correlation with) its initial conditions. Demonstrate this with example. Therefore, assuming our measuring time is t_measure = M t_relax, the measurement actually corresponds to M independent observations. This gives


G_measure = ⟨G⟩ = (1/M) Σ_{a=1}^{M} G_a (2.4)

where G_measure is the time-average value of G during the period t_measure. We are ready to examine the microcanonical ensemble in more detail.
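The equality of time and ensemble averages can be illustrated with a toy model — a sketch, not part of the lecture material, using an assumed two-state system whose stationary weights are fixed at 3/4 and 1/4:

```python
import random

# Two microstates with G-values; the transition rule below is chosen so the
# stationary (ensemble) weights are P0 = 0.75, P1 = 0.25.
G = [0.0, 1.0]
P_STATIONARY = [0.75, 0.25]

def time_average(steps, seed=1):
    """Follow one long trajectory and average G over the visited states."""
    rng = random.Random(seed)
    state, total = 0, 0.0
    for _ in range(steps):
        if state == 0:
            if rng.random() < 1.0 / 3.0:  # leave state 0 with probability 1/3
                state = 1
        else:
            state = 0  # always return to state 0
        total += G[state]
    return total / steps

ensemble_avg = sum(p * g for p, g in zip(P_STATIONARY, G))  # 0.25
print(ensemble_avg, time_average(200_000))  # nearly equal for long runs
```

The longer the trajectory, the closer the time average gets to the ensemble average — the ergodic assumption in miniature.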


The Microcanonical Ensemble

What follows is more or less a summary of what we have done before. The basic idea of statistical mechanics is that during a measurement every microscopic state or fluctuation that is possible does in fact occur, and observed properties are averages over all the microscopic states. To give some meaning to this idea, we need to know something about the probability (or distribution) of the various microscopic states. The information about the probability can be obtained from an assumption about the behaviour of many-body systems (remember 1 mol contains N_AV molecules!). So, we restate the definition of the microcanonical ensemble: for an isolated system with fixed total energy E and fixed size (specified by N and V), all microscopic states are equally likely at thermodynamic equilibrium.


To say this in other words: the macroscopic equilibrium state corresponds to the most random situation (remember the 2nd Law) – the distribution of microscopic states with the same energy and system size is entirely uniform. Remember what we said about the tame mouse going from tile to tile on the floor, each tile belonging to a microstate with N, V and energy between E and E + ε. This indicates the behaviour of a system flowing through the microscopic states consistent with the constraints we have imposed on the system. Let us look again (we did this already in the previous lecture) at the definition of entropy (according to Boltzmann):

S = k_B ln Ω(N, V, E) (2.5)

Ω(N, V, E) is the number of microscopic states with N, V and energy between E and E + ε. The entropy defined in this way is extensive, since if the total system were composed of two subsystems A and B, with numbers of states Ω_A and Ω_B, the total number would be

Ω_{A+B} = Ω_A Ω_B (2.6)


and

S_{A+B} = k_B ln(Ω_A Ω_B) = S_A + S_B. (2.7)

Now imagine dividing the system with fixed N, V, E into two subsystems and constraining the partitioning in the following manner: system 1: N1, V1, E1; system 2: N2, V2, E2. Any such partitioning is a subset of all allowed states with (N, V, E), and therefore the number of states with this partitioning is less than the total number of microstates. Thus

Ω(N, V, E) > Ω(N, V, E; internal constraint) (2.8)

and therefore

S(N, V, E) > S(N, V, E; internal constraint). (2.9)

This inequality is the second law; let us now look at its statistical meaning. The maximisation of entropy coinciding with the attainment of equilibrium corresponds to the maximisation of disorder or molecular randomness. The greater the microscopic disorder, the larger the entropy. We know that the temperature can be determined from the derivative

(∂S/∂E)_{N,V} = 1/T (2.10)

Using equation (2.5) we can then define

β = 1/(k_B T) = (∂ ln Ω/∂E)_{N,V} (2.11)

Using the thermodynamic condition that temperature is positive (on the Kelvin scale) requires that Ω(N, V, E) be a monotonically increasing function of the energy E. β is known as the inverse temperature. For macroscopic systems encountered in nature, this will always be the case. Think of water and ice as an example. Example:


NVE ensemble in Molecular Dynamics simulation. So far, the microcanonical ensemble.

The Canonical Ensemble

By now it is well known among us that in the microcanonical ensemble the macroscopic variables are N, V, E. As was shown in the first lectures (review of thermodynamics), it is often convenient to employ other variables. We saw already that changes between variables can be made using Legendre transformations; remember the change from the internal energy to the Helmholtz (A) and Gibbs (G) free energies. On a macroscopic level we change the thermodynamic variables, N, V, S → N, V, T, while on a microscopic level this means changing between ensembles. Let us now change to the canonical ensemble:


the assembly of microstates with fixed N and V. The system is kept in equilibrium by being in contact with a heat bath at temperature T (or inverse temperature β). This implies that the energy is allowed to fluctuate!

A canonical ensemble as a subsystem of a microcanonical system.

To start, consider that the energy of the bath E_B is overwhelmingly larger than the energy of the canonical system E_ν, where the label ν indicates the specific state of the system. The bath is so large that its energy levels form a continuum and dΩ/dE is well defined. The energy of the system is allowed to fluctuate, because the system is in contact with the bath and can exchange heat, but

[Figure: a system in state ν with energy E_ν embedded in a heat bath with energy E_B.]


the sum of the energies is a constant, hence

E = E_B + E_ν (2.12)

Assume now that the system is in one specific state ν; then we know that the number of microstates accessible to the system plus bath is Ω(E_B) = Ω(E − E_ν). Then, according to the statistical postulate – the principle of equal weights – we find that the equilibrium probability for observing the system in state ν obeys:

P_ν ∝ Ω(E − E_ν) = exp[ln Ω(E − E_ν)] (2.13)

Since we know that E_ν << E, we are allowed to expand ln Ω(E − E_ν) in the Taylor series

ln Ω(E − E_ν) = ln Ω(E) − E_ν (d ln Ω/dE) + … (2.14)

Here we have expanded ln Ω(E) rather than Ω(E), because the former is a relatively well-behaved function; this follows from Boltzmann's definition of entropy.

Now, keeping only those terms explicitly shown in the expansion, equation (2.14), and using (∂ ln Ω/∂E)_{N,V} = β (show this), we obtain

P_ν ∝ exp(−β E_ν) (2.15)

which is the Boltzmann (or canonical) distribution law. The constant of proportionality is independent of the specific state of the system and is determined by the normalisation requirement

Σ_ν P_ν = 1 (2.16)

This just says that the probabilities sum to unity. Hence, it follows that

P_ν = Q^{−1} exp(−β E_ν) (2.17)

where we can now define the canonical partition function Q according to:

Q(N, V, β) = Σ_ν exp(−β E_ν) (2.18)


The canonical partition function depends on N and V because E_ν depends on these variables. As an example let us calculate the internal energy E(N, V, β) in the canonical ensemble; hence we want the ensemble average ⟨E⟩:

⟨E⟩ = Σ_ν E_ν P_ν = … = −(∂ ln Q/∂β)_{N,V} (2.19)

Work this out in detail! Equation (2.19) suggests that ln Q is a familiar thermodynamic function. But how can we see this? Start by looking at the Legendre transformation of the internal energy, which gave us the Helmholtz free energy A(N, V, T): A = E − TS, and use the differential to show that

(∂(A/T)/∂(1/T))_{N,V} = E = −(∂ ln Q/∂β)_{N,V} (2.20)

This relationship presents us with enough information to calculate all thermodynamic properties in an (N,V,T) ensemble, since we have related the free energy A to the canonical partition function. Equation (2.20) suggests that

A/T = −k_B ln Q (2.21)

Hence A = −k_B T ln Q. Other thermodynamic properties, such as the pressure in the canonical ensemble, are defined by

p = k_B T (∂ ln Q/∂V)_{N,T} (2.22)
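Equation (2.22) in action: for a classical ideal gas ln Q = N ln V + (terms independent of V), so differentiating numerically should return p = N k_B T / V. A sketch (parameter values arbitrary):

```python
import math

K_B = 1.380649e-23

def pressure(volume, n_particles, temperature, h=1e-8):
    """p = k_B T (d ln Q/d V)_{N,T}; here ln Q = N ln V + const(T, N)."""
    dlnq_dv = n_particles * (math.log(volume + h) - math.log(volume - h)) / (2 * h)
    return K_B * temperature * dlnq_dv

N, V, T = 1e20, 1e-3, 300.0
print(pressure(V, N, T))   # from the partition function
print(N * K_B * T / V)     # ideal gas law, for comparison
```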

Exercise: show that the microcanonical and canonical ensembles are in fact equivalent.


Please note that the canonical distribution law, equation (2.17), and the partition function, equation (2.18), are defined for quantum states! Now, let us see if we can prove that the microcanonical and canonical ensembles are in fact equivalent. First of all, what is the question? The main difference between the ensembles is that the energy fluctuates in the canonical ensemble, while it is a constant in the microcanonical. Hence, we know now what the question is: to show that the calculated internal energy is equivalent in both ensembles. Take the same system as we treated in the last lecture. This system is quantal and obeys Schrödinger's equation:


H Ψ_ν = E_ν Ψ_ν (2.23)

Remember, the state at all future times is determined by the time integration of this equation – in classical terms this represents points in phase space. Back to answering our question: what we need to show is that the fluctuations in the internal energy calculated using the canonical ensemble are negligibly small. Hence we calculate the averaged square fluctuations in the canonical ensemble:

⟨(δE)²⟩ = ⟨(E − ⟨E⟩)²⟩ = ⟨E²⟩ − ⟨E⟩²
        = Σ_ν P_ν E_ν² − (Σ_ν P_ν E_ν)²
        = −(∂⟨E⟩/∂β)_{N,V} (2.24)

Using the definition of the heat capacity C_V we obtain

⟨(δE)²⟩ = k_B T² C_V (2.25)


What a result!! It relates the size of the spontaneous energy fluctuations in the canonical ensemble to the rate at which the energy changes with temperature (the heat capacity). This is an example of linear response theory, which will be dealt with later in the course. To return to our question, we now calculate the relative r.m.s. value of the fluctuations in the energy and write:

√⟨(E − ⟨E⟩)²⟩ / ⟨E⟩ = √(k_B T² C_V) / ⟨E⟩ ∝ O(1/√N) (2.26)

The heat capacity is extensive and thus proportional to N; furthermore ⟨E⟩ is also extensive and thus also proportional to N. Hence the relative fluctuations are of order N^{−1/2}. Thus, for large systems the fluctuations are small, and the internal energy calculated in the canonical ensemble will be indistinguishable from that of the microcanonical ensemble. Example: suppose 1 mol of gas, N ~ 10^23; then the relative fluctuations are ~10^{−12}.
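The N^{−1/2} scaling can be made explicit with the two-level system treated below, using its exact canonical formulas ⟨E⟩ = Nε/(1 + e^{βε}) and ⟨(δE)²⟩ = Nε² e^{βε}/(1 + e^{βε})²; ε drops out of the ratio. A sketch:

```python
import math

def relative_fluctuation(n, beta_eps):
    """Relative r.m.s. energy fluctuation of N independent two-level particles.

    Works with eps = 1, since eps cancels from the ratio; the result is
    sqrt(e^{beta eps} / N), i.e. proportional to 1/sqrt(N).
    """
    x = math.exp(beta_eps)
    mean_e = n / (1.0 + x)
    var_e = n * x / (1.0 + x) ** 2
    return math.sqrt(var_e) / mean_e

for n in (10**2, 10**4, 10**6):
    print(n, relative_fluctuation(n, 1.0))  # falls off as 1/sqrt(N)
```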


Another simple example: consider a system of N distinguishable, independent particles. Each particle can exist in one of two states, separated by an energy ε. The state ν can be specified according to ν = (n1, n2, …, nj, …, nN), with nj = 0 or 1, where nj gives the state of particle j.

And the energy for this given state is

E_ν = ε Σ_{j=1}^{N} n_j.

In order to calculate thermodynamic properties we start by applying the microcanonical ensemble; the degeneracy of the m-th energy level is

Ω(E, N) = N! / ((N − m)! m!) (2.27)

where m = E/ε. Then we can use the definitions discussed in the previous lectures and immediately write down the entropy, S/k_B = ln Ω(E, N), and the temperature

β = (∂ ln Ω/∂E)_N = ε^{−1} (∂ ln Ω/∂m)_N (2.28)

For this to have a meaning we need to use the continuum limit of factorials, or Stirling's approximation ln M! ≈ M ln M − M, which becomes exact in the limit of large M. Then we find for the inverse temperature:

βε = ln(N/m − 1) (2.29)

or

m/N = 1/(1 + exp(βε)) (2.30)

As a result the energy E = mε as a function of temperature is

E = Nε / (1 + exp(βε)) (2.31)

This is zero at T = 0 (only the ground state is occupied). So, what about the canonical ensemble?
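Before switching ensembles, the microcanonical results (2.27)–(2.29) can be cross-checked numerically: a finite difference of ln Ω (computed via lgamma, which avoids huge factorials) should reproduce βε = ln(N/m − 1). A sketch with arbitrary N and m:

```python
import math

def ln_omega(n, m):
    """ln of the degeneracy (2.27), Omega = N!/((N-m)! m!), via lgamma."""
    return math.lgamma(n + 1) - math.lgamma(n - m + 1) - math.lgamma(m + 1)

def beta_numeric(n, m, eps=1.0):
    """beta = (1/eps) d ln Omega/dm, by central difference in m."""
    return (ln_omega(n, m + 1) - ln_omega(n, m - 1)) / (2.0 * eps)

def beta_stirling(n, m, eps=1.0):
    """Equation (2.29): beta eps = ln(N/m - 1)."""
    return math.log(n / m - 1.0) / eps

N, m = 10**6, 2 * 10**5
print(beta_numeric(N, m), beta_stirling(N, m))  # agree for large N
```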


For that we use the link to thermodynamics as developed previously:

−βA = ln Q = ln Σ_ν exp(−β E_ν) (2.32)

Using the information about E_ν we get for the partition function

Q(N, β) = Σ_{n1=0,1} … Σ_{nN=0,1} exp[−βε Σ_{j=1}^{N} n_j] (2.33)

This exponential factors into an uncoupled product:

Q(N, β) = Π_{j=1}^{N} Σ_{nj=0,1} exp(−βε n_j) = (1 + exp(−βε))^N (2.34)

As a result we find −βA = N ln(1 + exp(−βε)), from which we obtain the internal energy

⟨E⟩ = (∂(βA)/∂β)_N = Nε / (1 + exp(βε)) (2.35)

which is in precise agreement with the result obtained from the microcanonical ensemble. If you are still there, we will now consider a more general approach to dealing with ensembles.

Generalised Ensembles

Let us now consider, in a general way, why changes of ensemble correspond thermodynamically to performing Legendre transforms of the entropy. Start with X denoting the mechanical extensive variables; we may write for the entropy S = k_B ln Ω(E, X), with the differential

dS/k_B = β dE + ξ dX (2.36)

Now imagine an equilibrated system in which E and X are allowed to fluctuate.

[Figure: a subsystem with energy E_ν and extensive variable X_ν in contact with a bath with energy E_B and X_B.]

This figure shows an open system in contact with a bath, exchanging energy and (if we take X = N) particles between the two. Now derive the probability for the microstates in the same manner as we did for the canonical distribution law, and you will find

P_ν = Ξ^{−1} exp(−β E_ν − ξ X_ν) (2.37)

and the partition function Ξ is given by

Ξ = Σ_ν exp(−β E_ν − ξ X_ν) (2.38)

In similar fashion we then find the thermodynamic variables ⟨E⟩ and ⟨X⟩ from the following averages:

⟨E⟩ = Σ_ν P_ν E_ν = −(∂ ln Ξ/∂β)_{ξ,Y} (2.39)

⟨X⟩ = Σ_ν P_ν X_ν = −(∂ ln Ξ/∂ξ)_{β,Y} (2.40)

where Y refers to all the extensive variables that are not fluctuating in the system.


Take the derivative of the general partition function: d ln Ξ = −⟨E⟩ dβ − ⟨X⟩ dξ. Consider the quantity

D = −k_B Σ_ν P_ν ln P_ν (2.41)

and make a few substitutions in order to obtain

D = −k_B Σ_ν P_ν (−β E_ν − ξ X_ν − ln Ξ)
  = k_B {ln Ξ + β ⟨E⟩ + ξ ⟨X⟩}. (2.42)

Therefore we can say that D/k_B is the Legendre transform that converts ln Ξ to a function of ⟨E⟩ and ⟨X⟩:

dD = k_B β d⟨E⟩ + k_B ξ d⟨X⟩ (2.43)

This implies that D is the entropy S. Thus in general we can use

S = −k_B Σ_ν P_ν ln P_ν (2.44)

which is the famous result for the entropy; it is called the Gibbs entropy formula.
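The Gibbs entropy formula can be verified directly for a canonical distribution, where S/k_B = ln Q + β⟨E⟩ must coincide with −Σ_ν P_ν ln P_ν. A sketch over a handful of made-up energy levels:

```python
import math

# A few arbitrary energy levels (illustrative values) at beta = 1.
LEVELS = [0.0, 0.5, 1.3, 2.0, 2.1]
BETA = 1.0

weights = [math.exp(-BETA * e) for e in LEVELS]
Q = sum(weights)
probs = [w / Q for w in weights]
mean_e = sum(p * e for p, e in zip(probs, LEVELS))

# Gibbs entropy formula (2.44), in units of k_B:
s_gibbs = -sum(p * math.log(p) for p in probs)
# Thermodynamic route: S/k_B = ln Q + beta <E>  (from A = E - TS, A = -kT ln Q)
s_thermo = math.log(Q) + BETA * mean_e
print(s_gibbs, s_thermo)  # identical up to rounding
```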


The most important example of these formulas is that of the grand canonical ensemble. This is the ensemble of all states for an open system of volume V; the energy and particle number are allowed to fluctuate from state to state. From the distribution law we find for the probability

P_ν = Ξ^{−1} exp(−β E_ν + βμ N_ν) (2.45)

and the Gibbs entropy formula yields

S/k_B = ln Ξ + β ⟨E⟩ − βμ ⟨N⟩ (2.46)

or, if we rearrange this, we find ln Ξ = βpV. The partition function is given by

Ξ = Σ_ν exp(−β E_ν + βμ N_ν) (2.47)

It depends also on the system volume, because the energy depends on the size of the system.


So far for statistical ensembles; we will return to them later in the course, but next up is Quantum Fluids.


III. Quantum Fluids

In this section we will discuss topics such as the condensation of a non-interacting Bose gas (Bose condensation) and superfluidity. To be able to treat these topics we first consider the simplest systems treated by statistical mechanics: non-interacting (ideal) systems. What are they? Suppose we are considering an open system, where the number of particles is allowed to fluctuate from state to state; then the probability of having precisely N particles is

P_N ∝ Σ_ν^{(N)} exp[−β(E_ν − μ N_ν)] (3.1)

where the superscript N indicates that the sum is only over those states ν for which N_ν = N. The likelihood of spontaneous fluctuations is governed by the energetics of these fluctuations compared to Boltzmann's thermal energy k_B T = β^{−1}.


Thus higher T allows for greater fluctuations; what happens as T → 0? Some people say that, due to the following complexity, statistical mechanics is often regarded as a difficult subject. Complexity: the systematic exploration of all possible fluctuations is a complicated task, due to the huge number of microscopic states that must be considered and the cumbersome detail needed to characterise these states. A practical method for exploring fluctuations is the factorisation approximation, which becomes exact when the system is composed of non-interacting degrees of freedom. How does it work? Consider that the energy breaks into two parts:

E_ν = E_n^{(1)} + E_m^{(2)} (3.2)

where the state ν depends on n and m, and these indices are independent of each other. Then we can factor the canonical partition function in the following way:

Q = Σ_ν exp(−β E_ν) = Σ_{n,m} exp(−β E_n^{(1)}) exp(−β E_m^{(2)}) (3.3)

This can be factored as

Q = [Σ_n exp(−β E_n^{(1)})] [Σ_m exp(−β E_m^{(2)})] (3.4)

which can be written as Q = Q^{(1)} Q^{(2)}. This introduces Q^{(1)} and Q^{(2)} as the Boltzmann-weighted sums associated with the energies E_n^{(1)} and E_m^{(2)}. Please notice that these energies are uncorrelated:

⟨E^{(1)} E^{(2)}⟩ = Q^{−1} Σ_{n,m} E_n^{(1)} E_m^{(2)} exp[−β(E_n^{(1)} + E_m^{(2)})]

This leads to ⟨E^{(1)} E^{(2)}⟩ = ⟨E^{(1)}⟩ ⟨E^{(2)}⟩ – now please show this. This approach can be generalised to the case of N uncorrelated degrees of freedom, and we find

Q = Q^{(1)} Q^{(2)} … Q^{(N)} (3.5)

If each of these degrees of freedom is of the same type, this reduces even further to

Q = (Q^{(1)})^N (3.6)

In some cases the factorisation approximation is applicable because the system is composed of uncorrelated particles, as in a classical ideal gas. Then the correct partition function is q^N / N!.
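The factorisation Q = q^N for distinguishable, non-interacting particles can be confirmed by brute-force enumeration — a sketch for two-level particles (parameters arbitrary; the 1/N! correction for indistinguishable particles is omitted here):

```python
import math
from itertools import product

BETA, EPS, N = 0.7, 1.0, 3  # illustrative values; two-level particles

# Single-particle Boltzmann sum q over n = 0, 1.
q = sum(math.exp(-BETA * EPS * n) for n in (0, 1))

# Brute force: sum e^{-beta E} over all joint states of N distinguishable particles.
Q_brute = sum(
    math.exp(-BETA * EPS * sum(state)) for state in product((0, 1), repeat=N)
)
print(Q_brute, q**N)  # equal: the sum factorises, Q = q^N
```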

Occupation numbers: The first step to analysing any model involves the classification of microstates.


The state of a quantum system can be specified by the wavefunction for that state ν, Ψ_ν(r1, r2, …, rN); here Ψ_ν is an eigensolution of the Schrödinger equation for an N-particle system. If the particles are non-interacting, the wavefunction can be expressed as a product of single-particle wavefunctions φ1(r), φ2(r), …, φj(r), … Each of these single-particle wavefunctions contains a number of particles, i.e. n1 particles with φ1, and so on. The numbers n1, n2, …, nj, … are called the occupation numbers of the first, second, …, j-th single-particle states. If the N particles are indistinguishable (as quantum particles are), then a state ν is completely specified by the set of occupation numbers (n1, n2, …, nj, …), since any more detail would distinguish between the nj particles in the j-th single-particle state. Example: consider three particles which can exist in one of two single-particle states.


Let us now define two types of particles.

Fermions: particles with half-integer spin obey an exclusion principle, nj = 0 or 1 only. The statistics associated with these particles is called Fermi-Dirac statistics. The wavefunction is antisymmetric: Ψ(…, r_i, …, r_j, …) = −Ψ(…, r_j, …, r_i, …).

Bosons: particles with integer spin, nj = 0, 1, 2, 3, … The statistics associated is called Bose-Einstein statistics. The wavefunction is symmetric: Ψ(…, r_i, …, r_j, …) = Ψ(…, r_j, …, r_i, …).

Let us illustrate how to use occupation numbers with some examples.

Photon Gas

Consider a photon gas: an electromagnetic field in thermal equilibrium with its container. From the quantum theory of the electromagnetic field, it is found that the Hamiltonian can be written


as a sum of terms, each in the form of a harmonic oscillator of some frequency. Energy of an oscillator: nħω, with n = 0, 1, 2, … This leads to the concept of photons with energy ħω. A state of the free electromagnetic field is specified by the number n for each of the 'oscillators', and n can be thought of as the number of photons in a state with 'particle' energy ħω. Photons obey Bose-Einstein statistics: n = 0, 1, 2, … The canonical partition function is given by

e^{−βA} = Q = Σ_ν e^{−βE_ν} = Σ_{n1, n2, … = 0}^{∞} e^{−β(n1 ε1 + n2 ε2 + …)} (3.7)

Here we have used ħω_j = ε_j. Since the exponential factors into independent portions we can employ factorisation, and obtain

Q(photon gas) = Π_j [1/(1 − e^{−βε_j})] (3.8)


Using thermodynamics we can obtain the free energy from this equation, since Q = e^{−βA}. Of particular interest is the average value of the occupation number of the j-th state, ⟨n_j⟩. In the canonical ensemble that is

⟨n_j⟩ = Σ_ν n_j^{(ν)} e^{−βE_ν} / Q = −∂ ln Q/∂(βε_j) (3.9)

Using the result for Q, we find the following:

⟨n_j⟩ = 1/(e^{βε_j} − 1) (3.10)

which is called the Planck distribution.

Bosons

Consider a system of N particles that obey Bose-Einstein statistics and do not interact. The canonical partition function is then

e^{−βA} = Q = Σ_ν e^{−βE_ν} (3.11)


The particles cannot be created or destroyed, thus we must constrain the sum to those states in which the total number of particles is fixed at N

Q = Σ_{n1, n2, …, nj, …} exp(−β Σ_j n_j ε_j) (3.12)

ε_j denotes the energy of the j-th single-particle state. This sum in equation (3.12) has the restriction

Σ_j n_j = N. This restriction makes it awkward to carry out the sum. However, by switching to the grand canonical ensemble the restricted sum does not appear:

e^{βpV} = Ξ = Σ_ν e^{−β(E_ν − μ N_ν)} (3.13)

In terms of occupation numbers we find

Ξ = Σ_{n1, n2, …, nj, …} exp[−β Σ_j n_j (ε_j − μ)] (3.14)

The exponentials factorise and we obtain

βpV = ln Ξ = −Σ_j ln[1 − exp(−β(ε_j − μ))] (3.15)

The average occupation number is

⟨n_j⟩ = Ξ^{−1} Σ_ν n_j^{(ν)} exp(−β(E_ν − μ N_ν)) (3.16)

We can now show that the average occupation number is given by the Bose distribution:

⟨n_j⟩ = 1/(exp(β(ε_j − μ)) − 1) (3.17)

Notice the singularity when μ = ε_j!!!!! ⟨n_j⟩ diverges: a macroscopic number of particles piles into the same single-particle state. This is called Bose condensation, and the condensation is thought to be a mechanism for superfluidity.

Fermions


Consider an ideal gas of real Fermi particles. We saw that it is easier to work in the grand canonical ensemble, and we find for the partition function

Ξ = Σ_{n1, n2, …, nj, …} exp[−β Σ_j n_j (ε_j − μ)] (3.18)

For fermions, n_j = 0 or 1 only. This relation can be factorised (because the particles are non-interacting), and we obtain

βpV = ln Ξ = Σ_j ln[1 + exp(−β(ε_j − μ))] (3.19)

Once again we can look at the occupation number and we obtain:

⟨n_j⟩ = −∂ ln Ξ/∂(βε_j) = 1/(exp(β(ε_j − μ)) + 1) (3.20)

This is the Fermi distribution. In summary we have

⟨n_j⟩ = 1/(exp(β(ε_j − μ)) ± 1)

where the '+' refers to Fermi-Dirac and the '−' to Bose-Einstein statistics.

Classical Ideal Gases, the Classical Limit

Let us now consider what happens to the statistical behaviour of ideal quantum mechanical gases at high temperatures (this is the classical limit). The number of particles is N = Σ_j n_j, and the

average number of particles is

⟨N⟩ = Σ_j ⟨n_j⟩ = Σ_j 1/(exp(β(ε_j − μ)) ± 1) (3.21)

where the '+' sign is for Fermi-Dirac statistics and the '−' for Bose-Einstein statistics. The thermodynamic density is N/V. When the temperature is high and the density is low, the number of accessible particle states is larger than the number of particles.


Thus N = Σ_j ⟨n_j⟩ implies that each ⟨n_j⟩ must be small, i.e. ⟨n_j⟩ << 1. Combining this with the Fermi-Dirac and Bose-Einstein distributions implies e^{β(ε_j − μ)} >> 1, and we find in the classical limit

⟨n_j⟩ = e^{−β(ε_j − μ)} (3.22)

In addition to this, the chemical potential µ is determined by the condition

N = Σ_j ⟨n_j⟩ = exp(βµ) Σ_j exp(−βε_j) (3.23)
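Equation (3.23) can be inverted directly for µ in this limit: exp(βµ) = N / Σ_j exp(−βε_j). A sketch using a hypothetical four-level spectrum (the levels and numbers are my assumptions, not from the notes):

```python
import math

def chemical_potential(N, energies, beta):
    """Invert Eq. (3.23) in the classical limit: exp(beta*mu) = N / sum_j exp(-beta*eps_j)."""
    q = sum(math.exp(-beta * e) for e in energies)  # single-particle sum over states
    return math.log(N / q) / beta

energies = [0.0, 1.0, 2.0, 3.0]   # hypothetical single-particle levels
beta, N = 1.0, 100
mu = chemical_potential(N, energies, beta)

# consistency check: the occupations <n_j> = exp(-beta*(eps_j - mu)) sum back to N
total = sum(math.exp(-beta * (e - mu)) for e in energies)
print(mu, total)
```

The check recovers N exactly, because in the classical limit (3.22) and (3.23) are consistent by construction.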

Thus, by combining these results we find for the average number of particles in the jth single-particle state

⟨n_j⟩ = N exp(−βε_j) / Σ_j exp(−βε_j),

which is the familiar form of the classical Boltzmann factor. Now, what about the canonical partition function Q in the classical limit? Q is related to the Helmholtz free energy through −βA = ln Q,


while the grand canonical partition function Ξ is related to βpV through βpV = ln Ξ. Recall that A = E − TS, and the Gibbs free energy is G = µN = E + pV − TS. From this we obtain

ln Q(N,V,T) = −βµN + ln Ξ (3.24)

Inserting the grand partition function for ideal fermions or bosons yields

ln Q(N,V,T) = −βµN ± Σ_j ln[1 ± exp(β(µ − ε_j))]

where the upper and lower signs indicate Fermi and Bose particles, respectively. Using some head gymnastics (including the expansion of the logarithm, ln(1 + x) ≈ x for small x), we arrive at

ln Q(N,V,T) = −βµN + Σ_j exp(β(µ − ε_j)).

Using equation (3.23) we get ln Q(N,V,T) = −βµN + N, and substituting the result for βµ we find


ln Q = −N ln N + N + N ln Σ_j exp(−βε_j) (3.25)

Next we can use Stirling's approximation, ln N! ≈ N ln N − N (which is exact in the thermodynamic limit), and obtain

Q = (1/N!) [Σ_j exp(−βε_j)]^N (3.26)

in the classical limit. The factor N! reflects the fact that the particles are indistinguishable.

Thermodynamics of an ideal gas of structureless classical particles

Mono-atomic ideal gas



IV. Linear Response Theory

This is the main and most important part of the course! So far, we have used statistical methods to describe reversible, time-independent equilibrium properties. Now we are going to move to a different concept and consider irreversible, time-dependent non-equilibrium properties. We will discuss systems that are close to equilibrium; in this regime the non-equilibrium behaviour of macroscopic systems can be described by linear response theory. Why linear? It follows from the fact that we are considering systems close to equilibrium. Or, to give a proper definition: deviations from equilibrium are linearly related to the perturbations that remove the system from equilibrium.


Why do we want to use linear response theory? Certain processes are too slow to be calculated directly by computational techniques.

k = A exp(−E_act / k_B T) (4.1)

This is the temperature-dependent rate (reaction) constant as formulated by van 't Hoff and Arrhenius in the 1880s. Take a simple reaction

A → B

for which we know that the rate constant is 1 s⁻¹; this implies that one molecule of reactant A is turned into one molecule of product B every second. So, big deal you might think. Well, not really, because if you would like to calculate this using a molecular dynamics simulation you have a lot of number crunching to do.

Example: isomerisation of calix[4]arene, k = 200 s⁻¹


So, once every 5×10⁻³ seconds a calix[4]arene goes from one isomer to another. Again, a huge amount of computational time is necessary to simulate this using equilibrium molecular dynamics. The same holds for permeation of gases through soap or water films. Before we can start we have to be clear about the meaning of equilibrium and of disturbances resulting in non-equilibrium systems. Consider a system consisting of two distinct parts: detergent solution and vapour. Equilibrium: in time the system under consideration will not change. Apply a finite disturbance (shaking the bottle): the result is a disturbed system, hence a non-equilibrium system is created, which then relaxes back to equilibrium.
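The mismatch between reaction times and molecular dynamics time steps can be made concrete with a back-of-the-envelope estimate. The 1 fs time step below is a typical MD value, my assumption rather than a number from the notes:

```python
# Rough cost of observing a single isomerisation event by brute-force MD.
rate = 200.0             # s^-1, the calix[4]arene isomerisation rate
mean_wait = 1.0 / rate   # average time between events: 5e-3 s
timestep = 1.0e-15       # assumed 1 fs integration time step
steps = mean_wait / timestep
print(f"about {steps:.0e} time steps per event")
```

Roughly 5×10¹² force evaluations per event, which is why an indirect route via equilibrium fluctuations is so attractive.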


Machinery that we need to understand:

• Time correlation functions, Green–Kubo relations, diffusion coefficients.

• Onsager's regression hypothesis: the relaxation of macroscopic non-equilibrium disturbances is governed by the same laws as the regression of spontaneous microscopic fluctuations in an equilibrium system (Lars Onsager, 1930; Nobel Prize 1968).

• Fluctuation-dissipation theorem: Onsager's regression hypothesis in terms of statistical mechanics. It connects the relaxation from a prepared non-equilibrium state with the spontaneous microscopic dynamics in an equilibrium system.

• Response functions: how to deal with a time-dependent perturbation.

• Langevin equation: Brownian motion, random forces and (velocity-dependent) friction.


Read the following carefully, it is very important!

You should not be afraid to ask questions if you think something is incorrect, or if you do not fully understand what is going on. Asking a question is a sign of bravery, not of stupidity!

--------------------------------------------------------------

Vector multiplication

We have two vectors,

A = (a_x, a_y, a_z) = (1, 0, 0)
B = (b_x, b_y, b_z) = (1, 1, 1)

The dot (or scalar) product:

A · B = a_x b_x + a_y b_y + a_z b_z = 1 (a scalar)

The cross (or vector) product:

A × B = (a_y b_z − a_z b_y, a_z b_x − a_x b_z, a_x b_y − a_y b_x) = (0, −1, 1) (a vector)

Making a drawing helps.
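So does checking the two products in code; a minimal sketch:

```python
def dot(a, b):
    """Scalar product a . b."""
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    """Vector product a x b, component by component."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

A = (1, 0, 0)
B = (1, 1, 1)
print(dot(A, B))    # 1
print(cross(A, B))  # (0, -1, 1)
```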


In order to give a statistical meaning to Onsager's regression hypothesis we need to know about correlations of spontaneous fluctuations. This can be done using the mathematical concept of correlation functions.

Correlation functions

In any equilibrium system there are spontaneous (natural) microscopic fluctuations. In fact, this is what makes the world interesting. If everything always remained at complete equilibrium without spontaneous changes, the world would be a boring place to be in, and less interesting from a scientific point of view. These fluctuations drive systems towards ever-increasing chaos and the eventual demise of any structure. This includes our Earth! However, the time scale for this to happen is very long, and the destruction of the world by natural (spontaneous) fluctuations is not something to worry about; there are more important issues to worry about as a normal human being. Think of a glass of water at room temperature (T_RC). Due to spontaneous microscopic fluctuations


molecules 5–100 might have a slightly higher temperature, T = T_RC + t, whereas molecules 600–695 might have a slightly lower temperature, T = T_RC − t. The system temperature is (of course) T = T_RC. We are talking about classical systems, so first a little refresher about molecular dynamics.

The instantaneous deviation, or fluctuation, of the dynamical property A(t) from its time-independent equilibrium average ⟨A⟩ is denoted by δA(t):

δA(t) = A(t) − ⟨A⟩

Microscopic laws govern its time evolution, and for classical systems we write

δA(t) = δA(t; r^N, p^N) = δA[r^N(t), p^N(t)]

The equilibrium average of spontaneous fluctuations is uninteresting: ⟨δA⟩ = 0.

Figure

The correlation between δA(t) and an instantaneous or spontaneous fluctuation at time zero is


C(t) = ⟨δA(0)δA(t)⟩ = ⟨A(0)A(t)⟩ − ⟨A⟩²

(notice that this is a multiplication between scalars). The averaging is to be performed over the initial conditions:

C(t) = ∫ dr^N dp^N f(r^N, p^N) δA(0; r^N, p^N) δA(t; r^N, p^N)

where f(r^N, p^N) is the equilibrium phase space distribution function. In an equilibrium system, the correlations at different times depend only upon the separation of the times and not on the absolute value of the time. Remember independent events: there is no memory of the initial conditions after some time. Thus

C(t) = ⟨δA(t′)δA(t″)⟩ with t = t″ − t′

Figure

At small times, C(0) = ⟨δA(0)δA(0)⟩ = ⟨(δA)²⟩


At large times δA(t) will become uncorrelated to δA(0), thus

C(t) → ⟨δA(0)⟩⟨δA(t)⟩ as t → ∞

and since ⟨δA⟩ = 0,

C(t) → 0 as t → ∞

Using the ergodic principle the average can be expressed in a different way. Imagine the behaviour of A as a function of time during a long trajectory (see figure). Then there will be an infinite number of pairs of times t′ and t″, with t = t″ − t′, for which we can calculate the correlation between the values of δA. Since there are an infinite number of pairs, we can average over them. From the ergodic principle it follows that this average will be the same as averaging over the ensemble of initial conditions for short trajectories (each of duration t). We may write

⟨δA(0)δA(t)⟩ = lim_{τ→∞} (1/τ) ∫₀^τ dT δA(T + t′) δA(T + t″)


here τ represents the time period of the long trajectory. The limit τ → ∞ emphasises that the system must be observed for long enough that all of phase space can be properly sampled by the single trajectory. Examples of what correlation functions look like. The velocity autocorrelation function for a simple atomic fluid:

C(t) = ⟨v_x(0)v_x(t)⟩ = (1/3)⟨v(0)·v(t)⟩

where v_x(t) is the x-component of the velocity of a tagged particle in the fluid, and v(0) and v(t) are vectors. The orientational correlation function:

C(t) = ⟨u_z(0)u_z(t)⟩ = (1/3)⟨u(0)·u(t)⟩

where u(t) is the unit vector along the principal axis of a tagged particle.

Figures
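The time-average definition of C(t) translates directly into code. The sketch below estimates an autocorrelation function from a single long trajectory; the trajectory itself is synthetic, exponentially correlated noise chosen purely for illustration:

```python
import random

def autocorrelation(series, max_lag):
    """Estimate C(t) = <dA(0) dA(t)> as a time average over one long trajectory."""
    mean = sum(series) / len(series)
    dA = [a - mean for a in series]
    C = []
    for lag in range(max_lag):
        pairs = [dA[i] * dA[i + lag] for i in range(len(dA) - lag)]
        C.append(sum(pairs) / len(pairs))
    return C

# synthetic fluctuating observable with a correlation 'memory' of about 10 steps
random.seed(0)
a, series = 0.0, []
for _ in range(20000):
    a = 0.9 * a + random.gauss(0.0, 1.0)
    series.append(a)

C = autocorrelation(series, 50)
print(C[0], C[10] / C[0])  # normalised C(t)/C(0) decays roughly like 0.9**t
```

Note the structure: one long trajectory, every time origin used as a "time zero", exactly as the ergodic argument above prescribes.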


Correlation functions can be used to calculate transport properties, through Green–Kubo formulas. We start from Einstein's formula

⟨ΔR²(t)⟩ = 6Dt

This eventually leads to

D = (1/3) ∫₀^∞ dt ⟨v(0)·v(t)⟩

where v(0) and v(t) are vectors. In this equation a transport coefficient is related to an integral of an autocorrelation function.
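Numerically, the Green–Kubo integral is just a quadrature over the velocity autocorrelation function. A sketch using a model exponential VACF; the functional form and parameters are my assumptions, chosen so the exact answer is known:

```python
import math

def green_kubo_D(vacf, dt):
    """D = (1/3) * integral of <v(0).v(t)> dt, evaluated with the trapezoidal rule."""
    integral = dt * (sum(vacf) - 0.5 * (vacf[0] + vacf[-1]))
    return integral / 3.0

# Model VACF: <v(0).v(t)> = 3*(kT/m)*exp(-t/tau), for which D = (kT/m)*tau exactly.
kT_over_m, tau, dt = 1.0, 0.5, 1.0e-3
vacf = [3.0 * kT_over_m * math.exp(-i * dt / tau) for i in range(20000)]
D = green_kubo_D(vacf, dt)
print(D)  # close to the exact value kT_over_m * tau = 0.5
```

In a real simulation the VACF would come from an estimator like the one sketched earlier, and the upper integration limit must extend well past the correlation time.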


Why do we want to use Onsager's regression hypothesis again? Onsager's principle states that in the linear regime the relaxation obeys

ΔĀ(t) / ΔĀ(0) = C(t) / C(0)

where ΔĀ(t) = Ā(t) − ⟨A⟩ = ⟨δA(t)⟩ (macroscopic) and C(t) = ⟨δA(0)δA(t)⟩ (microscopic). Be aware that Onsager's principle is also known as the regression hypothesis and the fluctuation-dissipation theorem. Let us regard Onsager's principle as a postulate and look at an application of the principle. Consider a simple chemical reaction:

A ⇌ B


where A and B are the species in our system. Let c_A(t) and c_B(t) be the observed concentrations of species A and B. Or we could look at the isomerisation of n-butane, as was done by David Chandler (J. Chem. Phys. 1979).

Isomerisation of n-butane

Or

Water ⇌ Ice


Returning to the reaction kinetics: reasonable phenomenological rate equations for A ⇌ B are

dc_A/dt = −k_BA c_A(t) + k_AB c_B(t)

and

dc_B/dt = k_BA c_A(t) − k_AB c_B(t)

where k_BA and k_AB are the forward and backward rate constants, respectively. Note that we take c_A(t) + c_B(t) = constant in this model. Also note that the equilibrium concentrations ⟨c_A⟩ and ⟨c_B⟩ must obey the detailed balance condition

0 = −k_BA⟨c_A⟩ + k_AB⟨c_B⟩

from which it immediately follows that

K_eq ≡ ⟨c_B⟩ / ⟨c_A⟩ = k_BA / k_AB

The solutions to the rate equations yield


Δc̄_A(t) ≡ c̄_A(t) − ⟨c_A⟩ = Δc̄_A(0) exp(−t/τ_rxn)

where τ_rxn⁻¹ = k_AB + k_BA. Now let us turn the regression hypothesis into the language of statistical mechanics, so that we will be able to compute using the microscopic laws and ensemble averages. Let us assume that n_A is the dynamical variable for which

⟨n_A(t)⟩ ∝ c̄_A(t)

Then, according to the regression hypothesis,

Δc̄_A(t) / Δc̄_A(0) = ⟨δn_A(0)δn_A(t)⟩ / ⟨(δn_A)²⟩

As a result we may write

exp(−t/τ_rxn) = ⟨δn_A(0)δn_A(t)⟩ / ⟨(δn_A)²⟩

This is a good result: the left-hand side contains the macroscopic relaxation time τ_rxn, while the right-hand side is completely defined in terms of microscopic variables and ensemble averages.
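The macroscopic side of this identity is just the solution of the phenomenological rate equations. A sketch, with arbitrary illustrative rate constants:

```python
import math

def c_A(t, cA0, k_AB, k_BA, c_total):
    """c_A(t) for the two-state rate equations: exponential relaxation to equilibrium."""
    cA_eq = c_total * k_AB / (k_AB + k_BA)   # detailed balance: <c_B>/<c_A> = k_BA/k_AB
    tau_rxn = 1.0 / (k_AB + k_BA)
    return cA_eq + (cA0 - cA_eq) * math.exp(-t / tau_rxn)

k_BA, k_AB = 2.0, 1.0   # forward and backward rate constants (arbitrary)
c_total = 1.0
print(c_A(0.0, 1.0, k_AB, k_BA, c_total))   # 1.0, the initial condition
print(c_A(10.0, 1.0, k_AB, k_BA, c_total))  # ~1/3, the equilibrium concentration
```

The regression hypothesis says that the normalised decay of ⟨δn_A(0)δn_A(t)⟩ measured in an equilibrium simulation follows this same exponential.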


Thus, we have used the regression hypothesis to obtain a method by which a rate constant can be calculated from microscopic laws. The tool we can use for this is molecular dynamics simulation. Before we can start calculating there are two difficulties:

1. How do we choose the dynamical variable n_A?

2. Obtaining ⟨δn_A(0)δn_A(t)⟩ is a formidable task.

n_A must be chosen in such a manner that the reaction can be described by a single coordinate q, called the reaction coordinate. The figure below shows that we can divide the potential energy 'surface' into regions which indicate species A and B. Please note that this potential energy 'surface' is different from the potential energy surface that we discussed earlier in the course.


Earlier in the course, we constrained our system to move on the potential energy surface due to the external constraints.

Figure. Schematic picture of the potential energy V(q) as a function of the reaction coordinate q, with a barrier at q* separating the A and B regions.

The species are then identified according to:

q < q* corresponds to species A
q = q* provides the dividing surface between the two species
q > q* corresponds to species B


So, we are in effect going to study barrier crossing, from A to B via a barrier, such as crossing over the barrier in calix[4]arene (W.K. den Otter, PhD thesis 1998). For example, there is a straightforward application for a gas molecule passing through a water film (M. in het Panhuis, PhD thesis 1998).

--------------------------------------------------------------

In principle this can also be applied to more elaborate reactions, such as

A + B → C

O2 + C → CO2

The rate equations then acquire additional terms.


So, in order to be able to calculate these for a classical system we must obtain the classical analogue.

Equipartition, distribution functions, the grand canonical ensemble: page 70 of Chandler. Not sure whether to include this.

The canonical variables are assumed to follow classical Hamiltonian dynamics:

ṗ_j = −∂H/∂q_j (4.2)

q̇_j = ∂H/∂p_j (4.3)

for j = 1, 2, …, 3N.
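These equations of motion are what a molecular dynamics code integrates. A minimal sketch for a single harmonic degree of freedom, H = p²/2 + q²/2 (unit mass and spring constant, my choice), using the standard velocity Verlet scheme:

```python
def velocity_verlet(q, p, force, dt, n_steps):
    """Integrate dq/dt = p (unit mass) and dp/dt = force(q) with velocity Verlet."""
    f = force(q)
    for _ in range(n_steps):
        p_half = p + 0.5 * dt * f
        q = q + dt * p_half
        f = force(q)
        p = p_half + 0.5 * dt * f
    return q, p

# harmonic oscillator: H = p**2/2 + q**2/2, so force(q) = -q
q, p = velocity_verlet(q=1.0, p=0.0, force=lambda x: -x, dt=0.01, n_steps=1000)
energy = 0.5 * p**2 + 0.5 * q**2
print(energy)  # stays close to the initial energy 0.5
```

The near-conservation of H along the trajectory is the hallmark of such symplectic integrators, and is what makes them the standard choice in MD.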


Systems at Fixed Temperature: Canonical Ensemble

This is more like NVT; see also Chandler for this. NVE and NVT ensembles: NVE is the microcanonical (the 'normal', standard) ensemble; NVE = NVT under some conditions.
