
Notes for Physics 211, University of California, Berkeley. Week I: We covered mostly Reif 2.1-2.4 plus a few things (below). The random walk and

Gaussian examples are discussed in detail in Reif chapter 1.

1 Lecture I

Examples of successes of statistical physics: the CMB (noninteracting quantum bosons) at 2.725 K, discovered in 1964 by Penzias and Wilson; Ising universality in 3D (classical interacting); superconductivity and the AC Josephson effect (quantum interacting), giving the measurement h/(2e) = 2.067833636 × 10^{−15} Wb.

Basic definitions: a “microstate” is a single microscopic state of the system, i.e., a “pure state”

in quantum mechanics (this notion requires either quantum mechanics or careful treatment of phase space volume in classical mechanics). A “state of knowledge” represents a system in terms of the probabilities for it to be in different microstates.

Basic postulate of equilibrium statistical physics: a priori probabilities are equal for any two microstates with the same values of the conserved quantities (usually energy is the only conserved quantity we care about).

We will justify this postulate classically a bit later. It is easier to understand in quantum mechanics: given a degenerate set of energy eigenstates and no further information about them, we might as well choose the density matrix proportional to the identity matrix, which is manifestly basis-independent. In classical physics, it is not as obvious how to count microstates for a continuum system, but quantum mechanics resolves this problem: quantum mechanics in finite volume gives us a discrete spectrum of states.

Our description of a general statistical mechanical system will be as a quantum-mechanical density matrix or “mixed state”. A density matrix expresses the state of a system as a probabilistic sum over one or more pure states (i.e., “wavefunctions” or solutions of the Schrödinger equation),

ρ = Σ_i p_i |ψ_i〉〈ψ_i|.    (1)

The above postulate is a statement about what density matrices describe equilibria. For examples of density matrices, see a QM textbook for more review.
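As one concrete example (a sketch in Python, not from the notes; the states and weights are arbitrary choices), mixing the qubit pure states “up along z” and “up along x” with equal weights gives a valid density matrix: unit trace, Hermitian, nonnegative eigenvalues.

```python
import numpy as np

# Two qubit pure states: spin up along z, and spin up along x.
up_z = np.array([1.0, 0.0], dtype=complex)
up_x = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)

# rho = sum_i p_i |psi_i><psi_i| with equal weights p = (1/2, 1/2), as in Eq. (1).
probs = [0.5, 0.5]
states = [up_z, up_x]
rho = sum(p * np.outer(psi, psi.conj()) for p, psi in zip(probs, states))

print(np.trace(rho).real)               # 1.0: probabilities sum to one
print(np.allclose(rho, rho.conj().T))   # True: rho is Hermitian
print(np.linalg.eigvalsh(rho))          # eigenvalues lie in [0, 1]
```

Note that even though both ingredients are pure states, the mixture is not: its eigenvalues are strictly between 0 and 1.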

2 Lecture II

Let’s look at a quick but important example to get an idea of what a microstate means. Consider N independent “coin flips” or distinguishable spin-1/2 degrees of freedom. Each can be up or down along some axis with probability 1/2. A microstate is specified by stating whether each spin is up or down. We are interested in the distribution of the total magnetization M = N↑ − N↓.

Another way to visualize this problem is as a random walk on the real line. A step right corresponds to an up spin, a step left corresponds to a down spin, and we can view a state of spins as a “history” of a random walk. (For this, our treatment closely followed that of Reif.) A quick calculation shows that at each step, the variance of the position increases by 1, so that σ^2 = 〈M^2〉 − 〈M〉^2 = N. The probability distribution of final M is a Gaussian centered on M = 0 with variance N, as we now show.
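A quick simulation (a sketch, not part of the notes; N and the number of trials are arbitrary) confirms that the variance of M is N:

```python
import random

random.seed(0)
N, trials = 400, 5000

# M = N_up - N_down is the endpoint of an N-step random walk with +/-1 steps.
samples = [sum(random.choice((-1, 1)) for _ in range(N)) for _ in range(trials)]

mean = sum(samples) / trials
var = sum((m - mean) ** 2 for m in samples) / trials
print(mean, var)  # mean near 0, variance near N = 400
```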


To compute this, note that the number of configurations is equal to the number of ways to choose which spins are up,

C(N, N↑) = N! / (N↑! (N − N↑)!) = N! / (N↑! N↓!).

For large N and for N↑ near N/2, where this function is maximized, this is quite a large number. For a large number k, we can use Stirling’s approximation. Most of the time you can get away with knowing (here log = natural logarithm)

log k! ≈ k log k − k, (2)

but here it is useful to have the more precise version:

lim_{k→∞} k! / [√(2πk) (k/e)^k] = 1.    (3)
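Both forms of Stirling’s approximation are easy to check numerically (a small sketch, not from the notes): the crude form (2) has an error that grows like (1/2) log(2πk), while the precise form (3) has an error that vanishes as k → ∞.

```python
import math

def log_factorial(k):
    return math.lgamma(k + 1)        # exact log k!

def stirling_crude(k):               # Eq. (2): k log k - k
    return k * math.log(k) - k

def stirling_precise(k):             # log of sqrt(2 pi k) (k/e)^k from Eq. (3)
    return 0.5 * math.log(2 * math.pi * k) + k * (math.log(k) - 1)

for k in (10, 100, 1000):
    print(k, log_factorial(k) - stirling_crude(k),
             log_factorial(k) - stirling_precise(k))
```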

We are interested in knowing how the function behaves near its maximum. The total number of microstates is 2^N. The fraction with magnetization M = 2N↑ − N is

2^{−N} C(N, N↑) ≈ 2^{−N} √(2πN) (N/e)^N / [√(2πN↑) (N↑/e)^{N↑} √(2πN↓) (N↓/e)^{N↓}].    (4)

All the powers of e cancel out. The algebraic parts give a prefactor 1/√(πN/2), and we still have the 2^{−N} left. If we define x = N↑/N, which ranges from 0 to 1, then the logarithm of the remainder is

f(x) = N log N − Nx log(Nx) − N(1 − x) log(N(1 − x)) = −N [x log x + (1 − x) log(1 − x)].    (5)

While the function in brackets may look a little unfamiliar, we will give an interpretation of it in a moment. For now, we just need to expand it near x = 1/2, which is the maximum of the probability distribution. The result is

f(x) ≈ N [log 2 − 2(x − 1/2)^2] = N log 2 − 2N(x − 1/2)^2,    (6)

where the coefficient of the quadratic term comes from the second derivative of x log x + (1 − x) log(1 − x), which equals 4 at x = 1/2.

On re-exponentiating, the e^{N log 2} = 2^N just cancels the 2^{−N} we had before. So what is left is

(1/√(πN/2)) e^{−2N(x − 1/2)^2}.    (7)

Converting back to the original quantity N↑, this is a probability distribution

P(N↑) = e^{−2(N↑ − N/2)^2/N} / √(πN/2).    (8)

This is a Gaussian with variance σ^2 = N/4 for N↑. The magnetization M = 2N↑ − N thus has variance N.
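The Gaussian formula (8) can be checked against the exact binomial count (a sketch, not from the notes; the choice N = 100 is arbitrary):

```python
import math

N = 100  # number of spins (arbitrary choice for this check)

def exact(n_up):
    # Exact probability: 2^-N * (N choose n_up)
    return math.comb(N, n_up) / 2 ** N

def gaussian(n_up):
    # Eq. (8): Gaussian with mean N/2 and variance N/4
    return math.exp(-2 * (n_up - N / 2) ** 2 / N) / math.sqrt(math.pi * N / 2)

for n_up in (50, 55, 60):
    print(n_up, exact(n_up), gaussian(n_up))
```

Already at N = 100 the two agree to three decimal places near the peak.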

We can make these numbers more physical by introducing a magnetic field. Let the energy be hM for some magnetic field h. Then we have just calculated the number of states available at the possible values of energy E = hM (note that M only takes on even integer values for total N even, and odd for total N odd):

Ω(E) = 2^N P(N↑) = 2^N P((M + N)/2) = 2^N P(E/(2h) + N/2).    (9)


Explicitly, this is

Ω(E) = [2^N / √(πN/2)] e^{−E^2/(2h^2 N)}.    (10)

The entropy is defined as

S(E) = kB log Ω(E).    (11)

For historical reasons we define the entropy to include a dimensionful factor kB = 1.38 × 10^{−23} J/K. A more logical definition would set kB = 1.
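For the spin example, combining Eqs. (10) and (11) gives the entropy explicitly (a small step not spelled out above):

S(E) = kB log Ω(E) = kB [N log 2 − E^2/(2h^2 N) − (1/2) log(πN/2)].

The leading term N kB log 2 is extensive, while the last term is only logarithmic in N and is usually negligible.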

The microcanonical ensemble is obtained from applying our equilibrium postulate to a system with a well-defined energy E. Then the probability of a state to appear is 1/Ω(E) if the state has energy E, and zero otherwise.

We frequently assume that there are so many particles that the energy level spectrum is effectively continuous compared to our energy resolution. It is then natural to ask about the number of states in an interval δE that is small compared to the full range between the minimum and maximum possible energies.

To close this lecture, let us give an entropic interpretation of the function f(x) above. We obtained this function f(x) = −N [x log x + (1 − x) log(1 − x)] in the process of computing the logarithm of the number of states with Nx up spins and N(1 − x) down spins. The function is maximized when x = 1/2 and vanishes at x = 0 or x = 1.

Suppose that instead I had several different spin states with numbers n_1, n_2, . . . , n_s of each. Since these must sum to N and be nonnegative, I can replace them by probabilities p_i such that n_i = p_i N. Then the log of the number of states is approximately

log [N! / ((p_1 N)! (p_2 N)! · · · (p_s N)!)] ≈ N log N − p_1 N log(p_1 N) − · · · − p_s N log(p_s N) = −N Σ_i p_i log p_i.    (12)

Here we are ignoring the algebraic part in Stirling’s formula because for this calculation we don’t care about getting the normalization right.
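Equation (12) is easy to verify numerically (a sketch, not from the notes; the values of N and p_i are arbitrary): the exact log-multinomial and −N Σ p_i log p_i agree up to the ignored algebraic prefactor, which is subextensive.

```python
import math

N = 3000
p = [0.5, 0.3, 0.2]                   # probabilities p_i (arbitrary choice)
n = [int(pi * N) for pi in p]         # occupation numbers n_i = p_i * N

# Exact log of the multinomial coefficient N! / (n_1! ... n_s!)
log_states = math.lgamma(N + 1) - sum(math.lgamma(ni + 1) for ni in n)

# Eq. (12): -N sum_i p_i log p_i
shannon = -N * sum(pi * math.log(pi) for pi in p)

print(log_states, shannon)  # agree up to the ignored algebraic prefactor
```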

This motivates the fundamental definition of the entropy as

S(ρ) = −Tr ρ log ρ.    (13)

In a basis that makes ρ diagonal, this gives exactly the formula above, and you can check that it is basis-independent by considering a transformation ρ → U^{−1} ρ U and a power-series expansion of the log. Physicists usually use natural log here, where computer scientists use log base 2 so that the entropy is in bits.

(What does it mean to take the logarithm of a density matrix? Well, one can define it by a power series, as is done for the exponential of the matrix. If you’re worried about convergence, note that for density matrices there is a basis where the density matrix is diagonal and nonnegative and that the entropy is basis-independent.)
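The diagonalization strategy suggests a direct numerical check of basis independence (a sketch, not from the notes; numpy and the specific probabilities are assumptions): compute S(ρ) from the eigenvalues, then rotate ρ by a random unitary and confirm the entropy is unchanged.

```python
import numpy as np

def von_neumann_entropy(rho):
    # Diagonalize, then S = -sum lambda log lambda over nonzero eigenvalues.
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return float(-np.sum(lam * np.log(lam)))

# A density matrix that is already diagonal, probabilities (1/2, 1/4, 1/4).
rho = np.diag([0.5, 0.25, 0.25])

# Rotate it by a random unitary (QR of a random complex matrix gives one).
rng = np.random.default_rng(0)
q, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
rho_rot = q.conj().T @ rho @ q

print(von_neumann_entropy(rho))      # (3/2) log 2, about 1.0397
print(von_neumann_entropy(rho_rot))  # same value: entropy is basis-independent
```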
