StM Course Notes

From PhysWiki

Here is a conglomerate of notes gathered for the graduate StM class 215A. For best results, work through this list with pencil and paper.




Potential for Canonical Ensemble:
\[ F = U - TS = -kT\ln(Z) \]
Partition function distinctions:

One can distinguish between distinguishable/indistinguishable particles in discrete/continuous systems with interacting/non-interacting and identical/nonidentical subsystems.

If a system consists of N non-interacting, nonidentical subsystems (e.g. the rotational, vibrational, and translational modes of a molecule): \[ Z_{tot} = \prod_{j=1}^N Z_j = Z_{rot}\,Z_{vib}\,Z_{transl} \]

If components are noninteracting and identical but distinguishable, then \[Z_{tot} = \left(Z_1\right)^N\]

If they are noninteracting, identical and indistinguishable \[Z_{tot} = \frac{1}{N!}\left(Z_1\right)^N\]

For discrete systems (e.g. two fermions in two-state system): \[ Z = \sum_{states} g_s \cdot \mathrm{e}^{- \beta E_s} \]

Generally, \[ Z_{tot} = \eta \iint \frac{\mathrm{d}^{3N}p \mathrm{d}^{3N}q}{h^{3N}} e^{-\beta H} \text{ , where } \eta = \begin{cases} 1 & \text{distinguishable }\\ \frac{1}{N!} & \text{indistinguishable.} \end{cases} \]
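As a quick sanity check (not part of the original notes), one can verify \(Z_{tot} = (Z_1)^N\) for noninteracting, identical-but-distinguishable subsystems by brute-force enumeration of the joint microstates. The two-level energies and the value of \(\beta\) below are made-up illustration values:

```python
# Sketch: check Z_tot = (Z_1)^N for N distinguishable, noninteracting
# two-level subsystems by enumerating every joint microstate.
import itertools
import math

beta = 1.0
levels = [0.0, 1.0]   # illustrative single-subsystem energy levels
N = 3                 # number of subsystems

Z1 = sum(math.exp(-beta * e) for e in levels)

# Total energy of a joint microstate is the sum of subsystem energies.
Z_tot = sum(math.exp(-beta * sum(config))
            for config in itertools.product(levels, repeat=N))

assert abs(Z_tot - Z1**N) < 1e-12
```

The factorization works precisely because the Hamiltonian is additive over subsystems, so the Boltzmann factor of a joint state splits into a product.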

Qualitative distinction between ensembles:

From Wikipedia: Microcanonical ensemble or NVE ensemble—a statistical ensemble where the total energy of the system and the number of particles in the system are each fixed to particular values; each member of the ensemble is required to have the same total energy and particle number. The system must remain totally isolated (unable to exchange energy or particles with its environment) in order to stay in statistical equilibrium.

Canonical ensemble or NVT ensemble—a statistical ensemble where the energy is not known exactly but the number of particles is fixed. In place of energy, the temperature is specified. The canonical ensemble is appropriate for describing a closed system which is in, or has been in, weak thermal contact with a heat bath. In order to be in statistical equilibrium the system must remain totally closed (unable to exchange particles with its environment), and may come into weak thermal contact with other systems that are described by ensembles with the same temperature.

Grand canonical ensemble or µVT ensemble—a statistical ensemble where neither the energy nor particle number are fixed. In their place, the temperature and chemical potential are specified. The grand canonical ensemble is appropriate for describing an open system: one which is in, or has been in, weak contact with a reservoir (thermal contact, chemical contact, radiative contact, electrical contact, etc.). The ensemble remains in statistical equilibrium if the system comes into weak contact with other systems that are described by ensembles with the same temperature and chemical potential.

Particle Distributions

$n(\epsilon)$ and $Z_1$:
\[ n(\epsilon)_{MB} = e^{-\beta (\epsilon - \mu)} \]
\[ n(\epsilon)_{FD} = \frac{1}{e^{\beta(\epsilon - \mu)} + 1} \]
\[ n(\epsilon)_{BE} = \frac{1}{e^{\beta(\epsilon - \mu)} - 1} \]
\[ Z_1 = \sum_{n=0}^{\infty} e^{-\beta\epsilon_n} \]
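A small numerical illustration (not from the notes): for the same positive value of \(x = \beta(\epsilon - \mu)\), the three occupation numbers obey \(n_{FD} < n_{MB} < n_{BE}\), and all three coincide in the dilute classical limit \(x \gg 1\):

```python
# Sketch: compare Maxwell-Boltzmann, Fermi-Dirac, and Bose-Einstein
# occupation numbers as functions of x = beta*(eps - mu).
import math

def n_MB(x): return math.exp(-x)
def n_FD(x): return 1.0 / (math.exp(x) + 1.0)
def n_BE(x): return 1.0 / (math.exp(x) - 1.0)

x = 0.5
assert n_FD(x) < n_MB(x) < n_BE(x)

# Classical limit: for x >> 1 the +-1 in the denominator is negligible.
x = 20.0
assert abs(n_FD(x) - n_MB(x)) / n_MB(x) < 1e-8
assert abs(n_BE(x) - n_MB(x)) / n_MB(x) < 1e-8
```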


Describe density of states for gas in a box:

From Wikipedia: For both massive and massless particles in a box, the states of a particle are enumerated by a set of quantum numbers [nxnynz]. The magnitude of the momentum is given by

\[p=\frac{h}{2L}\sqrt{n_x^2+n_y^2+n_z^2} \qquad \qquad n_x,n_y,n_z=1,2,3,\ldots \]

where h is Planck's constant and L is the length of a side of the box. Each possible state of a particle can be thought of as a point on a 3-dimensional grid of positive integers. The distance from the origin to any point will be

\[ n = \sqrt{n_x^2+n_y^2+n_z^2} = \frac{2Lp}{h}. \]
Suppose each set of quantum numbers specifies f states, where f is the number of internal degrees of freedom of the particle that can be altered by collision. For example, a spin-1/2 particle would have f = 2, one for each spin state. For large values of n, the number of states with magnitude of momentum less than or equal to p from the above equation is approximately

\[ g=\left(\frac{f}{8}\right) \frac{4}{3}\pi n^3 = \frac{4\pi f}{3} \left(\frac{Lp}{h}\right)^3 \]

which is just f times the volume of a sphere of radius n divided by eight, since only the octant with positive n is considered. Using a continuum approximation, the number of states with magnitude of momentum between p and p+dp is therefore

\[ dg=\frac{\pi}{2}~f n^2\,dn = \frac{4\pi fV}{h^3}~ p^2\,dp \]

where \(V=L^3\) is the volume of the box. Notice that in using this continuum approximation, the ability to characterize the low-energy states is lost, including the ground state where \(n_i=1\). For most cases this will not be a problem, but when considering Bose–Einstein condensation, in which a large portion of the gas is in or near the ground state, the ability to deal with low-energy states becomes important.
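The quality of the continuum approximation is easy to test directly (an illustrative check, not from the notes): count the positive-integer triples inside a sphere of radius R and compare with \(g = (f/8)(4/3)\pi n^3\) for f = 1. The boundary correction scales like the sphere's surface, so the relative error shrinks as R grows:

```python
# Sketch: exact lattice count vs. continuum estimate of the number of
# states with n = sqrt(nx^2+ny^2+nz^2) <= R (f = 1, illustrative R).
import math

R = 30
exact = sum(1
            for nx in range(1, R + 1)
            for ny in range(1, R + 1)
            for nz in range(1, R + 1)
            if nx*nx + ny*ny + nz*nz <= R*R)
approx = (1 / 8) * (4 / 3) * math.pi * R**3

# Already at R = 30 the two agree to within roughly 10%.
assert abs(exact - approx) / approx < 0.15
```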

Without using the continuum approximation, the number of particles with energy εi is given by

\[ N_i = \frac{g_i}{\Phi(\epsilon_i)} \]


where \(\! g_i\) is the degeneracy of state \(i\), and
\(\Phi(\epsilon_i) = \begin{cases} e^{\beta(\epsilon_i-\mu)}, & \mbox{for particles obeying Maxwell-Boltzmann statistics } \\ e^{\beta(\epsilon_i-\mu)}-1, & \mbox{for particles obeying Bose-Einstein statistics}\\ e^{\beta(\epsilon_i-\mu)}+1, & \mbox{for particles obeying Fermi-Dirac statistics}\\ \end{cases}\)
with β = 1/kT , Boltzmann's constant k, temperature T, and chemical potential μ .
(See Maxwell–Boltzmann statistics, Bose–Einstein statistics, and Fermi–Dirac statistics.)

Using the continuum approximation, the number of particles dNE  with energy between E  and E+dE  is:

\[dN_E= \frac{dg_E}{\Phi(E)} \]

where \(\!dg_E\)  is the number of states with energy between E  and E+dE .


Ratio of heat capacities for ideal gases:
\[ \gamma = \frac{C_P}{C_V} = \frac{c_P}{c_V} = \frac{H}{U}\]
\[ C_P = \frac{\gamma n R}{\gamma - 1} \qquad \mbox{and} \qquad C_V = \frac{n R}{\gamma - 1}\]
\[C_V = C_P - nR\]

Ideal Gas Equation:
\[PV= NkT = nRT,\] where \(nR = Nk\), so \(R= N_A k\).
\[E = \frac{3}{2} NkT\]
\[S= Nk\ln\left(\frac{V}{N\lambda^3}\right)+ \frac{5}{2} Nk, \qquad \text{where } \lambda =\left(\frac{2\pi \hbar^2}{mkT}\right)^{\frac{1}{2}}\]
Boltzmann's constant is given by the universal gas constant and Avogadro's number: \(k = R/N_A\).
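Plugging numbers into the Sackur-Tetrode entropy above makes the formula concrete (an illustrative check, not part of the original notes; the helium mass and STP conditions are assumptions for the example). The result per particle, \(S/Nk\), is known to come out near 15 for helium at room conditions:

```python
# Sketch: S/(Nk) = ln(V/(N lambda^3)) + 5/2 for helium-4 at 300 K, 1 atm.
import math

k = 1.380649e-23     # Boltzmann constant, J/K
h = 6.62607015e-34   # Planck constant, J s
m = 6.646e-27        # mass of a helium-4 atom, kg (assumed for the example)
T = 300.0            # K
P = 101325.0         # Pa

# Thermal de Broglie wavelength; (2 pi hbar^2 / m k T)^(1/2) = h / sqrt(2 pi m k T).
lam = h / math.sqrt(2 * math.pi * m * k * T)
v = k * T / P        # volume per particle V/N from the ideal gas law

S_per_Nk = math.log(v / lam**3) + 2.5

assert 14.5 < S_per_Nk < 16.0
```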

Van der Waals Gas Equation:
\[(P+ \frac{n^2a}{V^2})(V-nb) = nRT \]

From Wikipedia: The equation uses the following state variables: the pressure of the fluid p, total volume of the container containing the fluid V, number of moles n, and absolute temperature of the system T.

One form of the equation is

\[\left(p + \frac{a'}{v^2}\right)\left(v-b'\right) = kT\]



where

\[v = \frac{V}{N}\]

is the volume of the container shared between each particle (not the velocity of a particle),

\[N=N_A n\]

is the total number of particles, and

\[k = \frac{R}{N_A}\]

is Boltzmann's constant, given by the universal gas constant R and Avogadro's constant NA.

Extra parameters are introduced: a' is a measure for the attraction between the particles, and b' is the average volume excluded from v by a particle.

The equation can be cast into the better known form

\[\left(p + \frac{n^2 a}{V^2}\right)\left(V-nb\right) = nRT\]


where

\[a = N_A^2a' \]

is a measure of the attraction between the particles,

\[b = N_A b' \]

is the volume excluded by a mole of particles.

A careful distinction must be drawn between the volume available to a particle and the volume of a particle. In particular, in the first equation v refers to the empty space available per particle. That is, v is the volume V of the container divided by the total number nNA of particles. The parameter b', on the other hand, is proportional to the proper volume of a single particle - the volume bounded by the atomic radius. This is the volume to be subtracted from v because of the space taken up by one particle. In van der Waals' original derivation, given below, b' is four times the proper volume of the particle. Observe further that the pressure p goes to infinity when the container is completely filled with particles so that there is no void space left for the particles to move. This occurs when V = nb.

Van der Waals internal energy:
\[U = F+TS = \frac{3}{2}\,NkT-\frac{a'N^2}{V}.\]

See wiki for details.

Specific Heat Capacity:

The internal energy of a closed system changes either by adding heat to the system or by the system performing work. Written mathematically we have \[ \mathrm{d}U = \delta Q + \delta W \]. For work as a result of an increase of the system volume we may write, \[ \mathrm{d}U = \delta Q - P\mathrm{d}V \]. If the heat is added at constant volume, then the second term of this relation vanishes and one readily obtains \[\left(\frac{\partial U}{\partial T}\right)_V=\left(\frac{\partial Q}{\partial T}\right)_V=C_V\]. This defines the heat capacity at constant volume, CV. Another useful quantity is the heat capacity at constant pressure, CP. The enthalpy of the system is given by \[ H = U + PV \]. A small change in the enthalpy can be expressed as \[ \mathrm{d}H = \delta Q + V \mathrm{d}P \], and therefore, at constant pressure, we have \[\left(\frac{\partial H}{\partial T}\right)_P=\left(\frac{\partial Q}{\partial T}\right)_P=C_P\].
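The two definitions above can be checked numerically (illustrative sketch, not from the notes): for one mole of a monatomic ideal gas, \(U = \frac{3}{2}nRT\) and \(H = U + PV = \frac{5}{2}nRT\), so finite-difference derivatives should reproduce \(C_V = \frac{3}{2}nR\) and Mayer's relation \(C_P - C_V = nR\):

```python
# Sketch: C_V = (dU/dT)_V and C_P = (dH/dT)_P by central differences
# for one mole of a monatomic ideal gas.
R = 8.314462618   # gas constant, J/(mol K)
n = 1.0           # moles

def U(T): return 1.5 * n * R * T   # internal energy
def H(T): return 2.5 * n * R * T   # enthalpy U + PV = U + nRT

T, dT = 300.0, 1e-3
C_V = (U(T + dT) - U(T - dT)) / (2 * dT)
C_P = (H(T + dT) - H(T - dT)) / (2 * dT)

assert abs(C_V - 1.5 * n * R) < 1e-6
assert abs(C_P - C_V - n * R) < 1e-6   # Mayer's relation
```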

Equipartition breakdown:

From Wikipedia:

To illustrate the breakdown of equipartition, consider the average energy in a single (quantum) harmonic oscillator, which was discussed above for the classical case. Neglecting the irrelevant zero-point energy term, its quantum energy levels are given by En = nhν, where h is the Planck constant, ν is the fundamental frequency of the oscillator, and n is an integer. The probability of a given energy level being populated in the canonical ensemble is given by its Boltzmann factor

\[ P(E_{n}) = \frac{e^{-n\beta h\nu}}{Z}, \]

where β = 1/kBT and the denominator Z is the partition function, here a geometric series

\[ Z = \sum_{n=0}^{\infty} e^{-n\beta h\nu} = \frac{1}{1 - e^{-\beta h\nu}}. \]

Its average energy is given by

\[ \langle H \rangle = \sum_{n=0}^{\infty} E_{n} P(E_{n}) = \frac{1}{Z} \sum_{n=0}^{\infty} nh\nu \ e^{-n\beta h\nu} = -\frac{1}{Z} \frac{\partial Z}{\partial \beta} = -\frac{\partial \log Z}{\partial \beta}. \]

Substituting the formula for Z gives the final result

\[ \langle H \rangle = h\nu \frac{e^{-\beta h\nu}}{1 - e^{-\beta h\nu}}. \]

At high temperatures, when the thermal energy kBT is much greater than the spacing hν between energy levels, the exponential argument βhν is much less than one and the average energy becomes kBT, in agreement with the equipartition theorem. However, at low temperatures, when hν >> kBT, the average energy goes to zero: the higher-frequency energy levels are "frozen out". As another example, the internal excited electronic states of a hydrogen atom do not contribute to its specific heat as a gas at room temperature, since the thermal energy kBT (roughly 0.025 eV) is much smaller than the spacing between the lowest and next higher electronic energy levels (roughly 10 eV).
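Both the closed-form average energy and its high-temperature limit can be confirmed numerically (an illustrative sketch, not from the notes; the level spacing is set to 1 in arbitrary units):

```python
# Sketch: <H> = h*nu * exp(-beta h nu) / (1 - exp(-beta h nu)) vs. direct
# summation of the Boltzmann-weighted series, plus the limit <H> -> kT.
import math

hnu = 1.0   # energy-level spacing h*nu, arbitrary units

def avg_E(beta, nmax=2000):
    # Direct evaluation of sum_n E_n P(E_n) with E_n = n*h*nu.
    Z = sum(math.exp(-n * beta * hnu) for n in range(nmax))
    return sum(n * hnu * math.exp(-n * beta * hnu) for n in range(nmax)) / Z

def avg_E_closed(beta):
    x = math.exp(-beta * hnu)
    return hnu * x / (1 - x)

beta = 0.7
assert abs(avg_E(beta) - avg_E_closed(beta)) < 1e-9

# High-T limit: for beta*h*nu << 1, <H> approaches 1/beta = kT.
beta = 0.01
assert abs(avg_E_closed(beta) * beta - 1.0) < 0.01
```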

Similar considerations apply whenever the energy level spacing is much larger than the thermal energy. For example, this reasoning was used by Max Planck and Albert Einstein to resolve the ultraviolet catastrophe of blackbody radiation. The paradox arises because there are an infinite number of independent modes of the electromagnetic field in a closed container, each of which may be treated as a harmonic oscillator. If each electromagnetic mode were to have an average energy kBT, there would be an infinite amount of energy in the container. However, by the reasoning above, the average energy in the higher-frequency modes goes to zero as ν goes to infinity; moreover, Planck's law of black body radiation, which describes the experimental distribution of energy in the modes, follows from the same reasoning.

Maxwell relations:

From Wikipedia:

\[ \left(\frac{\partial \mu}{\partial P}\right)_{S, N} = \left(\frac{\partial V}{\partial N}\right)_{S, P} = \frac{\partial^2 H }{\partial P \partial N} \]

where μ is the chemical potential. Each equation can be re-expressed using the relationship

\[\left(\frac{\partial y}{\partial x}\right)_z = \frac{1}{\left(\frac{\partial x}{\partial y}\right)_z}.\]

Common uses with corresponding thermodynamic potentials: \begin{align} +\left(\frac{\partial T}{\partial V}\right)_S &= -\left(\frac{\partial P}{\partial S}\right)_V = \frac{\partial^2 U }{\partial S \partial V}\\ +\left(\frac{\partial T}{\partial P}\right)_S &= +\left(\frac{\partial V}{\partial S}\right)_P = \frac{\partial^2 H }{\partial S \partial P}\\ +\left(\frac{\partial S}{\partial V}\right)_T &= +\left(\frac{\partial P}{\partial T}\right)_V = -\frac{\partial^2 A }{\partial T \partial V}\\ -\left(\frac{\partial S}{\partial P}\right)_T &= +\left(\frac{\partial V}{\partial T}\right)_P = \frac{\partial^2 G }{\partial T \partial P} \end{align}
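One of these relations is easy to verify numerically for an ideal gas (an illustrative sketch, not from the notes): \((\partial S/\partial V)_T = (\partial P/\partial T)_V\), using the Sackur-Tetrode entropy and \(P = NkT/V\). Units with \(\hbar = m = k = 1\) are an assumption for the example:

```python
# Sketch: Maxwell relation (dS/dV)_T = (dP/dT)_V for an ideal gas,
# checked by central finite differences. Units: hbar = m = k = 1.
import math

N = 1000.0   # illustrative particle number

def S(T, V):
    lam3 = (2 * math.pi / T) ** 1.5   # lambda^3 with hbar = m = k = 1
    return N * (math.log(V / (N * lam3)) + 2.5)

def P(T, V):
    return N * T / V

T, V, d = 2.0, 500.0, 1e-5
dS_dV = (S(T, V + d) - S(T, V - d)) / (2 * d)
dP_dT = (P(T + d, V) - P(T - d, V)) / (2 * d)

# Both sides equal Nk/V for the ideal gas.
assert abs(dS_dV - dP_dT) < 1e-6 * abs(dP_dT)
```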

Partition Function (unabridged)

From Wikipedia: The canonical partition function is

\[ Z = \sum_{s} \mathrm{e}^{- \beta E_s}\] ,

where the "inverse temperature", β, is conventionally defined as

\[\beta \equiv \frac{1}{k_BT}\]

with kB denoting Boltzmann's constant. The exponential factor exp(−βEs) is known as the Boltzmann factor. (For a detailed derivation of this result, see canonical ensemble). In systems with multiple quantum states s sharing the same Es, it is said that the energy levels of the system are degenerate. In the case of degenerate energy levels, we can write the partition function in terms of the contribution from energy levels (indexed by j ) as follows:

\[ Z = \sum_{j} g_j \cdot \mathrm{e}^{- \beta E_j}\],

where gj is the degeneracy factor, or number of quantum states s which have the same energy level defined by Ej = Es.

The above treatment applies to quantum statistical mechanics, where a physical system inside a finite-sized box will typically have a discrete set of energy eigenstates, which we can use as the states s above. In classical statistical mechanics, it is not really correct to express the partition function as a sum of discrete terms, as we have done. In classical mechanics, the position and momentum variables of a particle can vary continuously, so the set of microstates is actually uncountable. In this case we must describe the partition function using an integral rather than a sum. For instance, the partition function of a gas of N identical classical particles is

\[Z=\frac{1}{N! h^{3N}} \int \, \exp[-\beta H(p_1 \cdots p_N, x_1 \cdots x_N)] \; d^3p_1 \cdots d^3p_N \, d^3x_1 \cdots d^3x_N \]


pi indicate particle momenta
xi indicate particle positions
d3 is a shorthand notation serving as a reminder that the pi and xi are vectors in three dimensional space, and
H is the classical Hamiltonian.

The reason for the factorial factor N! is discussed below. For simplicity, we will use the discrete form of the partition function in this article. Our results will apply equally well to the continuous form. The extra constant factor in the denominator was introduced because, unlike the discrete form, the continuous form shown above is not dimensionless. To make it into a dimensionless quantity, we must divide it by h3N where h is some quantity with units of action (usually taken to be Planck's constant).

In quantum mechanics, the partition function can be more formally written as a trace over the state space (which is independent of the choice of basis):

\[Z=\operatorname{tr} ( \mathrm{e}^{-\beta\hat{H}} )\] ,

where Ĥ is the quantum Hamiltonian operator. The exponential of an operator can be defined using the exponential power series. The classical form of Z is recovered when the trace is expressed in terms of coherent states [1] and when quantum-mechanical uncertainties in the position and momentum of a particle are regarded as negligible. Formally, one inserts under the trace for each degree of freedom the identity: \[ \boldsymbol{1} = \int |x,p\rangle\,\langle x,p|~\frac{ dx\, dp}{h} \] where \(|x,p\rangle\) is a normalised Gaussian wavepacket centered at position x and momentum p. Thus, \[ Z = \int \operatorname{tr} \left( \mathrm{e}^{-\beta\hat{H}} |x,p\rangle\,\langle x,p| \right) \frac{ dx\, dp}{h} = \int\langle x,p| \mathrm{e} ^{-\beta\hat{H}}|x,p\rangle ~\frac{ dx\, dp}{h} \] A coherent state is an approximate eigenstate of both operators \( \hat{x} \) and \( \hat{p} \), hence also of the Hamiltonian Ĥ, with errors of the size of the uncertainties. If Δx and Δp can be regarded as zero, the action of Ĥ reduces to multiplication by the classical Hamiltonian, and Z reduces to the classical configuration integral.

Meaning and significance

It may not be obvious why the partition function, as we have defined it above, is an important quantity. First, let us consider what goes into it. The partition function is a function of the temperature T and the microstate energies E1, E2, E3, etc. The microstate energies are determined by other thermodynamic variables, such as the number of particles and the volume, as well as microscopic quantities like the mass of the constituent particles. This dependence on microscopic variables is the central point of statistical mechanics. With a model of the microscopic constituents of a system, one can calculate the microstate energies, and thus the partition function, which will then allow us to calculate all the other thermodynamic properties of the system.

The partition function can be related to thermodynamic properties because it has a very important statistical meaning. The probability Ps that the system occupies microstate s is

\[P_s = \frac{1}{Z} \mathrm{e}^{- \beta E_s}. \]

The partition function thus plays the role of a normalizing constant (note that it does not depend on s), ensuring that the probabilities sum up to one:

\[\sum_s P_s = \frac{1}{Z} \sum_s \mathrm{e}^{- \beta E_s} = \frac{1}{Z} Z = 1. \]

This is the reason for calling Z the "partition function": it encodes how the probabilities are partitioned among the different microstates, based on their individual energies. The letter Z stands for the German word Zustandssumme, "sum over states". This notation also implies another important meaning of the partition function of a system: it counts the (weighted) number of states a system can occupy. Hence if all states are equally probable (equal energies) the partition function is the total number of possible states. Often this is the practical importance of Z.

Calculating the thermodynamic total energy

In order to demonstrate the usefulness of the partition function, let us calculate the thermodynamic value of the total energy. This is simply the expected value, or ensemble average for the energy, which is the sum of the microstate energies weighted by their probabilities:

\[\langle E \rangle = \sum_s E_s P_s = \frac{1}{Z} \sum_s E_s e^{- \beta E_s} = - \frac{1}{Z} \frac{\partial}{\partial \beta} Z(\beta, E_1, E_2, \cdots) = - \frac{\partial \ln Z}{\partial \beta} \]

or, equivalently,

\[\langle E\rangle = k_B T^2 \frac{\partial \ln Z}{\partial T}.\]
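The derivative identity \(\langle E\rangle = -\partial \ln Z/\partial\beta\) can be checked directly for a two-level system (an illustrative sketch, not from the notes; energies 0 and \(\epsilon\) are assumed for the example):

```python
# Sketch: <E> = -d(ln Z)/d(beta) via central finite difference for a
# two-level system, Z = 1 + exp(-beta*eps).
import math

eps, beta, d = 1.0, 0.8, 1e-6

def lnZ(b):
    return math.log(1 + math.exp(-b * eps))

E_numeric = -(lnZ(beta + d) - lnZ(beta - d)) / (2 * d)
E_exact = eps / (math.exp(beta * eps) + 1)   # direct Boltzmann average

assert abs(E_numeric - E_exact) < 1e-8
```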

Incidentally, one should note that if the microstate energies depend on a parameter λ in the manner

\[E_s = E_s^{(0)} + \lambda A_s \qquad \mbox{for all}\; s \]

then the expected value of A is

\[\langle A\rangle = \sum_s A_s P_s = -\frac{1}{\beta} \frac{\partial}{\partial\lambda} \ln Z(\beta,\lambda).\]

This provides us with a method for calculating the expected values of many microscopic quantities. We add the quantity artificially to the microstate energies (or, in the language of quantum mechanics, to the Hamiltonian), calculate the new partition function and expected value, and then set λ to zero in the final expression. This is analogous to the source field method used in the path integral formulation of quantum field theory.

Relation to thermodynamic variables

In this section, we will state the relationships between the partition function and the various thermodynamic parameters of the system. These results can be derived using the method of the previous section and the various thermodynamic relations.

As we have already seen, the thermodynamic energy is

\[\langle E \rangle = - \frac{\partial \ln Z}{\partial \beta}.\]

The variance in the energy (or "energy fluctuation") is

\[\langle (\Delta E)^2 \rangle \equiv \langle (E - \langle E\rangle)^2 \rangle = \frac{\partial^2 \ln Z}{\partial \beta^2}.\]

The heat capacity is

\[C_v = \frac{\partial \langle E\rangle}{\partial T} = \frac{1}{k_B T^2} \langle (\Delta E)^2 \rangle.\]
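This fluctuation-dissipation relation can be verified for the same two-level toy system (an illustrative sketch, not from the notes; units with \(k_B = 1\) are assumed):

```python
# Sketch: C_v = <(Delta E)^2> / (k T^2) for a two-level system with
# energies 0 and eps, computing both sides independently. Units: k_B = 1.
import math

eps, T = 1.0, 0.9
beta = 1.0 / T

# Boltzmann probability of the excited state, and the energy variance.
p1 = math.exp(-beta * eps) / (1 + math.exp(-beta * eps))
E_avg = eps * p1
E2_avg = eps**2 * p1
var_E = E2_avg - E_avg**2

# C_v from the temperature derivative of <E>(T).
def E_of_T(temp):
    b = 1.0 / temp
    return eps * math.exp(-b * eps) / (1 + math.exp(-b * eps))

d = 1e-5
C_v = (E_of_T(T + d) - E_of_T(T - d)) / (2 * d)

assert abs(C_v - var_E / T**2) < 1e-6
```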

The entropy is

\[S \equiv -k_B\sum_s P_s\ln P_s= k_B (\ln Z + \beta \langle E\rangle)=\frac{\partial}{\partial T}(k_B T \ln Z) =-\frac{\partial A}{\partial T}\]

where A is the Helmholtz free energy defined as \(A = U - TS\), where \(U = \langle E\rangle\) is the total energy and S is the entropy, so that

\[A = \langle E\rangle -TS= - k_B T \ln Z.\]

Partition functions of subsystems

Suppose a system is subdivided into N sub-systems with negligible interaction energy, that is, we can assume the particles are essentially non-interacting. If the partition functions of the sub-systems are ζ1, ζ2, ..., ζN, then the partition function of the entire system is the product of the individual partition functions:

\[Z =\prod_{j=1}^{N} \zeta_j.\]

If the sub-systems have the same physical properties, then their partition functions are equal, ζ1 = ζ2 = ... = ζ, in which case

\[Z = \zeta^N.\]

However, there is a well-known exception to this rule. If the sub-systems are actually identical particles, in the quantum mechanical sense that they are impossible to distinguish even in principle, the total partition function must be divided by N! (N factorial):

\[Z = \frac{\zeta^N}{N!}.\]

This is to ensure that we do not "over-count" the number of microstates. While this may seem like a strange requirement, it is actually necessary to preserve the existence of a thermodynamic limit for such systems. This is known as the Gibbs paradox.
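A tiny counting example makes the N! explicit (an illustrative sketch, not from the notes; two particles on two levels with energies 0 and \(\epsilon\) are assumed). For two particles one can show \(\zeta^2 = Z_{boson} + Z_{fermion}\) exactly, so \(\zeta^2/2!\) sits between the true symmetrized partition functions:

```python
# Sketch: exact boson/fermion partition functions for 2 particles on
# 2 levels (energies 0, eps) vs. the Gibbs-corrected zeta^2/2!.
import math

eps, beta = 1.0, 1.0
x = math.exp(-beta * eps)

zeta = 1 + x                 # single-particle partition function
Z_boson = 1 + x + x * x      # occupations (2,0), (1,1), (0,2)
Z_fermion = x                # Pauli exclusion: only (1,1) allowed
Z_gibbs = zeta**2 / math.factorial(2)

# zeta^2 = 1 + 2x + x^2 = Z_boson + Z_fermion, term by term.
assert abs(zeta**2 - (Z_boson + Z_fermion)) < 1e-12
assert Z_fermion < Z_gibbs < Z_boson
```

The 1/N! correction is exact only when multiple occupancy of a level is negligible, which is why it suffices in the dilute classical regime.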

Grand canonical partition function

We can define a grand canonical partition function for a grand canonical ensemble, which describes the statistics of a constant-volume system that can exchange both heat and particles with a reservoir. The reservoir has a constant temperature T, and a chemical potential μ.

The grand canonical partition function, denoted by \(\mathcal{Z}\), is the following sum over microstates \[ \mathcal{Z}(\mu, V, T) = \sum_{i} \exp((N_i\mu - E_i)/k_B T). \] Here, each microstate is labelled by \(i\), and has total particle number \(N_i\) and total energy \(E_i\). This partition function is closely related to the Grand potential, \(\Phi_{\rm G}\), by the relation \[ -k_B T \ln \mathcal{Z} = \Phi_{\rm G} = \langle E \rangle - TS - \mu \langle N\rangle. \] This can be contrasted to the canonical partition function above, which is related instead to the Helmholtz free energy.

It is important to note that the number of microstates in the grand canonical ensemble may be much larger than in the canonical ensemble, since here we consider not only variations in energy but also in particle number. Again, the utility of the grand canonical partition function is that it is related to the probability that the system is in state \(i\): \[ p_i = \frac{1}{\mathcal Z} \exp((N_i\mu - E_i)/k_B T) .\]

An important application of the grand canonical ensemble is in deriving exactly the statistics of a non-interacting many-body quantum gas (Fermi-Dirac statistics for fermions, Bose-Einstein statistics for bosons), however it is much more generally applicable than that. The grand canonical ensemble may also be used to describe classical systems, or even interacting quantum gases.
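The simplest instance of that derivation can be checked by hand (an illustrative sketch, not from the notes): for a single fermionic level of energy \(\epsilon\), the grand canonical sum has only two microstates, and \(\langle N\rangle\) reproduces the Fermi-Dirac occupation. Units with \(k_B = 1\) are assumed:

```python
# Sketch: grand canonical ensemble for one fermionic level. Microstates:
# (N=0, E=0) and (N=1, E=eps). <n> must equal the Fermi-Dirac function.
import math

eps, mu, T = 1.0, 0.3, 0.5
beta = 1.0 / T

w0 = math.exp(beta * (0 * mu - 0.0))    # empty level
w1 = math.exp(beta * (1 * mu - eps))    # occupied level
Zg = w0 + w1                            # grand canonical partition function

n_avg = (0 * w0 + 1 * w1) / Zg
n_FD = 1 / (math.exp(beta * (eps - mu)) + 1)

assert abs(n_avg - n_FD) < 1e-12
```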

General Identities

Table of Thermodynamics Equations


Given: \[n_i = \frac{1}{e^{\beta (\epsilon_i -\mu)} + \eta} \\ S= \sum_{i} S_i = \sum_{i} -kG_i \left[n_i \ln(n_i) + \eta(1-\eta n_i)\ln(1-\eta n_i)\right] \\ N = \sum_{i} n_i G_i \text{ and } E = \sum_{i} \epsilon_i n_i G_i \\ \Phi_{G.-C.} = E- TS- \mu N =-kT\ln(Z) \] Find: \[Z_{FD} = \prod_{i} \left( 1 + e^{-\beta (\epsilon_i - \mu)}\right)^{G_i} \]\[\ Z_{BE} = \prod_{i} \left( \frac{1}{1 - e^{-\beta (\epsilon_i - \mu)}}\right)^{G_i} =\prod_{i} \left(\sum_{n=0}^{\infty} e^{-\beta (\epsilon_i - \mu) n}\right)^{G_i} \] in the grand canonical ensemble.

The method here is to rewrite $S$ such that we get the LHS of $\Phi_{G.-C.}$ proportional to a logarithm of the partition function.

Note for Lim #1009: A better solution may be found here (courtesy of Elisabeth Mills).



StM Course Notes by Prof. E. D'Hoker (with bookmarks).

Nobel Lecture by Ken Wilson.

Statistical Physics of Particles by Prof. M. Kardar.

Problems and Solutions for Statistical Mechanics by Yung-Kuo Lim.


Willard Gibbs, Ludwig Boltzmann, Max Planck, Enrico Fermi


While doing problems in research and learning, you waste 90% of the time being confused and going in the wrong directions. In fact, this is the time when you learn the most, so it is not wasted. The only time when you are not confused is when you are writing a paper about what you are no longer confused about. -- Eric D'Hoker


  1. J. R. Klauder, B.-S. Skagerstam, Coherent States --- Applications in Physics and Mathematical Physics, World Scientific, 1985, p. 71-73.