Formula Sheet

From PhysWiki

Here is a list of important equations that have repeatedly shown up on UCLA's comprehensive exams. Feel free to add further equations for good practice (often, an equation can be copied from the corresponding Wikipedia article). You may also link to good articles on Wikipedia or upload your own questions and solutions.


QM

Stern-Gerlach Experiment Animation

Fundamentals

Hermitian Operator Properties
 \[
H^\dagger = H \]\[ \langle a| H |b\rangle = \langle b | H |a \rangle^* \]\[
H = \sum_{n} \lambda_n |n \rangle \langle n| \text{ , with real eigenvalues } \lambda_n \text{ (spectral decomposition in an orthonormal eigenbasis)}\]

Define degeneracy of observable matrices vs. nondegeneracy:

Degeneracy occurs when two or more different (orthogonal) eigenstates share the same eigenvalue. Non-degeneracy means that each eigenvalue in \[H|n\rangle = E_n |n\rangle\] belongs to a single eigenstate, i.e. that each energy eigenvalue has only one corresponding eigenfunction.

Heisenberg picture (properties, caveats):

In the Heisenberg picture the operators carry the time dependence. An operator $A$ with no explicit time dependence obeys the Heisenberg equation of motion \[ \frac{d A}{dt} = \frac{1}{i\hbar} [A, H], \] so, e.g., $x(t)$ is constant only if $[x, H] = 0$.

Generator of Translation:

\[ e^{\frac{i pa }{\hbar }} x e^{\frac{-i pa }{\hbar }} = x+a \]

Standard Boundary Conditions for $\psi$:

1. $\psi$ is continuous over a boundary

2. $\frac{\partial \psi}{\partial x}$ is continuous over a boundary, except at delta-function potentials, where it has a finite jump; for a delta boundary $V(x) = \lambda\, \delta(x-a)$, split up $\psi(x)$ into regions I ($x<a$) and II ($x>a$) and take the moment of the SE across the boundary: \[\lim_{\epsilon \to 0} \int_{a-\epsilon}^{a+\epsilon} \left( \frac{\partial^2 \psi(x) }{\partial x^2} - \frac{2m\lambda}{\hbar^2} \delta (x-a) \psi(x) \right) dx = \lim_{\epsilon \to 0} \int_{a-\epsilon}^{a+\epsilon} \frac{-2mE}{\hbar^2} \psi \, dx \Rightarrow \\ \Rightarrow \frac{\partial \psi_{II}}{\partial x} \Big|_{x=a} -\frac{\partial \psi_{I}}{\partial x} \Big|_{x=a} = \frac{2m\lambda}{\hbar^2} \psi(a)\]

Schroedinger Equation in a Central Potential:

\[ \left[ -\frac{\hbar^2}{2m}\left( \frac{1}{r^2} \frac{\partial}{\partial r} \left(r^2 \frac{\partial}{\partial r}\right) - \frac{\vec{L}^2}{\hbar^2 r^2} \right) + V(r) \right] \psi = E \psi, \] separable as $\psi = R(r)\, Y_\ell^m(\theta, \phi)$.

SHO

Ladder operators:
 \[
      x = \sqrt{\frac{\hbar}{2m \omega}}  ( a^\dagger{} + a) \\
     \hat{p} = i\sqrt{\frac{\hbar m\omega}{2}} ( a^\dagger{} - a) \\
      a^\dagger = \sqrt{\frac{m \omega}{2 \hbar }} ( x - i \frac{\hat{p}}{m \omega}) \\
      a  = \sqrt{\frac{m \omega }{2 \hbar }} (x + i \frac{\hat{p}}{m \omega})\\
\text{Note that }a\text{ and } a^\dagger\text{ are Hermitian conjugates of each other since } x \text{ and } p \text{ are Hermitian.}\\
a^\dagger |n\rangle = \sqrt{n+1} |n+1\rangle\\
a |n\rangle = \sqrt{n}|n-1\rangle\\
[a, a^\dagger] |n\rangle = 1 |n\rangle \\
[a^\dagger, a] |n\rangle=- 1 |n \rangle\\
a^\dagger a |n\rangle = N |n \rangle = n |n\rangle,\text{ where N is the number operator } N = a^\dagger a\\
\text{Together with the identity }[A, BC] = B [A, C] + [A, B] C \text{ we find: } \\ 
[N, a] = -a \\ 
[N, a^\dagger] = a^\dagger\\\\
H = \hbar \omega ( a^\dagger a + \frac{1}{2}) = \hbar \omega ( N + \frac{1}{2})\\
  \]
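The ladder-operator algebra above can be checked numerically on a truncated Fock space. A minimal sketch (assuming numpy is available; the truncation size nmax is arbitrary) that builds a and a† as matrices and verifies [a, a†] = 1 and H = ħω(N + 1/2) away from the truncation edge:

 import numpy as np
 nmax = 20                                  # size of the truncated Fock space (arbitrary)
 n = np.arange(nmax)
 a = np.diag(np.sqrt(n[1:]), k=1)           # a|n> = sqrt(n)|n-1>
 adag = a.conj().T                          # a^dagger|n> = sqrt(n+1)|n+1>
 N = adag @ a                               # number operator
 comm = a @ adag - adag @ a                 # = identity, except in the last row/column (truncation artifact)
 print(np.allclose(comm[:-1, :-1], np.eye(nmax - 1)))        # True
 hbar, omega = 1.0, 1.0                     # units chosen for illustration
 H = hbar * omega * (N + 0.5 * np.eye(nmax))
 print(np.allclose(np.diag(H), hbar * omega * (n + 0.5)))    # E_n = hbar*omega*(n + 1/2)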

Spin Systems

Pauli Matrices:
\[
\sigma_x= \begin{pmatrix}
0&1\\
1&0
\end{pmatrix}
\\
\sigma_y= \begin{pmatrix}
0&-i\\
i&0
\end{pmatrix}
\\
\sigma_z = \begin{pmatrix} 
1&0\\
0&-1
\end{pmatrix}
\\\\
\]

Check:
\[ \det (\sigma_i) = -1,\\ \mathrm{Tr} (\sigma_i) = 0 ,\\ \sigma_i |i, \pm \rangle= \pm 1\, |i, \pm \rangle. \]

Eigenstates of Pauli matrix operators:
\[
\begin{array}{lclc}
|+\rangle=                                          & \begin{bmatrix}{1}\\{0}\end{bmatrix}, & |-\rangle=                                          & \begin{bmatrix}{0}\\{1}\end{bmatrix}.\\
|x,+\rangle=\displaystyle\frac{1}{\sqrt{2}}\!\!\!\!\! & \begin{bmatrix}{1}\\{1}\end{bmatrix}, & |x,-\rangle=\displaystyle\frac{1}{\sqrt{2}}\!\!\!\!\! & \begin{bmatrix}{1}\\{-1}\end{bmatrix}, \\
|y,+\rangle=\displaystyle\frac{1}{\sqrt{2}}\!\!\!\!\! & \begin{bmatrix}{1}\\{i}\end{bmatrix}, & |y,-\rangle=\displaystyle\frac{1}{\sqrt{2}}\!\!\!\!\! & \begin{bmatrix}{1}\\{-i}\end{bmatrix}, \\
\end{array} \]

For a good demonstration of the Stern-Gerlach experiment, go to Wikipedia

Using Pauli matrices and $S_z$ eigenstates, operate $\sigma_{i}$ on $|\pm\rangle$:
\[
\sigma_{z}|\pm\rangle = \pm |\pm\rangle \\
\sigma_{x}|\pm\rangle =  |\mp\rangle \\
\sigma_{y}|\pm\rangle = \pm i |\mp\rangle
\]
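A quick numerical sanity check of the determinant, trace, eigenvalue and action properties listed above (a minimal sketch, assuming numpy is available):

 import numpy as np
 sx = np.array([[0, 1], [1, 0]], dtype=complex)
 sy = np.array([[0, -1j], [1j, 0]])
 sz = np.array([[1, 0], [0, -1]], dtype=complex)
 up, dn = np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)
 for s in (sx, sy, sz):
     assert np.isclose(np.linalg.det(s), -1) and np.isclose(np.trace(s), 0)
     assert np.allclose(np.linalg.eigvalsh(s), [-1, 1])                  # eigenvalues -1 and +1
 assert np.allclose(sx @ up, dn) and np.allclose(sx @ dn, up)            # sigma_x |+-> = |-+>
 assert np.allclose(sy @ up, 1j*dn) and np.allclose(sy @ dn, -1j*up)     # sigma_y |+-> = +-i |-+>
 assert np.allclose(sz @ up, up) and np.allclose(sz @ dn, -dn)           # sigma_z |+-> = +-|+->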


For $(\vec{S}\cdot\hat{n}) |n; \pm \rangle = \pm \frac{\hbar}{2} |n; \pm \rangle $, express $| n; \pm \rangle$ in the $|z; \pm\rangle$ basis:
\[ 
|n ; +\rangle = \cos(\frac{\theta}{2}) e^{-i\phi} |+\rangle + \sin(\frac{\theta}{2}) |-\rangle =\begin{pmatrix} 
\cos(\frac{\theta}{2}) e^{-i\phi} \\
\sin(\frac{\theta}{2})
\end{pmatrix} \]\[
|n; - \rangle = \sin(\frac{\theta}{2}) e^{-i\phi}  |+ \rangle - \cos(\frac{\theta}{2}) |-\rangle =\begin{pmatrix} 
\sin(\frac{\theta}{2}) e^{-i\phi} \\
-\cos(\frac{\theta}{2})
\end{pmatrix} 
\]
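These spinors can be verified numerically for an arbitrary direction; a minimal sketch assuming numpy, with θ and φ chosen arbitrarily for illustration:

 import numpy as np
 theta, phi = 0.7, 1.9                      # arbitrary polar and azimuthal angles
 sx = np.array([[0, 1], [1, 0]], dtype=complex)
 sy = np.array([[0, -1j], [1j, 0]])
 sz = np.array([[1, 0], [0, -1]], dtype=complex)
 sn = np.sin(theta)*np.cos(phi)*sx + np.sin(theta)*np.sin(phi)*sy + np.cos(theta)*sz
 ket_plus  = np.array([np.cos(theta/2)*np.exp(-1j*phi), np.sin(theta/2)])
 ket_minus = np.array([np.sin(theta/2)*np.exp(-1j*phi), -np.cos(theta/2)])
 print(np.allclose(sn @ ket_plus,  +ket_plus))    # True: (sigma.n)|n;+> = +|n;+>
 print(np.allclose(sn @ ket_minus, -ket_minus))   # True: (sigma.n)|n;-> = -|n;->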
Interaction picture wavefunction and operator:

An operator and a state in the interaction picture are defined as \[ A_I(t) = e^{iH_0 t/\hbar}\, A_S\, e^{-iH_0 t/\hbar}, \qquad |\psi_I(t)\rangle = e^{iH_0 t/\hbar} |\psi_S(t)\rangle, \] where $H_0$ is the unperturbed (solvable) part of the Hamiltonian and the subscript $S$ denotes the Schrödinger picture.

Hamiltonian for spin-orbit interaction (the electron's magnetic moment coupling to the magnetic field it sees in its rest frame):
 \[
H_{SO} = - \vec{\mu} \cdot \vec{B} , \qquad \vec{\mu} = -g \mu_\mathrm{B} \frac{\vec{S}}{\hbar} \;\Rightarrow\; H_{SO} = \frac{g \mu_\mathrm{B}}{2} \mathrm{B_i} \sigma_{i}= g \frac{e\hbar}{4 m_\mathrm{e}} \mathrm{B_i} \sigma_{i}
  \]
Wigner-Eckart Theorem selection rules (a worked example is given under Perturbation Theory below):

DO A PROBLEM ON THIS.

Selection rules: \[ \langle n'j'm' | T_q^k | njm\rangle \ne 0 \] if \[ m' = q+m \text{ and }|j-k| \le j' \le j+k\]




Hydrogen Atom

Degeneracy of H-spectrum:

The energy eigenstates of the electron in the Hydrogen atom are $(2l+1)$-fold degenerate: Since $H|nlm\rangle = E_{n, l} |nlm\rangle$ and $m \in [-l, ..., 0, ..., l]$, each $E_{n, l}$ can be produced by operation on $2l+1$ different states. Note that the $l=0$ state is not degenerate. (For the pure Coulomb potential the energies depend only on $n$, so including the accidental $l$-degeneracy the total degeneracy is $n^2$, ignoring spin.)

Landau Levels

Hamiltonian for system with magnetic and electric field along $\hat{\mathbf{z}}$:

\[\hat{H}=\frac{1}{2m}(\hat{\mathbf{p}}-q\hat{\mathbf{A}}/c)^2 + q \phi.\]

Perturbation Theory

Wigner-Eckart Theorem (W.E.T.) selection rules:

\[ \langle n'j'm' | T_q^k | njm\rangle \ne 0 \] if \[ m' = q+m \text{ and }|j-k| \le j' \le j+k\]

Use W.E.T. to find \(\langle njm|x|njm\rangle\) for Hydrogen:

From wikipedia: "This matrix element is the expectation value of a Cartesian operator in a spherically-symmetric hydrogen-atom-eigenstate basis, which is a nontrivial problem. However, using the Wigner–Eckart theorem simplifies the problem. (In fact, we could obtain the solution quickly using parity, although a slightly longer route will be taken.)

We know that x is one component of r, which is a vector. Vectors are rank-1 tensors, so x is some linear combination of \(T^1_q\) for q = -1, 0, 1. In fact,

\[x=\frac{T_{-1}^{1}-T^1_1}{\sqrt{2}}\,,\]

where we defined the spherical tensors \(T^1_0 = z\) and \[T^1_{\pm1}=\mp \frac{x \pm i y}{\sqrt{2}}\] (the pre-factors have to be chosen according to the definition of a spherical tensor of rank k; hence, the \(T^1_q\) are only proportional to the ladder operators). Therefore \[\langle njm|x|n'j'm'\rangle = \langle njm|\frac{T_{-1}^{1}-T^1_1}{\sqrt{2}}|n'j'm'\rangle = \frac{1}{\sqrt{2}}\langle nj||T^1||n'j'\rangle (C^{jm}_{1(-1)j'm'}-C^{jm}_{11j'm'})\] The above expression gives us the matrix element for x in the \(|njm\rangle\) basis. To find the expectation value, we set n′ = n, j′ = j, and m′ = m. The selection rule for m′ and m is \(m\pm1=m'\) for the \(T_{\mp1}^{(1)}\) spherical tensors. As we have m′ = m, this makes the Clebsch-Gordan coefficients zero, so the expectation value is equal to zero."
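The vanishing of the relevant Clebsch-Gordan coefficients for m′ = m can also be checked symbolically; a small sketch assuming sympy is available (the values of j and m are arbitrary):

 from sympy import S
 from sympy.physics.quantum.cg import CG
 j, m = S(3)/2, S(1)/2                       # arbitrary choice of j and m
 for q in (-1, +1):                           # <1 q; j m | j m> vanishes unless q + m = m
     print(CG(1, q, j, m, j, m).doit())       # prints 0 twice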

Non-degenerate t-indep. PT, first order energy and state:

\[E_n^{(1)} = \langle n|H_1|n\rangle\]\[ |n\rangle^{(1)} = \sum_{m \ne n} \frac{\langle m|H_1|n\rangle}{E_n-E_m} |m\rangle\]

Non-deg. t-indep. PT, second order energy:

\[ E_n^{(2)} = \sum_{n \ne m} \frac{|\langle m|H_1|n\rangle|^2}{E_n -E_m} \]

Degenerate t-indep. PT:

Diagonalize the matrix of the perturbation, $\langle n_i^{(0)}|H_1|n_j^{(0)}\rangle$, within the degenerate subspace; its eigenvalues are the first-order energy shifts and its eigenvectors are the correct zeroth-order states.

Time-dep. PT:

If the unperturbed system is in eigenstate \(|j\rangle\) at time \(t = 0\,\), its state at subsequent times varies only by a phase (we are following the Schrödinger picture, where state vectors evolve in time and operators are constant):

\[ |j(t)\rangle = e^{-iE_j t /\hbar} |j\rangle \]

We now introduce a time-dependent perturbing Hamiltonian \(V(t)\,\). The Hamiltonian of the perturbed system is

\[ H = H_0 + V(t) \,\]

Let \(|\psi(t)\rangle\) denote the quantum state of the perturbed system at time t. It obeys the time-dependent Schrödinger equation,

\[ H |\psi(t)\rangle = i\hbar \frac{\partial}{\partial t} |\psi(t)\rangle\]

The quantum state at each instant can be expressed as a linear combination of the eigenbasis \({|n\rangle}\). We can write the linear combination as

\[ |\psi(t)\rangle = \sum_n c_n(t) e^{- i E_n t / \hbar} |n\rangle \]

where the \(c_{n}(t)\,\)s are undetermined complex functions of t which we will refer to as amplitudes (strictly speaking, they are the amplitudes in the Dirac picture). We have explicitly extracted the exponential phase factors \(\exp(- i E_n t / \hbar)\) on the right hand side. This is only a matter of convention, and may be done without loss of generality. The reason we go to this trouble is that when the system starts in the state \(|j\rangle\) and no perturbation is present, the amplitudes have the convenient property that, for all t, cj(t) = 1 and \(c_n (t) = 0\,\) if \(n\ne j\).

\[c_n^{(1)}(t) = \frac{-i}{\hbar} \sum_k \int_0^t dt' \; \langle {n}|V(t')|k\rangle \, c_k(0) \, e^{-i(E_k - E_n)t'/\hbar} \] to first order in the perturbation; for a constant perturbation $V = H_1$ switched on at $t = 0$, the time integral can be carried out explicitly.

Fermi's Golden Rule

In quantum physics, Fermi's golden rule is a way to calculate the transition rate (probability of transition per unit time) from one energy eigenstate of a quantum system into a continuum of energy eigenstates, due to a perturbation.

We consider the system to begin in an eigenstate, \(\scriptstyle | i\rangle\), of a given Hamiltonian, \(\scriptstyle H_0 \). We consider the effect of a (possibly time-dependent) perturbing Hamiltonian, \(\scriptstyle H'\). If \(\scriptstyle H'\) is time-independent, the system goes only into those states in the continuum that have the same energy as the initial state. If \(\scriptstyle H'\) is oscillating as a function of time with an angular frequency \(\scriptstyle \omega\), the transition is into states with energies that differ by \(\scriptstyle \hbar\omega\) from the energy of the initial state. In both cases, the one-to-many transition probability per unit of time from the state \(\scriptstyle| i \rangle\) to a set of final states \(\scriptstyle| f\rangle\) is given, to first order in the perturbation, by \[ T_{i \rightarrow f}= \frac{2 \pi} {\hbar} \left | \langle f|H'|i \rangle \right |^{2} \rho,\] where \(\scriptstyle \rho \) is the density of final states (number of states per unit of energy) and \(\scriptstyle \langle f|H'|i \rangle \) is the matrix element (in bra-ket notation) of the perturbation \(\scriptstyle H'\) between the final and initial states.This transition probability is also called decay probability and is related to mean lifetime.

Fermi's golden rule is valid when the initial state has not been significantly depleted by scattering into the final states.

Scattering

General Wavefunction for Scattering Problem:

\[ \psi(\mathbf{r}) = A[e^{ikz} + f(\theta)\frac{e^{ikr}}{r}] \;, \]

Born Approximation $f(\theta)$:
 \[
 f(\theta)= -\frac{m}{2\pi \hbar^2} \int_V e^{i(\mathbf{k}'-\mathbf{k})\cdot\mathbf{r}} V(\mathbf{r})\, d^3\mathbf{r}
  \]

where $\mathbf{k}$ is the incident and $\mathbf{k}'$ the scattered wavevector, so that $\mathbf{q} = \mathbf{k}'-\mathbf{k}$ is the momentum transfer.

Born Approximation $f(\theta)$ for central potential:
 \[
 f(\theta)= -\frac{2m}{\hbar^2} \int_0^\infty V(r) \frac{\sin(qr)}{q} r\, dr
  \]

where $q = |\mathbf{k}'-\mathbf{k}| = 2k\sin(\theta/2)$ is the magnitude of the momentum transfer.


Unified coordinate wavefunction for scattering:
 \[
\psi(r, \theta) = A \sum_{\ell=0}^{\infty} i^\ell (2\ell+1)[j_\ell(kr) + ik a_\ell h_{\ell}^{(1)}(kr)]P_{\ell}(\cos\theta)
\\ \text{ This wavefunction is zero at the boundary of a hard sphere } r=a \text{, allowing multiplication by a Legendre polynomial}\\ \text{ and consequent calculation of the partial amplitude as follows:}\\
a_\ell = \frac{-j_\ell(k a)} {ik h_{\ell}^{(1)}(ka)}
  \]
Scattering Amplitude $f(\theta)$ in partial wave expansion:

In the partial wave expansion the scattering amplitude is represented as a sum over the partial waves, \[f(\theta)=\sum_{\ell=0}^\infty (2\ell+1) a_\ell(k) P_\ell(\cos(\theta)) \;,\] where \(a_\ell(k)\) is the partial amplitude and \(P_\ell(\cos(\theta))\) is the Legendre polynomial.

Partial amplitude $a_\ell(k)$

\[a_\ell = \frac{e^{2i\delta_\ell}-1}{2ik} = \frac{e^{i\delta_\ell} \sin\delta_\ell}{k} = \frac{1}{k\cot\delta_\ell-ik} \;.\]

Bessel Functions $j_0$ and $j_1$:

\[ j_0(x) = \frac{\sin(x)}{x} \] \[ j_1(x) = \frac{\sin(x)}{x^2} - \frac{\cos(x)}{x} \]


General form of Bessel function and Neumann function and derivatives thereof:

\[j_l(x) = (-x)^l \left(\frac{1}{x}\frac{d}{dx}\right)^l\,\frac{\sin(x)}{x} ,\] \[n_l(x) = -(-x)^l \left(\frac{1}{x}\frac{d}{dx}\right)^l\,\frac{\cos(x)}{x}.\]

\[\left( \frac{1}{x} \frac{d}{dx} \right)^m \left[ x^\ell J_{\ell} (x) \right] = x^{\ell - m} J_{\ell - m} (x)\] \[\left( \frac{1}{x} \frac{d}{dx} \right)^m \left[ \frac{J_\ell (x)}{x^\ell} \right] = (-1)^m \frac{J_{\ell + m} (x)}{x^{\ell + m}}.\] where J also denotes Y, H(1), or H(2).

Hence, \[\frac{d J_{\ell}}{dx} = \frac{\ell J_{\ell}}{x} - J_{\ell+1}\]
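These formulas are easy to sanity-check numerically; a minimal sketch assuming scipy is available, using the spherical Bessel functions (for which the same derivative recurrence holds):

 import numpy as np
 from scipy.special import spherical_jn
 x = np.linspace(0.1, 10.0, 50)
 print(np.allclose(spherical_jn(0, x), np.sin(x)/x))                     # j_0
 print(np.allclose(spherical_jn(1, x), np.sin(x)/x**2 - np.cos(x)/x))    # j_1
 l = 2                                       # check  j_l' = l j_l / x - j_{l+1}
 print(np.allclose(spherical_jn(l, x, derivative=True),
                   l*spherical_jn(l, x)/x - spherical_jn(l+1, x)))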


Exercises

Find $E_n^{(1, 2)}$ for $V= cx$ in SHO using P.T. and compare to the exact solution:

\[ E_n^{(1)} = c \sqrt{\frac{\hbar}{2m\omega}} \langle n^{(0)} | a + a^{\dagger} | n^{(0)} \rangle = 0 \] \[ E_n^{(2)} = - \frac{c^2}{2m \omega^2}\] \[\text{Lo and behold, this is exact:} \] \[H= \frac{p^2}{2m} + \frac{1}{2} m \omega^2 x^2 + cx = \frac{p^2}{2m} + \frac{1}{2} m \omega^2 \left(x + \frac{c}{m \omega^2} \right)^2 - \frac{c^2}{2m \omega^2}\]
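The perturbative result can be compared against direct diagonalization of a truncated Hamiltonian; a minimal numerical sketch assuming numpy, with ħ = m = ω = 1 and the value of c chosen for illustration:

 import numpy as np
 nmax, c = 60, 0.1                           # truncation size and perturbation strength (arbitrary)
 n = np.arange(nmax)
 a = np.diag(np.sqrt(n[1:]), k=1)
 x = (a + a.T) / np.sqrt(2)                  # x = sqrt(hbar/2m omega)(a + a^dagger) in these units
 H = np.diag(n + 0.5) + c * x                # H_0 + c x
 print(np.linalg.eigvalsh(H)[0])             # numerically exact ground-state energy
 print(0.5 - c**2 / 2)                       # 1/2 - c^2/(2 m omega^2): agrees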

Stark Effect, i.e. polarization of H-atom by E-field:

\[H_1 = -qEz\]

Compare QM Course Notes.


EM

Electrostatics

Image charge coordinates
 \[
 q' = - \frac{a}{y}\,q , \qquad  y' = \frac{a^2}{y}
  \]
for a grounded sphere of radius $a$ with a point charge $q$ a distance $y > a$ from its center; the image charge $q'$ sits a distance $y'$ from the center, on the line joining the center and $q$.
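A quick numerical check that this image charge makes the potential vanish on the sphere (a sketch assuming numpy; the values of a, y, q are arbitrary and Coulomb's constant is set to 1):

 import numpy as np
 a, y, q = 1.0, 3.0, 2.0                     # sphere radius, charge distance, charge (arbitrary)
 qp, yp = -a/y * q, a**2 / y                 # image charge and its distance from the center
 theta = np.linspace(0.0, np.pi, 200)        # sample points on the sphere surface (charge on z-axis)
 pts = np.stack([a*np.sin(theta), np.zeros_like(theta), a*np.cos(theta)], axis=1)
 phi = q / np.linalg.norm(pts - [0.0, 0.0, y], axis=1) + qp / np.linalg.norm(pts - [0.0, 0.0, yp], axis=1)
 print(np.max(np.abs(phi)))                  # ~1e-15: the sphere is an equipotential at zero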
Current in conductor
 \[
I = nAve = \frac{\partial Q}{\partial t} 
  \]
where $n$ is the carrier number density, $A$ the cross-sectional area, $v$ the drift velocity, and $e$ the carrier charge.
Electric and magnetic fields from potentials
 \[
\vec{E} = -\vec{\nabla} \phi - \frac{\partial \vec{A}}{\partial t}   \]\[
\vec{B} = \vec{\nabla} \times \vec{A}
\]

Waves

Maxwell's equations in vacuum:
 \[
\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0} , \qquad \nabla \cdot \mathbf{B} = 0 \\
\nabla \times \mathbf{E} = - \frac{\partial \mathbf{B}} {\partial t} , \qquad \nabla \times \mathbf{B} = \mu_0\mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}} {\partial t}
  \]
Poynting's theorem:
 \[
\frac{\partial u}{\partial t} + \nabla\cdot\mathbf{S} = - \mathbf{J}\cdot\mathbf{E},
  \]
where $\mathbf{J}$ is the total current density and the energy density $u$ is
 \[
u = \frac{1}{2}\left(\varepsilon_0 \mathbf{E}^2 + \frac{1}{\mu_0}\mathbf{B}^2\right).
  \]
Cutoff frequency for a rectangular waveguide:

\[ \omega_{c} = c \sqrt{\left(\frac{n \pi}{a}\right)^2 + \left(\frac{m \pi}{b}\right) ^2}, \] where \(n,m\) are the mode numbers and a and b the lengths of the sides of the rectangle. For TE modes \( n,m \ge 0\) with \( (n, m) \ne (0,0) \), while for TM modes \( n, m \ge 1 \).
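For instance (a minimal sketch assuming numpy; the dimensions below are just an example corresponding to a common X-band guide, not part of the formula):

 import numpy as np
 c = 3.0e8                                   # speed of light, m/s
 a, b = 0.02286, 0.01016                     # example waveguide dimensions in metres
 def f_cutoff(n, m):                         # cutoff frequency omega_c / (2 pi) in Hz
     return (c / (2*np.pi)) * np.sqrt((n*np.pi/a)**2 + (m*np.pi/b)**2)
 print(f_cutoff(1, 0) / 1e9)                 # ~6.6 GHz for the dominant TE10 mode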

Maxwell's Stress Tensor:
\[
\overset{\leftrightarrow  }{ \mathbf{T}}_{ij} \equiv \epsilon_0 \left(E_i E_j - \frac{1}{2} \delta_{ij} E^2\right) + \frac{1}{\mu_0}  \left(B_i B_j - \frac{1}{2} \delta_{ij} B^2\right)\  
\]

Derivation from Wikipedia

  1. Starting with the Lorentz force law \[\mathbf{F} = q(\mathbf{E} + \mathbf{v}\times\mathbf{B})\] the force per unit volume for an unknown charge distribution is \[ \mathbf{f} = \rho\mathbf{E} + \mathbf{J}\times\mathbf{B} \]
  2. Next, ρ and J can be replaced by the fields E and B, using Gauss's law and Ampère's circuital law: \[ \mathbf{f} = \epsilon_0 \left(\boldsymbol{\nabla}\cdot \mathbf{E} \right)\mathbf{E} + \frac{1}{\mu_0} \left(\boldsymbol{\nabla}\times \mathbf{B} \right) \times \mathbf{B} - \epsilon_0 \frac{\partial \mathbf{E}}{\partial t} \times \mathbf{B}\, \]
  3. The time derivative can be rewritten to something that can be interpreted physically, namely the Poynting vector. Using the product rule and Faraday's law of induction gives \[\frac{\partial}{\partial t} (\mathbf{E}\times\mathbf{B}) = \frac{\partial\mathbf{E}}{\partial t}\times \mathbf{B} + \mathbf{E} \times \frac{\partial\mathbf{B}}{\partial t} = \frac{\partial\mathbf{E}}{\partial t}\times \mathbf{B} - \mathbf{E} \times (\boldsymbol{\nabla}\times \mathbf{E})\,\] and we can now rewrite f as \[\mathbf{f} = \epsilon_0 \left(\boldsymbol{\nabla}\cdot \mathbf{E} \right)\mathbf{E} + \frac{1}{\mu_0} \left(\boldsymbol{\nabla}\times \mathbf{B} \right) \times \mathbf{B} - \epsilon_0 \frac{\partial}{\partial t}\left( \mathbf{E}\times \mathbf{B}\right) - \epsilon_0 \mathbf{E} \times (\boldsymbol{\nabla}\times \mathbf{E})\,\], then collecting terms with E and B gives \[\mathbf{f} = \epsilon_0\left[ (\boldsymbol{\nabla}\cdot \mathbf{E} )\mathbf{E} - \mathbf{E} \times (\boldsymbol{\nabla}\times \mathbf{E}) \right] + \frac{1}{\mu_0} \left[ - \mathbf{B}\times\left(\boldsymbol{\nabla}\times \mathbf{B} \right) \right] - \epsilon_0\frac{\partial}{\partial t}\left( \mathbf{E}\times \mathbf{B}\right)\,\].
  4. A term seems to be "missing" from the symmetry in E and B, which can be achieved by inserting (∇ • B)B because of Gauss' law for magnetism: \[\mathbf{f} = \epsilon_0\left[ (\boldsymbol{\nabla}\cdot \mathbf{E} )\mathbf{E} - \mathbf{E} \times (\boldsymbol{\nabla}\times \mathbf{E}) \right] + \frac{1}{\mu_0} \left[(\boldsymbol{\nabla}\cdot \mathbf{B} )\mathbf{B} - \mathbf{B}\times\left(\boldsymbol{\nabla}\times \mathbf{B} \right) \right] - \epsilon_0\frac{\partial}{\partial t}\left( \mathbf{E}\times \mathbf{B}\right)\,\]. Eliminating the curls (which are fairly complicated to calculate), using the vector calculus identity \[\tfrac{1}{2} \boldsymbol{\nabla} (\mathbf{A}\cdot\mathbf{A}) = \mathbf{A} \times (\boldsymbol{\nabla} \times \mathbf{A}) + (\mathbf{A} \cdot \boldsymbol{\nabla}) \mathbf{A} \], leads to: \[\mathbf{f} = \epsilon_0\left[ (\boldsymbol{\nabla}\cdot \mathbf{E} )\mathbf{E} + (\mathbf{E}\cdot\boldsymbol{\nabla}) \mathbf{E} \right] + \frac{1}{\mu_0} \left[(\boldsymbol{\nabla}\cdot \mathbf{B} )\mathbf{B} + (\mathbf{B}\cdot\boldsymbol{\nabla}) \mathbf{B} \right] - \frac{1}{2} \boldsymbol{\nabla}\left(\epsilon_0 E^2 + \frac{1}{\mu_0} B^2 \right) - \epsilon_0\frac{\partial}{\partial t}\left( \mathbf{E}\times \mathbf{B}\right)\,\].
  5. This expression contains every aspect of electromagnetism and momentum and is relatively easy to compute. It can be written more compactly by introducing the Maxwell stress tensor, \[\overset{\leftrightarrow }{ \mathbf{T}}_{ij} \equiv \epsilon_0 \left(E_i E_j - \frac{1}{2} \delta_{ij} E^2\right) + \frac{1}{\mu_0} \left(B_i B_j - \frac{1}{2} \delta_{ij} B^2\right)\,\], so that all but the last term can be written as its divergence: \[\mathbf{f} = \boldsymbol{\nabla}\cdot \overset{\leftrightarrow }{\mathbf{T}} - \epsilon_0\mu_0\frac{\partial \mathbf{S}}{\partial t}\,,\] with \(\mathbf{S} = \frac{1}{\mu_0}\mathbf{E}\times\mathbf{B}\) the Poynting vector.

Radiation

Electric Dipole Radiation:

\[\mathbf{B} = \frac{\omega^2}{4\pi\varepsilon_0 c^3} (\hat{\mathbf{r}} \times \mathbf{p}) \frac{e^{i\omega r/c}}{r}\] \[\mathbf{E} = c \mathbf{B} \times \hat{\mathbf{r}}\]

which produces a total time-average radiated power P given by

\[P = \frac{\omega^4}{12\pi\varepsilon_0 c^3} |\mathbf{p}|^2.\]

Multipole

Torque on magnetic and electric dipoles:

\[ \boldsymbol{\tau} = \mathbf{p} \times \mathbf{E}\] for an electric dipole moment p (in coulomb-meters), or

\[ \boldsymbol{\tau} = \mathbf{m} \times \mathbf{B}\] for a magnetic dipole moment m (in ampere-square meters).

The resulting torque will tend to align the dipole with the applied field, which in the case of an electric dipole, yields a potential energy of

\[ U = -\mathbf{p} \cdot \mathbf{E}\].

The energy of a magnetic dipole is similarly

\[ U = -\mathbf{m} \cdot \mathbf{B}\].

Compare EM Course Notes.

StM

Ensembles

Ensemble, Entropy, Energy:

Microcanonical ensemble or NVE ensemble—a statistical ensemble where the total energy of the system and the number of particles in the system are each fixed to particular values; each of the members of the ensemble are required to have the same total energy and particle number. The system must remain totally isolated (unable to exchange energy or particles with its environment) in order to stay in statistical equilibrium.

Canonical ensemble or NVT ensemble—a statistical ensemble where the energy is not known exactly but the number of particles is fixed. In place of energy, the temperature is specified. The canonical ensemble is appropriate for describing a closed system which is in, or has been in, weak thermal contact with a heat bath. In order to be in statistical equilibrium the system must remain totally closed (unable to exchange particles with its environment), and may come into weak thermal contact with other systems that are described by ensembles with the same temperature.

Grand canonical ensemble or µVT ensemble—a statistical ensemble where neither the energy nor particle number are fixed. In their place, the temperature and chemical potential are specified. The grand canonical ensemble is appropriate for describing an open system: one which is in, or has been in, weak contact with a reservoir (thermal contact, chemical contact, radiative contact, electrical contact, etc.). The ensemble remains in statistical equilibrium if the system comes into weak contact with other systems that are described by ensembles with the same temperature and chemical potential.

General discrete and continuous partition function Z:
 \[
 Z = \sum_{s} \mathrm{e}^{- \beta E_s} \text{ or, more generally with degenerate energy levels, } Z = \sum_{j} g_j \cdot \mathrm{e}^{- \beta E_j}
  \]


Particle Distributions

$n(\epsilon)$ and $Z_1$:

\[ n(\epsilon)_{MB} = e^{-\beta (\epsilon_n - \mu)} \\ n(\epsilon)_{FD}= \frac{1}{e^{\beta(\epsilon_n - \mu)} +1} \\ n(\epsilon)_{BE} = \frac{1}{e^{\beta(\epsilon_n - \mu)} - 1} \\ Z_1 = \sum_{n=0}^{\infty} e^{-\beta\epsilon_n} \]


Definitions

Ratio of heat capacities for ideal gases:

\[ \gamma = \frac{C_P}{C_V} = \frac{c_P}{c_V} = \frac{H}{U}\]

\[ C_P = \frac{\gamma n R}{\gamma - 1} \qquad \mbox{and} \qquad C_V = \frac{n R}{\gamma - 1}\]

\[C_V = C_P - nR\]

Specific Heat Capacity:

The internal energy of a closed system changes either by adding heat to the system or by the system performing work. Written mathematically we have \[ \mathrm{d}U = \delta Q + \delta W \]. For work as a result of an increase of the system volume we may write, \[ \mathrm{d}U = \delta Q - P\mathrm{d}V \]. If the heat is added at constant volume, then the second term of this relation vanishes and one readily obtains \[\left(\frac{\partial U}{\partial T}\right)_V=\left(\frac{\partial Q}{\partial T}\right)_V=C_V\]. This defines the heat capacity at constant volume, CV. Another useful quantity is the heat capacity at constant pressure, CP. The enthalpy of the system is given by \[ H = U + PV \]. A small change in the enthalpy can be expressed as \[ \mathrm{d}H = \delta Q + V \mathrm{d}P \], and therefore, at constant pressure, we have \[\left(\frac{\partial H}{\partial T}\right)_P=\left(\frac{\partial Q}{\partial T}\right)_P=C_P\].

Equipartition breakdown:

To illustrate the breakdown of equipartition, consider the average energy in a single (quantum) harmonic oscillator, which was discussed above for the classical case. Neglecting the irrelevant zero-point energy term, its quantum energy levels are given by En = nhν, where h is the Planck constant, ν is the fundamental frequency of the oscillator, and n is an integer. The probability of a given energy level being populated in the canonical ensemble is given by its Boltzmann factor

\[ P(E_{n}) = \frac{e^{-n\beta h\nu}}{Z}, \]

where β = 1/kBT and the denominator Z is the partition function, here a geometric series

\[ Z = \sum_{n=0}^{\infty} e^{-n\beta h\nu} = \frac{1}{1 - e^{-\beta h\nu}}. \]

Its average energy is given by

\[ \langle H \rangle = \sum_{n=0}^{\infty} E_{n} P(E_{n}) = \frac{1}{Z} \sum_{n=0}^{\infty} nh\nu \ e^{-n\beta h\nu} = -\frac{1}{Z} \frac{\partial Z}{\partial \beta} = -\frac{\partial \log Z}{\partial \beta}. \]

Substituting the formula for Z gives the final result

\[ \langle H \rangle = h\nu \frac{e^{-\beta h\nu}}{1 - e^{-\beta h\nu}}. \]

At high temperatures, when the thermal energy kBT is much greater than the spacing between energy levels, the exponential argument βhν is much less than one and the average energy becomes kBT, in agreement with the equipartition theorem. However, at low temperatures, when hν >> kBT, the average energy goes to zero—the higher-frequency energy levels are "frozen out". As another example, the internal excited electronic states of a hydrogen atom do not contribute to its specific heat as a gas at room temperature, since the thermal energy kBT (roughly 0.025 eV) is much smaller than the spacing between the lowest and next higher electronic energy levels (roughly 10 eV).
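Both limits are easy to reproduce numerically from the ⟨H⟩ formula above; a minimal sketch assuming numpy, with hν set to 1 in arbitrary units:

 import numpy as np
 h_nu = 1.0                                  # level spacing, arbitrary units
 for kT in (0.1, 1.0, 10.0, 100.0):
     beta = 1.0 / kT
     E_avg = h_nu * np.exp(-beta*h_nu) / (1.0 - np.exp(-beta*h_nu))
     print(kT, E_avg)                        # -> 0 for kT << h nu,  -> kT for kT >> h nu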

Similar considerations apply whenever the energy level spacing is much larger than the thermal energy. For example, this reasoning was used by Max Planck and Albert Einstein to resolve the ultraviolet catastrophe of blackbody radiation. The paradox arises because there are an infinite number of independent modes of the electromagnetic field in a closed container, each of which may be treated as a harmonic oscillator. If each electromagnetic mode were to have an average energy kBT, there would be an infinite amount of energy in the container. However, by the reasoning above, the average energy in the higher-frequency modes goes to zero as ν goes to infinity; moreover, Planck's law of black body radiation, which describes the experimental distribution of energy in the modes, follows from the same reasoning.

Maxwell relations:

\[ \left(\frac{\partial \mu}{\partial P}\right)_{S, N} = \left(\frac{\partial V}{\partial N}\right)_{S, P} = \frac{\partial^2 H }{\partial P \partial N} \]

where μ is the chemical potential. Each equation can be re-expressed using the relationship

\[\left(\frac{\partial y}{\partial x}\right)_z = \frac{1}{\left(\frac{\partial x}{\partial y}\right)_z}.\]


Common uses with corresponding thermodynamic potentials: \begin{align} +\left(\frac{\partial T}{\partial V}\right)_S &= -\left(\frac{\partial P}{\partial S}\right)_V = \frac{\partial^2 U }{\partial S \partial V}\\ +\left(\frac{\partial T}{\partial P}\right)_S &= +\left(\frac{\partial V}{\partial S}\right)_P = \frac{\partial^2 H }{\partial S \partial P}\\ +\left(\frac{\partial S}{\partial V}\right)_T &= +\left(\frac{\partial P}{\partial T}\right)_V = -\frac{\partial^2 A }{\partial T \partial V}\\ -\left(\frac{\partial S}{\partial P}\right)_T &= +\left(\frac{\partial V}{\partial T}\right)_P = \frac{\partial^2 G }{\partial T \partial P} \end{align}

Partition Function (unabridged)

The canonical partition function is

\[ Z = \sum_{s} \mathrm{e}^{- \beta E_s}\] ,

where the "inverse temperature", β, is conventionally defined as

\[\beta \equiv \frac{1}{k_BT}\]

with kB denoting Boltzmann's constant. The exponential factor exp(−βEs) is known as the Boltzmann factor. (For a detailed derivation of this result, see canonical ensemble). In systems with multiple quantum states s sharing the same Es, it is said that the energy levels of the system are degenerate. In the case of degenerate energy levels, we can write the partition function in terms of the contribution from energy levels (indexed by j ) as follows:

\[ Z = \sum_{j} g_j \cdot \mathrm{e}^{- \beta E_j}\],

where gj is the degeneracy factor, or number of quantum states s which have the same energy level defined by Ej = Es.

The above treatment applies to quantum statistical mechanics, where a physical system inside a finite-sized box will typically have a discrete set of energy eigenstates, which we can use as the states s above. In classical statistical mechanics, it is not really correct to express the partition function as a sum of discrete terms, as we have done. In classical mechanics, the position and momentum variables of a particle can vary continuously, so the set of microstates is actually uncountable. In this case we must describe the partition function using an integral rather than a sum. For instance, the partition function of a gas of N identical classical particles is

\[Z=\frac{1}{N! h^{3N}} \int \, \exp[-\beta H(p_1 \cdots p_N, x_1 \cdots x_N)] \; d^3p_1 \cdots d^3p_N \, d^3x_1 \cdots d^3x_N \]

where

\(p_i\) indicate particle momenta,
\(x_i\) indicate particle positions,
\(d^3\) is a shorthand notation serving as a reminder that the \(p_i\) and \(x_i\) are vectors in three-dimensional space, and
H is the classical Hamiltonian.

The reason for the factorial factor N! is discussed below. For simplicity, we will use the discrete form of the partition function in what follows; the results apply equally well to the continuous form. The extra constant factor in the denominator is introduced because, unlike the discrete form, the continuous form shown above is not dimensionless. To make it into a dimensionless quantity, we must divide it by \(h^{3N}\), where h is some quantity with units of action (usually taken to be Planck's constant).

In quantum mechanics, the partition function can be more formally written as a trace over the state space (which is independent of the choice of basis):

\[Z=\operatorname{tr} ( \mathrm{e}^{-\beta\hat{H}} )\] ,

where Ĥ is the quantum Hamiltonian operator. The exponential of an operator can be defined using the exponential power series. The classical form of Z is recovered when the trace is expressed in terms of coherent states [1] and when quantum-mechanical uncertainties in the position and momentum of a particle are regarded as negligible. Formally, one inserts under the trace for each degree of freedom the identity: \[ \boldsymbol{1} = \int |x,p\rangle\,\langle x,p|~\frac{ dx\, dp}{h} \] where \(|x,p\rangle\) is a normalised Gaussian wavepacket centered at position x and momentum p. Thus, \[ Z = \int \operatorname{tr} \left( \mathrm{e}^{-\beta\hat{H}} |x,p\rangle\,\langle x,p| \right) \frac{ dx\, dp}{h} = \int\langle x,p| \mathrm{e} ^{-\beta\hat{H}}|x,p\rangle ~\frac{ dx\, dp}{h} \] A coherent state is an approximate eigenstate of both operators \( \hat{x} \) and \( \hat{p} \), hence also of the Hamiltonian Ĥ, with errors of the size of the uncertainties. If Δx and Δp can be regarded as zero, the action of Ĥ reduces to multiplication by the classical Hamiltonian, and Z reduces to the classical configuration integral.

Meaning and significance

It may not be obvious why the partition function, as we have defined it above, is an important quantity. First, let us consider what goes into it. The partition function is a function of the temperature T and the microstate energies E1, E2, E3, etc. The microstate energies are determined by other thermodynamic variables, such as the number of particles and the volume, as well as microscopic quantities like the mass of the constituent particles. This dependence on microscopic variables is the central point of statistical mechanics. With a model of the microscopic constituents of a system, one can calculate the microstate energies, and thus the partition function, which will then allow us to calculate all the other thermodynamic properties of the system.

The partition function can be related to thermodynamic properties because it has a very important statistical meaning. The probability Ps that the system occupies microstate s is

\[P_s = \frac{1}{Z} \mathrm{e}^{- \beta E_s}. \]

The partition function thus plays the role of a normalizing constant (note that it does not depend on s), ensuring that the probabilities sum up to one:

\[\sum_s P_s = \frac{1}{Z} \sum_s \mathrm{e}^{- \beta E_s} = \frac{1}{Z} Z = 1. \]

This is the reason for calling Z the "partition function": it encodes how the probabilities are partitioned among the different microstates, based on their individual energies. The letter Z stands for the German word Zustandssumme, "sum over states". This notation also implies another important meaning of the partition function of a system: it counts the (weighted) number of states a system can occupy. Hence if all states are equally probable (equal energies) the partition function is the total number of possible states. Often this is the practical importance of Z.

Calculating the thermodynamic total energy

In order to demonstrate the usefulness of the partition function, let us calculate the thermodynamic value of the total energy. This is simply the expected value, or ensemble average for the energy, which is the sum of the microstate energies weighted by their probabilities:

\[\langle E \rangle = \sum_s E_s P_s = \frac{1}{Z} \sum_s E_s e^{- \beta E_s} = - \frac{1}{Z} \frac{\partial}{\partial \beta} Z(\beta, E_1, E_2, \cdots) = - \frac{\partial \ln Z}{\partial \beta} \]

or, equivalently,

\[\langle E\rangle = k_B T^2 \frac{\partial \ln Z}{\partial T}.\]
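The identity ⟨E⟩ = −∂ ln Z/∂β can be checked directly against the weighted average over microstates; a minimal sketch assuming numpy, with arbitrary made-up energy levels:

 import numpy as np
 E = np.array([0.0, 0.3, 1.1, 2.5])          # arbitrary microstate energies
 beta = 2.0
 Z = np.sum(np.exp(-beta*E))
 direct = np.sum(E * np.exp(-beta*E)) / Z    # sum_s E_s P_s
 lnZ = lambda b: np.log(np.sum(np.exp(-b*E)))
 h = 1e-6                                    # step for a numerical derivative of ln Z
 deriv = -(lnZ(beta + h) - lnZ(beta - h)) / (2*h)
 print(direct, deriv)                        # agree to high precision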

Incidentally, one should note that if the microstate energies depend on a parameter λ in the manner

\[E_s = E_s^{(0)} + \lambda A_s \qquad \mbox{for all}\; s \]

then the expected value of A is

\[\langle A\rangle = \sum_s A_s P_s = -\frac{1}{\beta} \frac{\partial}{\partial\lambda} \ln Z(\beta,\lambda).\]

This provides us with a method for calculating the expected values of many microscopic quantities. We add the quantity artificially to the microstate energies (or, in the language of quantum mechanics, to the Hamiltonian), calculate the new partition function and expected value, and then set λ to zero in the final expression. This is analogous to the source field method used in the path integral formulation of quantum field theory.

Relation to thermodynamic variables

In this section, we will state the relationships between the partition function and the various thermodynamic parameters of the system. These results can be derived using the method of the previous section and the various thermodynamic relations.

As we have already seen, the thermodynamic energy is

\[\langle E \rangle = - \frac{\partial \ln Z}{\partial \beta}.\]

The variance in the energy (or "energy fluctuation") is

\[\langle (\Delta E)^2 \rangle \equiv \langle (E - \langle E\rangle)^2 \rangle = \frac{\partial^2 \ln Z}{\partial \beta^2}.\]

The heat capacity is

\[C_v = \frac{\partial \langle E\rangle}{\partial T} = \frac{1}{k_B T^2} \langle (\Delta E)^2 \rangle.\]

The entropy is

\[S \equiv -k_B\sum_s P_s\ln P_s= k_B (\ln Z + \beta \langle E\rangle)=\frac{\partial}{\partial T}(k_B T \ln Z) =-\frac{\partial A}{\partial T}\]

where A is the Helmholtz free energy defined as A = U − TS, where U = ⟨E⟩ is the total energy and S is the entropy, so that

\[A = \langle E\rangle -TS= - k_B T \ln Z.\]

Partition functions of subsystems

Suppose a system is subdivided into N sub-systems with negligible interaction energy, that is, we can assume the particles are essentially non-interacting. If the partition functions of the sub-systems are ζ1, ζ2, ..., ζN, then the partition function of the entire system is the product of the individual partition functions:

\[Z =\prod_{j=1}^{N} \zeta_j.\]

If the sub-systems have the same physical properties, then their partition functions are equal, ζ1 = ζ2 = ... = ζ, in which case

\[Z = \zeta^N.\]

However, there is a well-known exception to this rule. If the sub-systems are actually identical particles, in the quantum mechanical sense that they are impossible to distinguish even in principle, the total partition function must be divided by N! (N factorial):

\[Z = \frac{\zeta^N}{N!}.\]

This is to ensure that we do not "over-count" the number of microstates. While this may seem like a strange requirement, it is actually necessary to preserve the existence of a thermodynamic limit for such systems. This is known as the Gibbs paradox.

Grand canonical partition function

We can define a grand canonical partition function for a grand canonical ensemble, which describes the statistics of a constant-volume system that can exchange both heat and particles with a reservoir. The reservoir has a constant temperature T, and a chemical potential μ.

The grand canonical partition function, denoted by \(\mathcal{Z}\), is the following sum over microstates \[ \mathcal{Z}(\mu, V, T) = \sum_{i} \exp((N_i\mu - E_i)/k_B T). \] Here, each microstate is labelled by \(i\), and has total particle number \(N_i\) and total energy \(E_i\). This partition function is closely related to the Grand potential, \(\Phi_{\rm G}\), by the relation \[ -k_B T \ln \mathcal{Z} = \Phi_{\rm G} = \langle E \rangle - TS - \mu \langle N\rangle. \] This can be contrasted to the canonical partition function above, which is related instead to the Helmholtz free energy.

It is important to note that the number of microstates in the grand canonical ensemble may be much larger than in the canonical ensemble, since here we consider not only variations in energy but also in particle number. Again, the utility of the grand canonical partition function is that it is related to the probability that the system is in state \(i\): \[ p_i = \frac{1}{\mathcal Z} \exp((N_i\mu - E_i)/k_B T) .\]

An important application of the grand canonical ensemble is in deriving exactly the statistics of a non-interacting many-body quantum gas (Fermi-Dirac statistics for fermions, Bose-Einstein statistics for bosons), however it is much more generally applicable than that. The grand canonical ensemble may also be used to describe classical systems, or even interacting quantum gases.

Compare StM Course Notes.

CM

Lagrangian Mechanics

Method for finding normal modes of system

Since we expect oscillatory motion of a normal mode (where ω is the same for both masses), we try: \[ x_1(t) = A_1 e^{i \omega t} \\ x_2(t) = A_2 e^{i \omega t} \]
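Substituting this ansatz into the equations of motion turns them into the matrix eigenvalue problem $(K - \omega^2 M)\vec{A} = 0$, so the normal-mode frequencies follow from $\det(K - \omega^2 M) = 0$. A minimal numerical sketch assuming scipy, for the example of two equal masses coupled by three identical springs between fixed walls:

 import numpy as np
 from scipy.linalg import eigh
 m, k = 1.0, 1.0                               # example mass and spring constant
 M = m * np.eye(2)                             # mass matrix
 K = k * np.array([[2.0, -1.0], [-1.0, 2.0]])  # stiffness matrix for wall-m-m-wall
 w2, modes = eigh(K, M)                        # solves K A = omega^2 M A
 print(np.sqrt(w2))                            # omega = sqrt(k/m) and sqrt(3k/m)
 print(modes)                                  # columns: in-phase and out-of-phase modes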

Euler-Lagrange Equation of system without constraints:

\[\frac{\mathrm{d}}{\mathrm{d}t} \left ( \frac {\partial L}{\partial \dot{q}_j} \right ) = \frac {\partial L}{\partial q_j} \]

Hamiltonian Mechanics

Hamiltonian i.t.o. Lagrangian:

\[H = \dot{\mathbf{q}} \frac{\partial \mathcal{L}}{\partial \dot{\mathbf{q}}} - \mathcal{L} \]

Velocity in the rotating frame in terms of the fixed-frame velocity:

\[ \vec{v}_{\mathrm{rot}} = \vec{v}_{\mathrm{fixed}} - \vec{\omega} \times \vec{r} \]

Relativity

Write the product $\gamma\beta$ in terms of only $\gamma$:

\[\gamma \beta = \sqrt{\frac{\beta^2}{1- \beta^2}}= \sqrt{\frac{A}{1-\beta^2} + B} \\ \text{In the words of the notorious J.D. Jackson, "we see" that }A = 1 \text{ and }B = -1 \text{ , such that } \\ \gamma \beta = \sqrt{\frac{1}{1-\beta^2} - 1}=\sqrt{\gamma^2 - 1} \] E.g.: Lim#3021

Compare CM Course Notes.

General

Laplacian:

Polar Coordinates
 \[
\nabla^2  =\frac{1}{r} \frac{\partial}{\partial r} \left(r \frac{\partial}{\partial r}\right) + \frac{1}{r^2}\frac{\partial^2}{\partial \theta^2}
  \]
Cylindrical Coordinates
 \[
\nabla^2  =\frac{1}{r} \frac{\partial}{\partial r} \left(r \frac{\partial}{\partial r}\right) + \frac{1}{r^2} \frac{\partial^2}{\partial \theta^2} +  \frac{\partial^2}{\partial z^2}
  \]
Spherical Coordinates:
 \[
    \nabla^2  =\frac{1}{r^2} \frac{\partial}{\partial r} \left(r^2 \frac{\partial}{\partial r}\right) + \frac{1}{r^2 \sin\theta}\frac{\partial}{\partial \theta}\left( \sin\theta  \frac{\partial}{\partial \theta}\right) + \frac{1}{r^2 \sin^2\theta}\frac{\partial^2}{\partial \phi^2}
  \]
Spherical Coordinates with $\vec{L}$:

\[ \Delta =\frac{\partial^2}{\partial r^2} + \frac{2}{r} \frac{\partial}{\partial r} - \frac{\vec{L}^2}{\hbar^2 r^2} \]

Alternative for radial part of Laplacian:
 \[
     \nabla^2 R =\frac{1}{r^2} \frac{\partial}{\partial r} (r^2 \frac{\partial R}{\partial r}) \\
     \nabla^2 R =\frac{1}{r} \frac{\partial^2}{\partial r^2} (r R)
  \]

Expansions and General Solutions

Taylor Expansion:

The Taylor series of a real or complex-valued function ƒ(x) that is infinitely differentiable at a real or complex number a is the power series \[f(x) \approx f(a)+\frac {f'(a)}{1!} (x-a)+ \frac{f''(a)}{2!} (x-a)^2+\frac{f^{(3)}(a)}{3!}(x-a)^3+ \cdots, \]

which can be written in the more compact sigma notation as

\[ f(x) \approx \sum_{n=0} ^ {\infty} \frac {f^{(n)}(a)}{n!} \, (x-a)^{n}\]

Binomial expansion of $(1+x)^n$:

\[(1+x)^n = \sum_{k=0}^n {n \choose k}x^k = 1 + nx + \frac{n(n-1) x^2}{2!} + \frac{n(n-1)(n-2) x^3}{3!} +...+ \frac{n! x^n}{n!} \], where \[{n \choose k} = \frac{n!}{k!\,(n-k)!}\]

Rewrite and solve $\nabla \cdot \left( \frac{\hat{r}}{r^2}\right )$:

\[\nabla \cdot \left( \frac{\hat{r}}{r^2}\right )=\nabla \cdot \left( \frac{\vec{r}}{r^3} \right ) = 4 \pi \delta^3 (\vec{r}), \qquad \text{equivalently} \qquad \nabla^2 \frac{1}{r} = \nabla \cdot \left(-\frac{\hat{r}}{r^2}\right) = - 4 \pi \delta^3 (\vec{r}).\]

Write Spherical Harmonic $ Y_\ell^m( \theta , \varphi )$ in terms of Legendre Polynomial:

\[ Y_\ell^m( \theta , \varphi ) = (-1)^m \sqrt{{(2\ell+1)\over 4\pi}{(\ell-m)!\over (\ell+m)!}} \, P_\ell^m ( \cos{\theta} ) \, e^{i m \varphi } \]

Write Legendre Polynomial $\mathrm{P}_\ell(\cos\gamma)$, with $\gamma$ the angle between two directions, in terms of Spherical Harmonics:

Use the addition theorem for spherical harmonics. This is a generalization of the trigonometric identity \[\cos(\theta'-\theta)=\cos\theta'\cos\theta + \sin\theta\sin\theta'\]

Consider two unit vectors x and y, having spherical coordinates (θ,φ) and (θ′,φ′), respectively, so that \(\cos\gamma = \mathbf{\hat{x}}\cdot\mathbf{\hat{y}} = \cos\theta\cos\theta' + \sin\theta\sin\theta'\cos(\varphi-\varphi')\). The addition theorem states

\[\mathrm{P}_\ell(\cos\gamma) = P_\ell( \mathbf{\hat{x}}\cdot\mathbf{\hat{y}} ) = \frac{4\pi}{2\ell+1}\sum_{m=-\ell}^\ell Y_{\ell m}^*(\theta',\varphi') \, Y_{\ell m}(\theta,\varphi). \]

Legendre Expansion of $\frac{1}{|x- x'|}$:

\[ \frac{1}{\left| \mathbf{x}(\theta, \phi)-\mathbf{x'}(\theta', \phi') \right|} = \frac{1}{\sqrt{r^2+r^{\prime 2}-2rr'\cos\gamma}} = \sum_{\ell=0}^{\infty} \frac{r^{\prime \ell}}{r^{\ell+1}} P_{\ell}(\cos\gamma) = \\ = \sum_{\ell=0}^{\infty} \frac{r^{\prime \ell}}{r^{\ell+1}} \frac{4\pi}{2\ell+1}\sum_{m=-\ell}^\ell Y_{\ell m}^*(\theta',\varphi') \, Y_{\ell m}(\theta,\varphi), \] valid for $r > r'$ (in general replace $r' \to r_<$ and $r \to r_>$); here $\gamma$ is again the angle between $\mathbf{x}$ and $\mathbf{x'}$.
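A quick numerical check of this expansion (a sketch assuming scipy; r, r' and γ are arbitrary with r > r'):

 import numpy as np
 from scipy.special import eval_legendre
 r, rp, gamma = 2.0, 0.7, 1.1                # arbitrary radii (r > r') and angle between x and x'
 exact = 1.0 / np.sqrt(r**2 + rp**2 - 2*r*rp*np.cos(gamma))
 series = sum((rp**l / r**(l + 1)) * eval_legendre(l, np.cos(gamma)) for l in range(40))
 print(exact, series)                        # agree to machine precision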

Legendre Polynomial orthogonality:

\[\int_{-1}^{1} P_\ell(x) P_{\ell'}(x)\,dx = {2 \over {2\ell + 1}} \delta_{\ell\ell'}\]

Spherical Harmonic orthogonality:

\[ \int_{\theta=0}^\pi\int_{\varphi=0}^{2\pi}Y_\ell^m \, Y_{\ell'}^{m'*}d\Omega=\delta_{\ell\ell'}\, \delta_{mm'}.\]

General Solution of Laplace's equation in spherical coordinates with azimuthal symmetry:

\[ \Phi(r,\theta)=\sum_{\ell=0}^{\infty} \left[ A_\ell r^\ell + B_\ell r^{-(\ell+1)} \right] P_\ell(\cos\theta). \]

First few Spherical Harmonics, e.g. $\cos(\theta)$:

\[Y_{0}^{0}(\theta,\varphi)=\sqrt{1\over 4\pi}\] \[Y_{1}^{0}(\theta,\varphi)=\sqrt{3\over 4\pi}\, \cos\theta\] \[Y_{1}^{\pm 1}(\theta,\varphi)=\mp \sqrt{3\over 8\pi}\, \sin\theta\, e^{\pm i\varphi}\] E.g. S'06Q3.

The first few Legendre polynomials:
\[ P_0(x) = 1, \qquad P_1(x) = x, \qquad P_2(x) = \tfrac{1}{2} (3x^2-1), \qquad P_3(x) = \tfrac{1}{2} (5x^3-3x), \qquad P_4(x) = \tfrac{1}{8} (35x^4-30x^2+3). \]

Note that $P_\ell(1) = 1$ for all $\ell$, and $P_0(x) = 1$.

Integrals

Gaussian integral:

\[ \int_{-\infty}^{\infty} \mathrm{e}^{- a x^2} \mathrm{d}x= \sqrt{\frac{\pi}{a}} \text{ , for a > 0.} \]

Delta function:
\[ \hat{\delta}(k)= \int_{-\infty}^\infty \delta(x) \mathrm{e}^{-i kx} \mathrm{d}x = 1 \\ \delta(x) =\mathrm{FT}^{-1} [\hat{\delta}(k) ] = \int_{-\infty}^\infty \mathrm{e}^{ikx} \frac{\mathrm{d}k}{2 \pi} \]
Delta function identities:

\[\int_{-\infty}^\infty \delta(\alpha x)\,dx =\int_{-\infty}^\infty \delta(u)\,\frac{du}{|\alpha|} =\frac{1}{|\alpha|}\]

\[\delta(\alpha x) = \frac{\delta(x)}{|\alpha|}.\]

In particular, the delta function is an even distribution, in the sense that

\[\delta(-x) = \delta(x)\]

Integrate $\int e^{-at^2 + bt + c} dt$ by completing the square (caveat integrator: the integral converges only for $\mathrm{Re}(a) > 0$):

\[ \int_{-\infty}^{\infty} e^{-at^2 + bt + c} dt = e^{c+\frac{b^2}{4a}} \int_{-\infty}^{\infty} e^{-au^2} du = e^{c+\frac{b^2}{4a}} \sqrt{\frac{\pi}{a}} \]

Other potentially useful integrals:

\[ \int_0^\infty x^n e^{-ax} dx = a^{-1-n} \Gamma(n+1) \]\[ \int_{V} \frac{e^{i\mathbf{x.y}}}{|\mathbf{x}|} d^3 x = \frac{4 \pi}{|\mathbf{y}|^2}\]\[ \int_{-\infty}^{\infty} \frac{1}{1+ x^2} dx=\pi \]
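These can be spot-checked numerically; a minimal sketch assuming scipy, verifying the first integral for an arbitrary a > 0 and integer n:

 import numpy as np
 from scipy.integrate import quad
 from scipy.special import gamma
 a, n = 1.7, 3                               # arbitrary decay constant and power
 val, _ = quad(lambda x: x**n * np.exp(-a*x), 0, np.inf)
 print(val, gamma(n + 1) / a**(n + 1))       # agree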

Trigonometry

Trig Identity involving $\mathrm{tan^2}(\theta)$:

\(1 + \tan^2\theta = \sec^2\theta\quad\text{and}\quad 1 + \cot^2\theta = \csc^2\theta.\!\)

Hyperbolic functions

Hyperbolic sine:

\[\sinh x = \frac {e^x - e^{-x}} {2} = \frac {e^{2x} - 1} {2e^x} = \frac {1 - e^{-2x}} {2e^{-x}}\]

Hyperbolic cosine:

\[\cosh x = \frac {e^x + e^{-x}} {2} = \frac {e^{2x} + 1} {2e^x} = \frac {1 + e^{-2x}} {2e^{-x}}\]

Hyperbolic tangent:

\[\tanh x = \frac{\sinh x}{\cosh x} = \frac {e^x - e^{-x}} {e^x + e^{-x}} = \frac{e^{2x} - 1} {e^{2x} + 1} = \frac{1 - e^{-2x}} {1 + e^{-2x}}\\ \sinh (-x) = -\sinh x \\ \cosh (-x) = \cosh x\\ \cosh^2 x - \sinh^2 x = 1 \]

The hyperbolic functions satisfy many identities, all of them similar in form to the trigonometric identities. In fact, Osborn's rule states that one can convert any trigonometric identity into a hyperbolic identity by expanding it completely in terms of integral powers of sines and cosines, changing sine to sinh and cosine to cosh, and switching the sign of every term which contains a product of 2, 6, 10, 14, ... sinhs. This yields for example the addition theorems \[\begin{align} \sinh(x + y) &= \sinh (x) \cosh (y) + \cosh (x) \sinh (y) \\ \cosh(x + y) &= \cosh (x) \cosh (y) + \sinh (x) \sinh (y) \\ \tanh(x + y) &= \frac{\tanh (x) + \tanh (y)}{1 + \tanh (x) \tanh (y)} \end{align}\]

Trig Identity Derivation:
\[ \begin{align} & {} \quad \left(\begin{array}{rr} \cos\alpha & -\sin\alpha \\ \sin\alpha & \cos\alpha \end{array}\right) \left(\begin{array}{rr} \cos\beta & -\sin\beta \\ \sin\beta & \cos\beta \end{array}\right) = \left(\begin{array}{rr} \cos\alpha\cos\beta - \sin\alpha\sin\beta & -\cos\alpha\sin\beta - \sin\alpha\cos\beta \\ \sin\alpha\cos\beta + \cos\alpha\sin\beta & -\sin\alpha\sin\beta + \cos\alpha\cos\beta \end{array}\right) = \left(\begin{array}{rr} \cos(\alpha+\beta) & -\sin(\alpha+\beta) \\ \sin(\alpha+\beta) & \cos(\alpha+\beta) \end{array}\right) \end{align} \]


Exercises

EM: Obtain the non-relativistic Larmor radiation equation from the relativistic one. (S'08Q10)

From Wikipedia: \[ P = \frac{2}{3}\frac{q^2}{c^3m^2}\left(\frac{d\vec{p}}{dt}\cdot\frac{d\vec{p}}{dt}\right). \]

Assume the generalisation:

\[ P = -\frac{2}{3}\frac{q^2}{m^2c^3}\frac{dP^{\mu}}{d\tau}\frac{dP_{\mu}}{d\tau}. \]

When we expand and rearrange the energy-momentum four vector product we get:

\[ \frac{dP^{\mu}}{d\tau}\frac{dP_{\mu}}{d\tau} = \frac{v^2}{c^2}\left(\frac{dp}{d\tau}\right)^2 - \left(\frac{d\vec{p}}{d\tau}\right)^2 \] where I have used the fact that \[ \frac{dE}{d\tau} = \frac{pc^2}{E}\frac{dp}{d\tau} = v\frac{dp}{d\tau} \] When you let \(\beta\) tend to zero, \(\gamma\) tends to one, so that \(d\tau\) tends to dt. Thus we recover the non-relativistic case.



References

  1. J. R. Klauder, B.-S. Skagerstam, Coherent States --- Applications in Physics and Mathematical Physics, World Scientific, 1985, p. 71-73.