Consider an atomic two-level system with ground state \(|0\rangle\) and excited state \(|1\rangle\). Recall that in the interaction picture, after making the rotating wave approximation and boosting into a suitable rotating frame, one had the resultant time-independent Hamiltonian:
Invoking the Pauli matrix identity \((\tilde{\boldsymbol{\Omega}}\cdot\boldsymbol{\sigma})^2=|\tilde{\boldsymbol{\Omega}}|^2\textbf 1\), it is clear that the eigenvalues of this Hamiltonian are thus \(E_{\pm}=\pm\frac{\hbar|\tilde{\boldsymbol{\Omega}}|}{2}=\pm\frac{\hbar\sqrt{\Omega^2+\delta^2}}{2}\); the resulting splitting is known as the light shift resulting from the AC Stark effect (also called the Autler-Townes effect). In particular, if \(\Omega=0\) then \(E_{\pm}=\pm\frac{\hbar|\delta|}{2}\).
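As a quick numerical sanity check (a Python sketch; here \(\hbar=1\), and the explicit matrix form \(H=\frac{1}{2}(\Omega\sigma_x-\delta\sigma_z)\) is an assumption, chosen to be consistent with \(|\tilde{\boldsymbol{\Omega}}|=\sqrt{\Omega^2+\delta^2}\)), one can diagonalize the \(2\times 2\) matrix generically and compare with \(E_{\pm}\):

```python
import math

# Eigenvalues of a 2x2 matrix [[a, b], [c, d]] via the characteristic polynomial;
# applied to the assumed rotating-frame Hamiltonian H = (Omega*sigma_x - delta*sigma_z)/2
def eigvals_2x2(a, b, c, d):
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4 * det)  # real for a Hermitian matrix
    return (tr - disc) / 2, (tr + disc) / 2

Omega, delta = 2.0, 1.5
E_minus, E_plus = eigvals_2x2(-delta / 2, Omega / 2, Omega / 2, delta / 2)

light_shift = math.sqrt(Omega**2 + delta**2) / 2  # expected |E_pm|
assert abs(E_plus - light_shift) < 1e-12
assert abs(E_minus + light_shift) < 1e-12
```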
It is not a coincidence that this light shift is equal, after a first-order binomial expansion, to the result of first-order nondegenerate time-independent perturbation theory applied to …; it turns out in the framework of QED that these correspond to so-called dressed states of the atom-photon system.
The purpose of this post is to explain the \(2\) key models of classical optics, namely geometrical optics (also known as ray optics) and physical optics (also known as wave optics). Although historically geometrical optics came before physical optics, and indeed this is also usually the order in which they are conventionally taught, this post will take the more unconventional approach of presenting physical optics first, and then showing how it reduces to geometrical optics in the \(\lambda\to 0\) limit.
Physical Optics
Discuss:
Fourier optics in the Fraunhofer regime.
Gaussian (pilot) beams
TE/TM/TEM modes in EM waveguides
How Fresnel diffraction is an exact solution to the paraxial Helmholtz equation and what this has to do with the eikonal approximation/Hamilton-Jacobi equation from classical dynamics.
Maxwell’s equations assert that the electric and magnetic fields \(\textbf E,\textbf B\) satisfy vector wave equations in vacuum:
Each of their \(6\) components (denoted \(\psi\)) thus satisfies the scalar wave equation \(\bigl|\frac{\partial}{\partial\textbf x}\bigr|^2\psi-\frac{1}{c^2}\ddot{\psi}=0\). The spacetime Fourier transform yields the trivial dispersion relation \(\omega=ck\), from which it is evident that performing just a temporal Fourier transform (to avoid the minutiae of \(t\)-dependence) leads to the scalar Helmholtz equation for \(\psi(\textbf x)\):
In other words, one is looking for eigenfunctions of the Laplacian \(\left|\frac{\partial}{\partial\textbf x}\right|^2\) with eigenvalue \(-k^2\). To begin, consider one of Green’s identities, valid for arbitrary scalar fields \(\psi(\textbf x’),\tilde{\psi}(\textbf x’)\) which are \(C^2\) everywhere in the volume \(V\):
(it’s just the divergence theorem applied to the vector field \(\psi\frac{\partial\tilde{\psi}}{\partial\textbf x’}-\tilde{\psi}\frac{\partial\psi}{\partial\textbf x’}\)). It is now obvious that the volume integral will vanish if one then imposes that both \(\psi(\textbf x’)\) and \(\tilde{\psi}(\textbf x’)\) also satisfy the scalar Helmholtz equation. Given any point \(\textbf x\in\textbf R^3\), it is physically clear that the spherical wave Green’s function \(\tilde{\psi}(\textbf x’|\textbf x)=e^{ik|\textbf x-\textbf x’|}/|\textbf x-\textbf x’|\) is one possible (though certainly not a unique) solution to the scalar Helmholtz equation, provided one stays away from the singularity at \(\textbf x’=\textbf x\). This motivates the choice of volume \(V\) to be some arbitrary region but with an \(\varepsilon\)-ball cut around \(\textbf x\), in which case the volume integral can legitimately be taken to vanish over this choice of \(V\). In that case, the surface \(\partial V=S^2_{\varepsilon}\cup S\) can be partitioned into an inner surface \(S^2_{\varepsilon}\) and an outer surface \(S\):
The flux through these two surfaces \(S^2_{\varepsilon},S\) must thus be equal:
The integral over \(S^2_{\varepsilon}=\{\textbf x’\in\textbf R^3:|\textbf x’-\textbf x|=\varepsilon\}\) is straightforward in the limit \(\varepsilon\to 0\):
As an aside, Kirchhoff’s integral formula is very similar in spirit to another more well-known integral formula, namely the Cauchy integral formula \(f(z_0)=\frac{1}{2\pi i}\oint_{z\in\gamma:z_0\in\text{int}(\gamma)}\frac{f(z)}{z-z_0}dz\) from complex analysis; the constraint of complex analyticity is analogous to constraining \(\psi\) to obey the Helmholtz equation; if one specifies both Dirichlet and Neumann boundary conditions for \(\psi(\textbf x’)\) everywhere on \(\textbf x’\in S\), then in principle this is enough to uniquely determine \(\psi(\textbf x)\) everywhere in the interior \(V\) of the enclosing surface \(S\).
Now consider the following standard diffraction setup:
Here the surface \(S\) is chosen to be a sphere of radius \(R\) centered at \(\textbf x\), except where it flattens along the aperture with some distribution of slits. As one takes \(R\to\infty\), then in analogy to Jordan’s lemma from complex analysis, one can argue that the flux through this spherical cap portion of \(S\) in Kirchhoff’s integral formula vanishes like \(\sim 1/R\to 0\) (this is admittedly still a bit handwavy; for a rigorous argument see the Sommerfeld radiation condition). Thus, the behavior of \(\psi\) on the aperture alone is sufficient to determine its value \(\psi(\textbf x)\) at an arbitrary “screen location” \(\textbf x\) beyond the aperture. Supposing a monochromatic plane wave \(\psi(\textbf x’)=\psi(x’,y’,0)e^{ikz’}\) of wavenumber \(k\) (hence solving the scalar Helmholtz equation) is normally incident on the aperture \(z’=0\) and that \(k|\textbf x-\textbf x’|\gg 1\) (easily true in most cases), this imposes the boundary condition \(\frac{\partial\psi}{\partial z’}(x’,y’,0)=ik\psi(x’,y’,0)\), so one can check that Kirchhoff’s integral formula simplifies to:
where the obliquity kernel is \(K(\textbf x-\textbf x’):=\frac{1+\cos\angle(\textbf x-\textbf x’,\hat{\textbf k})}{2}=\cos^2\frac{\angle(\textbf x-\textbf x’,\hat{\textbf k})}{2}\). This is nothing more than a mathematical expression of the Huygens-Fresnel principle.
Fresnel vs. Fraunhofer Diffraction
In general the Huygens-Fresnel integral is difficult to evaluate analytically for an arbitrary point \(\textbf x\) on a screen. Thus, one often begins by making the paraxial approximation \(K(\textbf x-\textbf x’)\approx 1\iff |\textbf x-\textbf x’|\approx z\) everywhere except in the complex exponential (where all Huygens wavelets would otherwise interfere constructively, which is silly). In the exponential one instead implements a less strict version of the paraxial approximation, in the form of a binomial expansion valid for \(z^2\gg |\textbf x-\textbf x’|^2-z^2\):
In practice the quadratic term is negligible in the paraxial limit, so neglecting it and all higher-order terms yields the Fresnel diffraction integral:
where \(\textbf k=k\textbf x/z\). If in addition one also assumes that \(k|\textbf x’|^2/2z\ll 1\), then one obtains the Fourier optics case of far-field/Fraunhofer diffraction:
The reason that Fraunhofer diffraction is only considered to apply in the far-field is that the above condition for its validity can be rewritten as \(z\gg z_R\) where \(z_R:=\rho’^2/\lambda\) is the Rayleigh distance of an aperture of typical length scale \(\rho’\sim\sqrt{x’^2+y’^2}\) when illuminated by monochromatic light of wavelength \(\lambda\). In other words, the precise meaning of “far-field” is “farther than the Rayleigh distance \(z_R\)”. Otherwise, when \(z≲z_R\), such a term cannot be neglected and one simply refers to it as Fresnel diffraction. Notice that \(z≲z_R\) is not saying the same thing as \(z\ll z_R\); it is often said that Fresnel diffraction is the regime of near-field diffraction, but that phrase can be misleading because it suggests that \(z\) can be arbitrarily small, yet clearly at some point, if one kept decreasing \(z\), eventually the higher-order terms in the binomial expansion would also start to matter (moreover, the paraxial approximation would also start to break down). Instead of calling it “near-field diffraction”, a more accurate name for Fresnel diffraction would be “not-far-enough diffraction” \(z≲z_R\). By contrast, Fraunhofer diffraction truly is arbitrarily far-field \(z\gg z_R\). Of course nothing stops one from also considering the case \(z\ll z_R\); there just doesn’t seem to be any special name given to this regime, and in practice it’s not as relevant. Finally, sometimes one also encounters the terminology of the Fresnel number \(F(z):=z_R/z\); in this jargon, Fraunhofer diffraction occurs when \(F(z)\ll 1\) whereas Fresnel diffraction occurs when \(F(z)≳1\).
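To get a feel for the numbers, here is a short Python sketch (the aperture scale and wavelength are purely illustrative):

```python
# Rayleigh distance z_R = rho'^2/lambda and Fresnel number F(z) = z_R/z for a
# 1 mm aperture illuminated at 500 nm (illustrative numbers)
def rayleigh_distance(rho_ap, wavelength):
    return rho_ap**2 / wavelength

def fresnel_number(rho_ap, wavelength, z):
    return rayleigh_distance(rho_ap, wavelength) / z

z_R = rayleigh_distance(1e-3, 500e-9)
assert abs(z_R - 2.0) < 1e-9                    # z_R = 2 m
assert fresnel_number(1e-3, 500e-9, 20.0) < 1   # z >> z_R: Fraunhofer regime
assert fresnel_number(1e-3, 500e-9, 0.5) > 1    # z < z_R: Fresnel regime
```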
In practice, one typically ignores the pre-factor in front of the aperture integrals since it is the general profile of the irradiance \(|\psi(\textbf x)|^2\) that is mainly of interest. In particular, for Fraunhofer diffraction, one can write \(\hat{\psi}(\textbf k)\equiv\psi(\textbf x)\) as just the \(2\)D spatial Fourier transform of the aperture.
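This Fourier transform claim can be sanity-checked numerically for a simple \(1\)D aperture (a Python sketch; the quadrature and slit parameters are illustrative):

```python
import cmath, math

# Check that the Fraunhofer pattern of a 1D slit of width dx is its spatial
# Fourier transform dx*sinc(kx*dx/2), with sinc(u) = sin(u)/u
def fraunhofer_ft(aperture, kx, x_lo, x_hi, n=4000):
    # midpoint-rule approximation of the aperture's spatial Fourier transform
    h = (x_hi - x_lo) / n
    return h * sum(aperture(x_lo + (j + 0.5) * h) * cmath.exp(-1j * kx * (x_lo + (j + 0.5) * h))
                   for j in range(n))

dx = 2.0
slit = lambda x: 1.0 if abs(x) <= dx / 2 else 0.0
for kx in (0.5, 1.0, 2.0):
    exact = dx * math.sin(kx * dx / 2) / (kx * dx / 2)
    assert abs(fraunhofer_ft(slit, kx, -2 * dx, 2 * dx) - exact) < 1e-3
```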
Diffraction Through a Single Slit
Consider a single \(y\)-invariant slit of width \(\Delta x\) centered at \(x’=0\). Then the Fraunhofer interference pattern has the form:
Although in general the Fresnel integral at fixed \(z≲z_R\sim\Delta x^2/\lambda\) needs to be evaluated numerically, there is a simple geometric way to gain some intuition for how \(\psi(x)\in\textbf C\) behaves via the Cornu spiral (also called the Euler spiral in contexts outside of physical optics). The idea is to shift the \(x\)-dependence from the integrand into the limits via the substitution \(\pi t’^2/2:=k(x-x’)^2/2z\iff t’=\sqrt{\frac{2}{\lambda z}}(x-x’)\). Then, ignoring chain rule factors, one has:
where the limits are \(t’_1(x)=\sqrt{\frac{2}{\lambda z}}(x+\Delta x/2)\) and \(t’_2(x)=\sqrt{\frac{2}{\lambda z}}(x-\Delta x/2)\). Written in terms of the normalized Fresnel integral \(\text{Fr}(t):=\int_0^{t}e^{i\pi t’^2/2}dt’\):
The object \(\text{Fr}(t)\) is a trajectory in \(\textbf C\) which is the aforementioned Cornu spiral:
where one can check the limits \(\lim_{t\to\pm\infty}\text{Fr}(t)=\pm(1+i)/2\). Noting that \(\dot{\text{Fr}}(t)=e^{i\pi t^2/2}\), it follows that the speed \(|\dot{\text{Fr}}(t)|=1\) is uniform and thus the distance/arc length traversed in time \(\Delta t\) is always just \(\Delta t\). Moreover, the curvature \(\kappa(t)=\Im(\dot{\text{Fr}}^{\dagger}\ddot{\text{Fr}})/|\dot{\text{Fr}}|^3=\pi t\) increases linearly in the arc length \(t\) (essentially what defines a spiral!). The point is that the irradiance \(|\psi(x)|^2\sim|\text{Fr}(t’_2(x))-\text{Fr}(t’_1(x))|^2\) is now visually just the length (squared) of a vector between the points \(\text{Fr}(t’_1(x))\) and \(\text{Fr}(t’_2(x))\) on the Cornu spiral. The idea is to first trek a distance \(\frac{t’_1(x)+t’_2(x)}{2}=\sqrt{\frac{2}{\lambda z}}x\) from the origin to some central \(x\)-dependent point on the spiral, and then extend around it by the \(x\)-independent amount \(t’_1-t’_2=\sqrt{\frac{2}{\lambda z}}\Delta x\sim\sqrt{F(z)}\) to get a corresponding line segment whose length will be \(|\psi(x)|\).
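These properties of \(\text{Fr}(t)\) are easy to verify numerically (a Python sketch using naive midpoint quadrature; note the convergence to \((1+i)/2\) is slow, with oscillations decaying like \(\sim 1/\pi t\)):

```python
import cmath

# Normalized Fresnel integral Fr(t) = integral_0^t exp(i*pi*u^2/2) du via a
# naive midpoint rule (Fr(t) spirals into (1+i)/2 with ~1/(pi*t) oscillations)
def Fr(t, n=20000):
    h = t / n
    return h * sum(cmath.exp(1j * cmath.pi * ((j + 0.5) * h)**2 / 2)
                   for j in range(n))

assert abs(Fr(1.0) + Fr(-1.0)) < 1e-9       # Fr is an odd function
assert abs(Fr(20.0) - (0.5 + 0.5j)) < 0.05  # limit +(1+i)/2 as t -> infinity
```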
Diffraction Through a Circular Aperture
Given a circular aperture of radius \(R\), its \(2\)D isotropic nature means that the Fraunhofer interference pattern is just proportional to the Hankel transform of the aperture:
with \(k_{\rho}=k\sin\theta\). This is sometimes called a sombrero or \(\text{jinc}\) function, being the polar analog of the \(\text{sinc}\) function. It has its first zero at \(k_{\rho}R\approx 3.8317\) which defines the boundary of the Airy disk (cf. \(\text{sinc}(k_x\Delta x/2)\) having its first zero at \(k_x\Delta x/2=\pi\approx 3.1415\)). This is often expressed paraxially via the angular radius of the Airy disk \(\theta_{\text{Airy}}\approx 1.22\frac{\lambda}{D}\) with \(D=2R\) the diameter of the aperture (cf. \(\theta_{\text{central max}}\approx\frac{\lambda}{\Delta x}\) for the single slit).
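The quoted numbers can be checked numerically (a Python sketch; \(J_1\) is computed from its integral representation rather than any library routine, and the bracketing interval for the bisection is an assumption):

```python
import math

# Bessel J_1 from its integral representation (midpoint rule); then bisect
# for its first positive zero, which sets the Airy disk radius
def J1(x, n=2000):
    h = math.pi / n
    return sum(math.cos((j + 0.5) * h - x * math.sin((j + 0.5) * h))
               for j in range(n)) * h / math.pi

lo, hi = 3.0, 4.5  # assumed bracket around the first zero
for _ in range(50):
    mid = (lo + hi) / 2
    if J1(lo) * J1(mid) <= 0:
        hi = mid
    else:
        lo = mid

assert abs(lo - 3.8317) < 1e-3          # first zero of J_1
assert abs(lo / math.pi - 1.22) < 5e-3  # i.e. theta_Airy ≈ 1.22*lambda/D
```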
For the same circular aperture setup, one can also ask what happens in the Fresnel regime \(z≲z_R\). In general, the integral is complicated:
This is essentially the topologist’s favorite pathological sine function, but of course it was already mentioned that this solution is only reliable when the argument \(kR^2/4z\sim F(z)≳1\). For this specific on-axis case, it turns out one can significantly relax the paraxial assumption, namely, although one still assumes the obliquity kernel \(K(\textbf x-\textbf x’)\approx 1\), otherwise one acknowledges that \(r^2=\rho’^2+z^2\):
where if one were to binomial expand \(\sqrt{R^2+z^2}\approx z+R^2/2z\) one would just recover the Fresnel solution. If one fixes a given on-axis observation distance \(z\) and instead views \(|\psi(0,z)|^2\) as a function of the aperture radius \(R\) (and not \(z\)), then clearly it alternates between bright maxima and dark minima at aperture radii \(R\equiv\rho’_n\) given by:
Thus, in general for a fixed aperture radius \(R\), there will be \(\sim F(z)\) concentric annuli of the form \(\rho’\in[\rho’_{n-1},\rho’_n]\) that can be made to partition the aperture disk; the annulus \([\rho’_{n-1},\rho’_n]\) is called the \(n\)-th Fresnel half-period zone. Note that the area of each Fresnel half-period zone is a constant \(\pi\lambda z\) in the Fresnel regime, thus providing equal but alternating contributions to \(\psi(0,z)\) that lead to the observed oscillatory behavior in \(|\psi(0,z)|^2\). The existence of this Fresnel half-period zone structure motivates the construction of Fresnel zone plates which are vaguely like polar analogs of diffraction gratings, except rather than being regularly spaced, they block alternate Fresnel half-period zones with some opaque material to reinforce constructive interference for a given \(z\) and \(\lambda\) (thus, in order to design such a zone plate, one has to already have in mind a \(z\) and a \(\lambda\) ahead of time in order to compute the radii \(\rho’_n\approx\sqrt{n\lambda z}\) to be etched out). If a given \((z,\lambda)\) zone plate has already been constructed, but one then proceeds to move \(z\mapsto z/m\) for some \(m\in\textbf Z^+\), then in each of the Fresnel half-period zones associated to \(z\), there would now be \(m\) Fresnel half-period zones associated to \(z/m\), and so each transparent region of the \((z,\lambda)\) zone plate would allow through \(m\) of the \(z/m\) Fresnel half-period zones. Thus, if \(m\in 2\textbf Z^+-1\) is odd, then one would still expect a net constructive interference on-axis at \(z/m\), whereas if \(m\in 2\textbf Z^+\) then destructive interference of pairs of adjacent Fresnel half-period zones wins out (this parity argument is easy to remember because if \(m=1\) then nothing happens and the whole point of constructing the zone plate was to amplify the constructive interference at \(z\)).
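The constancy of the zone areas is immediate to verify (a Python sketch with illustrative values of \(\lambda\) and \(z\)):

```python
import math

# Fresnel half-period zone radii rho'_n ≈ sqrt(n*lambda*z); each annulus then
# has the same area pi*lambda*z (wavelength and distance are illustrative)
lam, z = 500e-9, 1.0
radii = [math.sqrt(n * lam * z) for n in range(11)]
areas = [math.pi * (radii[n]**2 - radii[n - 1]**2) for n in range(1, 11)]
for a in areas:
    assert abs(a - math.pi * lam * z) < 1e-18
```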
Finally, it is sometimes said that a given \((z,\lambda)\) Fresnel zone plate acts like a lens of focal length \(z\); however, due to the dependence on \(\lambda\), such a lens suffers from chromatic aberration.
Now consider the complementary problem of calculating \(\tilde{\psi}(0,z)\) for a circular obstruction of radius \(R\), rather than a circular aperture of radius \(R\) carved into an infinitely-extending obstruction. Clearly, the two are related by subtracting the solution \(\psi(0,z)\) of the circular aperture of radius \(R\) from the free, unobstructed plane wave \(e^{ikz}\); this obvious corollary of the linearity of the scalar Helmholtz equation is an instance of Babinet’s principle. This leads to the counterintuitive prediction of Poisson’s spot (also called Arago’s spot):
where, taking into account the obliquity kernel \(K(\textbf x-\textbf x’)\), this holds as long as one doesn’t wander too close to the obstruction. Note from the earlier case of the circular aperture that there was the complementary (and equally counterintuitive) prediction that one could get a dark on-axis spot at certain \(z\) (i.e. those for which the aperture \(R=\rho’_{2m}\) partitions into an even number \(2m\) of Fresnel half-period zones as evident from the formula \(|\psi(0,z)|^2\sim\sin^2k(\sqrt{R^2+z^2}-z)/2\)).
Talk about how, by working with a scalar \(\psi\), one has basically neglected polarization, which only comes about from the vectorial nature of the electromagnetic field; this forms the basis of so-called scalar wave theory or scalar diffraction theory. Connect all this to the Lippmann-Schwinger equation in quantum mechanical scattering theory (specifically, this is basically the first-order Born approximation solution to the LS equation).
Fraunhofer is taxicab, treats wavefronts as planar, Fresnel is \(\ell^2\), actually considers their curvature. Actually, anywhere that Fraunhofer works, Fresnel also works.
Geometrical Optics
Consider a spherical glass of index \(n’\) and radius \(R>0\) placed in a background of index \(n\), and a paraxial light ray incident at angle \(\theta\) and distance \(\rho\) (where both \(\theta\) and \(\rho\) are measured with respect to some suitable choice of principal \(z\)-axis):
The incident angle of the yellow ray is \(\theta+\rho/R\) while its refracted angle is \(\theta’+\rho/R\) so Snell’s law asserts (paraxially) that:
There is no need to memorize such a matrix; instead, because it is \(2\times 2\), it can always be quickly rederived by finding two linearly independent vectors on which the action of such a matrix is physically obvious. The natural choice are its eigenvectors, which correspond physically to the following two “eigenrays”:
In the limit \(R\to\infty\) of a flat interface (e.g. in a plano-convex lens), the paraxial ray transfer matrix reduces to the diagonal matrix \(\begin{pmatrix}1&0\\ 0&n/n’\end{pmatrix}\).
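These limiting behaviors can be checked in a few lines (a Python sketch; the explicit matrix form below is an assumption, chosen to be consistent with the flat-interface limit \(\text{diag}(1,n/n’)\) just quoted):

```python
# Standard paraxial refraction matrix at a spherical interface, acting on a
# ray (rho, theta); consistent with the flat-interface limit diag(1, n/n')
def refraction(n, n_prime, R):
    return [[1.0, 0.0], [-(n_prime - n) / (n_prime * R), n / n_prime]]

def apply(M, ray):
    rho, theta = ray
    return (M[0][0] * rho + M[0][1] * theta, M[1][0] * rho + M[1][1] * theta)

n, n_prime = 1.0, 1.5
M_flat = refraction(n, n_prime, 1e12)  # R -> infinity: flat interface
assert abs(M_flat[1][0]) < 1e-12 and abs(M_flat[1][1] - n / n_prime) < 1e-12

# a ray through the vertex (rho = 0) just obeys paraxial Snell: n*theta = n'*theta'
rho_out, theta_out = apply(refraction(n, n_prime, 0.1), (0.0, 0.02))
assert abs(n_prime * theta_out - n * 0.02) < 1e-12
```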
Welding two such spherical glasses (of the same index \(n’\) and radii \(R>0,R'<0\) in the usual Cartesian sign convention) together back-to-back and assuming the usual thin-lens approximation (otherwise one would also need to include a propagation ray transfer matrix \(\begin{pmatrix}1&\Delta z\\ 0&1\end{pmatrix}\) if the thickness \(\Delta z>0\) were non-negligible), one obtains the paraxial ray transfer matrix of a thin convex lens (indeed any thin lens):
As a check of this formula’s self-consistency, consider the special case of a thin plano-convex lens (where the convex side has radius \(R>0\)). According to the lensmaker’s equation, this should have focal length \(1/f=(n’-n)/nR\). On the other hand, if one were to flip this plano-convex lens around (and call the radius of the convex side \(R'<0\)), then the lensmaker’s formula says it should now have focal length \(1/f’=-(n’-n)/nR’\). But, since both plano-convex lenses are thin, if one simply puts them next to each other then one would reform the thin convex lens as before, with effective focal length:
In other words, the optical powers are additive: \(P_{\text{eff}}=P+P’\). Note that this holds for any \(2\) thin lenses placed next to each other to form an “effective lens”, not just for the example of \(2\) plano-convex lenses given above. More generally, because the group of shears on \(\textbf R^2\) along a given direction is isomorphic to the additive abelian group \(\textbf R\), it holds for any \(N\in\textbf Z^+\) thin lenses arranged in an arbitrary order:
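Both the welding construction and the additivity of optical powers can be checked with a little matrix arithmetic (a Python sketch; the refraction matrix used is the standard paraxial one acting on \((\rho,\theta)\), consistent with the flat-interface limit \(\text{diag}(1,n/n’)\) quoted earlier, and all indices, radii, and powers are illustrative):

```python
import functools

# Weld two spherical refracting surfaces into a thin lens and check the
# lensmaker's power, then check additivity of thin-lens powers
def refraction(n, n_prime, R):
    # standard paraxial refraction matrix at a spherical interface
    return [[1.0, 0.0], [-(n_prime - n) / (n_prime * R), n / n_prime]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

n, n_p, R, R_p = 1.0, 1.5, 0.10, -0.25
lens = matmul(refraction(n_p, n, R_p), refraction(n, n_p, R))  # second surface: n' -> n
P = (n_p - n) / n * (1 / R - 1 / R_p)  # lensmaker's power in background n
assert abs(lens[1][0] + P) < 1e-12 and abs(lens[0][1]) < 1e-12

# optical powers of any number of thin lenses in contact are additive
powers = [2.0, -1.0, 5.0, 0.5]
stack = functools.reduce(matmul, [[[1.0, 0.0], [-p, 1.0]] for p in powers])
assert abs(stack[1][0] + sum(powers)) < 1e-12
```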
It’s worth quickly clarifying why these are even thin lenses in the first place and why the claimed \(f\) really is a suitable notion of focal length. One can proceed axiomatically, demanding that a thin lens be any optical element which:
Focuses all incident light rays parallel to the principal \(z\)-axis to a focal point \(f\) (i.e. \((\rho,0)\mapsto(\rho,-\rho/f)\)).
Doesn’t affect any light rays that pass through the principal \(z\)-axis (i.e. \((0,\theta)\mapsto(0,\theta)\)).
These are two linearly independent vectors (though only the latter is an eigenvector, as shear transformations are famous for being non-diagonalizable), so these \(2\) axioms are sufficient to fix the form \(\begin{pmatrix}1&0\\-1/f&1\end{pmatrix}\) of the paraxial ray transfer matrix of a thin lens.
Often, one would like to use thin lenses to image various objects. Consider an arbitrary point in space sitting a distance \(\rho\) above the principal \(z\)-axis and a distance \(z>0\) away from a thin lens. If a light ray is emitted from this point at some angle \(\theta\), refracts through the thin lens (of focal length \(f\)), and ends up at some point \(\rho’,z’\) after the lens during its trajectory, then one has:
where the composition of those \(3\) matrices evaluates to \(\begin{pmatrix}1-z’/f&z+z’-zz’/f\\-1/f&1-z/f\end{pmatrix}\). But this has an important corollary; if one were to specifically choose the distance \(z’>0\) such as to make the top-right entry vanish \(z+z’-zz’/f=0\iff f^2=(z-f)(z’-f)\iff 1/f=1/z+1/z’\), then \(\rho’=(1-z’/f)\rho=\rho/(1-z/f)\) would be independent of \(\theta\)! The condition \(1/f=1/z+1/z’\) is sometimes called the (Gaussian) thin lens equation, though a better name would simply be the imaging condition. The corresponding linear transverse magnification is \(M_{\rho}:=\rho’/\rho=-z’/z=1/(1-z/f)\). One sometimes also sees the linear longitudinal magnification \(M_{z}:=\partial z’/\partial z=-1/(1-z/f)^2=-M_{\rho}^2<0\) which is always negative.
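The imaging condition is straightforward to verify numerically (a Python sketch with illustrative \(f,z\)):

```python
# Ray-transfer-matrix check of the imaging condition (illustrative f, z)
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def propagate(d):   # free-space propagation by a distance d
    return [[1.0, d], [0.0, 1.0]]

def thin_lens(f):   # thin lens of focal length f
    return [[1.0, 0.0], [-1.0 / f, 1.0]]

f, z = 2.0, 3.0
z_p = 1.0 / (1.0 / f - 1.0 / z)  # imaging condition 1/f = 1/z + 1/z' gives z' = 6
M = matmul(propagate(z_p), matmul(thin_lens(f), propagate(z)))

assert abs(M[0][1]) < 1e-12            # top-right entry vanishes: rho' independent of theta
assert abs(M[0][0] + z_p / z) < 1e-12  # transverse magnification M_rho = -z'/z = -2
```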
A magnifying glass works by placing an object at \(z\approx f\) so as to form a virtual image at a distance \(z’\to -\infty\). In that case, both \(M_{\rho},M_z\to\infty\) exhibit poles at \(z=f\), so what does it mean when a company advertises a magnifying glass as offering e.g. \(\times 40\) magnification? It turns out this is actually a specification of the angular magnification \(M_{\theta}:=\theta’/\theta=(\rho/f)/(\rho/d)=d/f\) of the convex lens when viewed at a distance \(d=25\text{ cm}\) from the object (not from the lens). So the statement that \(M_{\theta}=40\) is really a statement about the focal length \(f=d/M_{\theta}=0.625\text{ cm}\) of the lens. In turn, if the glass of the magnifier has a typical index such as \(n=1.5\) and is intended to be symmetric, then the lensmaker’s equation requires one to use \(R=-R’=0.625\text{ cm}\) in air (coincidentally the same as \(f\)).
The set of paraxial rays \((\rho,\theta)\) constitute a real, \(2\)-dimensional vector space on which optical elements such as lenses act by linear transformations. For instance, a collection of parallel rays incident on the lens (represented by the horizontal line below) is first sheared vertically by the lens, and subsequently free space propagation by a distance \(f\) shears the resultant line horizontally to the point that it becomes vertical, indicating that all the parallel rays have been focused to the same point (thus, this is an instance of the general identity \(\arctan f+\arctan 1/f=\text{sgn}(f)\pi/2\) for arbitrary \(f\in\textbf R-\{0\}\)).
An important corollary of this is that, if one wishes to observe the Fraunhofer interference pattern of some aperture at any distance \(f\) of interest, not necessarily just in the far-field \(f\gg z_R\), a simple way to achieve this is to just place a thin convex lens of focal length \(f\) into the aperture. Recalling that the Fraunhofer interference pattern arises by the superposition of (essentially) parallel Huygens wavelet contributions from each point on the aperture (parallel because one is working in the far-field), and recalling that a lens focuses all incident parallel rays onto a given point in its back focal plane \(f\), this provides a geometrical optics way of seeing why one can form the Fraunhofer interference pattern at an arbitrary distance \(f\) simply by choosing a suitable convex lens.
There is also a more physical optics way of seeing the same result. Recall that, at the end of the day, a lens is just two spherical caps of radii \(R,R’\) that have been welded together. In Cartesian coordinates, the equations of such caps are \(z=-\sqrt{R^2-x’^2-y’^2}\) and \(z=\sqrt{R’^2-x’^2-y’^2}\), but in the paraxial approximation, these look like the paraboloids \(z\approx -R+(x’^2+y’^2)/2R\) and \(z\approx -R’+(x’^2+y’^2)/2R’\) (where \(R'<0\) for a convex lens, etc.). Here, despite using a “thin” lens approximation, one cannot completely ignore the thickness profile across the lens (also a paraboloid):
It is important to understand that \(k\) here is the free space wavenumber, but that in a medium \(n\) it becomes \(k\mapsto nk\) because \(\omega=ck=vnk\) is fixed. This corresponds to a spatially-varying \(U(1)\) modulation of the aperture field:
where the lensmaker’s equation has been used. But notice that, when inserted into the Fresnel diffraction integral (with \(k\mapsto nk\) and hence \(\lambda\mapsto\lambda/n\)), if one places the screen exactly at \(z=f\), then the quadratic phase terms cancel out and one is left with precisely the Fraunhofer interference pattern:
More generally, for an arbitrary optical component with ray transfer matrix \(\begin{pmatrix}A&B\\C&D\end{pmatrix}\) in the geometrical optics picture, its corresponding operator in the physical optics picture is \(e^{}\).
Problem: Show that \(2\) closely-spaced wavenumbers \(k,k’\) separated by \(\Delta k:=|k-k’|\) can be resolved by a diffraction grating with \(N\) slits at the \(m^{\text{th}}\)-order iff:
\[\frac{\Delta k}{k}\geq \frac{1}{mN}\]
(note the spectral resolution can also be expressed in terms of wavelengths \(\Delta k/k=\Delta\lambda/\lambda\) because \(k\lambda=2\pi\)).
Solution: Although not relevant to the final result, it is useful to introduce the length \(L\) of the entire grating as well as the separation \(d\) between adjacent slits (each considered infinitely thin) so that \(L=Nd\).
Now, for all wavenumbers \(k\), the Fraunhofer interference pattern in \(k_x\)-space looks the same:
The maxima only appear to split in \(\sin\theta\)-space, where more precisely the \(m^{\text{th}}\)-order maxima of the \(2\) wavenumbers \(k,k’\) are now separated by:
And the Rayleigh criterion considers these maxima to be resolved iff \(\Delta\sin\theta\) is at least the width (again in \(\sin\theta\)-space) of any one of the maxima \(\frac{2\pi}{kL}=\frac{\lambda}{L}\). So:
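The zero-spacing underlying this resolving power can be checked numerically (a Python sketch; the slit amplitudes are summed directly, with \(N\) and \(m\) illustrative):

```python
import cmath, math

# N-slit grating amplitude as a function of the phase phi = k*d*sin(theta)
# accumulated between adjacent slits
def amplitude(N, phi):
    return sum(cmath.exp(1j * n * phi) for n in range(N))

N, m = 8, 1
phi_max = 2 * math.pi * m             # m-th order principal maximum
phi_zero = phi_max + 2 * math.pi / N  # nearest zero, offset by 2*pi/N
assert abs(abs(amplitude(N, phi_max)) - N) < 1e-9
assert abs(amplitude(N, phi_zero)) < 1e-9
# an offset of 2*pi/N in phi is an offset of lambda/(N*d) = lambda/L in
# sin(theta), which is exactly the maximum's width used in the Rayleigh criterion
```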
Lenses, sign conventions (basically, one key point is that an optical element which tends to converge light rays has positive focal length).
Real objects are a source of rays, real images are a sink of rays, virtual images are a source of rays (probably all of this can be made precise in the eikonal approximation).
Gaussian optics as paraxial geometrical optics
No notion of \(\lambda\) (can view geometrical optics as the \(\lambda\to 0\) limit of physical optics)
Ray tracing algorithms.
Spherical aberration
Chromatic aberration due to optical dispersion \(n(\lambda)\) as the only time where \(\lambda\) shows up, sort of ad hoc.
(keeping in mind though that there are many variants on this simple Ising model).
Problem #\(2\): Is the Ising model classical or quantum mechanical?
Solution #\(2\): It is purely classical. Indeed, this is a very common misconception, because many of the words that get tossed around when discussing the Ising model (e.g. “spins” on a lattice, “Hamiltonian”, “(anti)ferromagnetism”, etc.) sound like they are quantum mechanical concepts, and indeed they are, but the Ising model by itself is a purely classical mathematical model that a priori need not have any connection to physics (and certainly not to quantum mechanical systems; that being said, it’s still useful for intuition to speak about it as if it were a toy model of a ferromagnet).
To hit this point home, remember that the Hamiltonian \(H\) is just a function on phase space in classical mechanics, whereas it is an operator in quantum mechanics…but in the formula for \(H\) in Solution #\(1\), there are no operators on the RHS, the \(\sigma_i\in\{-1,1\}\) are just some numbers which specify the classical microstate \((\sigma_1,\sigma_2,…)\) of the system, so it is much more similar to just a classical (as opposed to quantum) Hamiltonian. And there are no superpositions of states, or non-commuting operators, or any other quantum voodoo going on. So, despite the discreteness/quantization which is built into the Ising model, it is purely classical.
Problem #\(3\): What does it mean to “solve” the Ising model? (i.e. what properties of the Ising lattice is one interested in understanding?)
Solution #\(3\): The mental picture one should have in mind is that of coupling the Ising lattice with a heat bath at some temperature \(T\), and then ask how the order parameter \(m\) of the lattice (in this case the Boltzmann-averaged mean magnetization) varies with the choice of heat bath temperature \(T\). Intuitively, one should already have a qualitative sense of the answer:
So to “solve” the Ising model just means to quantitatively get the equation of those curves \(m=m(T)\) for all possible combinations of parameters \(E_{\text{int}},E_{\text{ext}}\in\textbf R\) in the Ising Hamiltonian \(H\).
Problem #\(4\):
Solution #\(4\):
Problem #\(5\):
Solution #\(5\):
Comparing with the earlier intuitive sketch (note all the inner loop branches at low temperature are unstable):
In particular, the phase transition at \(E_{\text{ext}}=0\) is manifest by the trifurcation at the critical point \(\beta=\beta_c\).
Problem #\(6\):
Solution #\(6\):
Problem #\(7\): Show that in the mean field approximation, the short-range Ising model at \(E_{\text{ext}}=0\) experiences a \(2\)-nd order phase transition, in which the equilibrium magnetization \(m_*(T)\) for \(T<T_c\) (but \(T\) close to \(T_c\)) goes like \(m_*(T)\approx\pm\sqrt{3(T_c/T-1)}\).
Solution #\(7\): Within (stupid!) mean-field theory, the effective free energy is:
So anyways, Maclaurin-expanding the mean-field effective free energy \(f(m)\) per unit spin:
The spontaneous \(\textbf Z_2\) symmetry breaking (i.e. ground state not preserved!) associated to the \(T<T_c\) ordered phase at \(E_{\text{ext}}=0\) is apparent:
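The asymptotic form above can be compared against a direct numerical solution (a Python sketch; the mean-field self-consistency equation is taken to be \(m=\tanh(T_cm/T)\) at \(E_{\text{ext}}=0\), an assumption consistent with the quoted \(m_*(T)\), and fixed-point iteration is just one convenient root-finder):

```python
import math

# Solve the assumed mean-field self-consistency equation m = tanh((T_c/T)*m)
# by fixed-point iteration, and compare with m_* ≈ sqrt(3*(T_c/T - 1)) near T_c
def m_star(T_over_Tc, iters=10000):
    m = 0.5  # any nonzero seed flows to the positive root
    for _ in range(iters):
        m = math.tanh(m / T_over_Tc)
    return m

approx = math.sqrt(3 * (1 / 0.99 - 1))
assert abs(m_star(0.99) - approx) < 0.01  # ordered phase, T = 0.99*T_c
assert m_star(1.5) < 1e-6                 # disordered phase: m_* = 0 for T > T_c
```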
Problem #\(8\): In the Ehrenfest classification of phase transitions, an \(N\)-th order phase transition occurs when the \(N\)-th derivative of the free energy \(\frac{\partial^N F}{\partial m^N}\) is discontinuous at some critical value of the order parameter \(m_*\). But considering that \(F=-k_BT\ln Z\) and the partition function \(Z=\sum_{\{\sigma_i\}}e^{-\beta E_{\{\sigma_i\}}}\) is a sum of \(2^N\) analytic exponentials, how can phase transitions be possible?
Solution #\(8\): By analogy, consider the Fourier series for a certain square wave:
\[f(t)=\frac{4}{\pi}\sum_{n=1,3,5,…}^{\infty}\frac{\sin(2\pi n t/T)}{n}\]
Although each sinusoid in the Fourier series is everywhere analytic, the series converges in \(L^2\) norm to a limiting square wave which has discontinuities at \(t_m=mT/2\), hence not being analytic at those points! So the catch here is that while any finite series of analytic functions (e.g. a partial sum truncation) will have its analyticity preserved, an infinite series need not! This simple result of analysis underpins the existence of phase transitions! In practice of course, for any finite number of spins \(N<\infty\), \(2^N\) will still be finite and in fact there are strictly speaking no phase transitions in any finite system. But in practice \(N\sim 10^{23}\) is so large that it is effectively infinite, and so in this \(N\to\infty\) limit it looks for all intents and purposes like a phase transition.
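This convergence is easy to see numerically (a Python sketch; the partial sums are evaluated at a point of continuity and at a jump itself):

```python
import math

# Partial sums of the square-wave Fourier series: each term is analytic,
# yet the limit develops jumps; evaluate at a point of continuity (t = T/4,
# where the series converges to 1) and at the jump itself (t = 0)
def f_partial(t, T, n_max):
    return (4 / math.pi) * sum(math.sin(2 * math.pi * n * t / T) / n
                               for n in range(1, n_max + 1, 2))

T = 1.0
assert abs(f_partial(T / 4, T, 20001) - 1.0) < 1e-4  # Leibniz-type convergence to +1
assert abs(f_partial(0.0, T, 20001)) < 1e-12         # the series averages the jump to 0
```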
Similar to the phase transitions, spontaneous symmetry breaking is also an \(N=\infty\) phenomenon only, strictly speaking:
where the limits do not commute \(\lim_{E_{\text{ext}}\to 0}\lim_{N\to\infty}\neq\lim_{N\to\infty}\lim_{E_{\text{ext}}\to 0}\) because for any finite \(N<\infty\), \(\langle m\rangle_N=-\frac{1}{N}\frac{\partial F_H}{\partial E_{\text{ext}}}|_{E_{\text{ext}}=0}=0\) since \(\textbf Z_2\) symmetry enforces \(F_H(E_{\text{ext}})=F_H(-E_{\text{ext}})\) so that its derivative must be odd and therefore vanishing at the origin.
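The finite-\(N\) statement \(\langle m\rangle_N=0\) can be illustrated by exact enumeration at small \(N\) (a Python sketch; a \(1\)D nearest-neighbor ring with coupling \(E_{\text{int}}\) is assumed purely for illustration):

```python
import itertools, math

# Exact enumeration of a small 1D Ising ring at E_ext = 0: the Z_2 symmetry
# sigma -> -sigma pairs up microstates of equal Boltzmann weight and opposite
# magnetization, forcing <m>_N = 0 at any finite N and any temperature
def mean_magnetization(N, beta, E_int=1.0):
    Z, m_acc = 0.0, 0.0
    for spins in itertools.product((-1, 1), repeat=N):
        E = -E_int * sum(spins[i] * spins[(i + 1) % N] for i in range(N))
        w = math.exp(-beta * E)
        Z += w
        m_acc += w * sum(spins) / N
    return m_acc / Z

assert abs(mean_magnetization(N=6, beta=2.0)) < 1e-12
```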
Problem #\(9\): Show that, in the mean-field short-range Ising model at \(E_{\text{ext}}=0\), the specific/intensive heat capacity \(c\) is discontinuous at \(T=T_c\).
Solution #\(9\):
Appendix: Physical Systems Described by Classical Ising Statistics
The purpose of this post is to dive into the intricacies of the classical Ising model. For this, it is useful to imagine the Bravais lattice \(\textbf Z^d\) in \(d\) dimensions of lattice parameter \(a\), together with a large number \(N\) of neutral spin \(s=1/2\) fermions (e.g. neutrons, ignoring the fact that isolated neutrons are unstable) tightly bound to the lattice sites \(\textbf x\in\textbf Z^d\), each site accommodating at most one fermion by the Pauli exclusion principle. On top of all this, apply a uniform external magnetic field \(\textbf B_{\text{ext}}\) across the entire sample of \(N\) fermions. Physically then, ignoring any kinetic energy or hopping/tunneling between lattice sites (cf. the Fermi-Hubbard model), there are two forms of potential energy that contribute to the total Hamiltonian \(H\) of this lattice of spins:
Each of these \(N\) neutral fermions has a magnetic dipole moment \(\boldsymbol{\mu}_{\textbf S}=\gamma_{\textbf S}\textbf S\) arising from its spin angular momentum \(\textbf S\) (in the case of charged fermions such as electrons \(e^-\), this is just the usual \(\gamma_{\textbf S}=-g_{\textbf S}\mu_B/\hbar\) but for neutrons the origin of such a magnetic dipole moment is more subtle, ultimately arising from its quark structure). This magnetic dipole moment \(\boldsymbol{\mu}_{\textbf S}\) couples with the external magnetic field \(\textbf B_{\text{ext}}\), leading to an interaction energy of the form:
Thus, the total Hamiltonian \(H\) on the \(N\)-spin state space \(\mathcal H\cong(\textbf C^2)^{\otimes N}\) is:
\[H=V_{\text{int}}+V_{\text{ext}}\]
Right now, it is hopelessly complicated. From this point onward, a sequence of dubious approximations will be applied to transform this current Hamiltonian \(H\mapsto H_{\text{Ising}}\) to the Ising Hamiltonian \(H_{\text{Ising}}\) (in fact, as mentioned, even the apparently complicated form of the Hamiltonian \(H\) is already approximate; the reason for using neutral fermions is to avoid dealing with an additional Coulomb repulsion contribution to \(H\)).
Approximation #1: Recall that the direction of the applied magnetic field, say along the \(z\)-axis \(\textbf B_{\text{ext}}=B_{\text{ext}}\hat{\textbf k}\), defines the quantization axis of all the relevant angular momenta. For a sufficiently strong magnetic field \(\textbf B_{\text{ext}}\) (cf. the Paschen-Back effect in atoms), the external coupling \(V_{\text{ext}}\) should dominate the internal coupling \(V_{\text{int}}\) and so all \(N\) spin angular momenta \(\textbf S_i\) will Larmor-precess around \(\textbf B_{\text{ext}}\) with \(m_{s,i}\in\{-1/2,1/2\}\) becoming a good quantum number.
Approximation #2: Assume that only nearest-neighbour dipolar couplings are important (in \(\textbf Z^d\) there would be \(2d\) nearest neighbours) and that moreover, because all the spins are roughly aligned in the direction of \(\textbf B_{\text{ext}}\), the term \((\textbf S_i\cdot\Delta\hat{\textbf x}_{ij})(\textbf S_j\cdot\Delta\hat{\textbf x}_{ij})\) is not as important as the spin-spin coupling term \(\textbf S_i\cdot\textbf S_j\).
Combining these two approximations, one obtains the Ising Hamiltonian \(H_{\text{Ising}}\) acting on the Ising state space \(\mathcal H_{\text{Ising}}\cong\{-1,1\}^N\):
where \(\sigma_i:=2m_{s,i}\in\{-1,1\}\), \(E_{\text{int}}:=\mu_0\hbar^2\gamma_{\textbf S}^2/16\pi a^3\) is a proxy for the interaction strength between adjacent fermions via the energy gain of being mutually spin-aligned, and \(E_{\text{ext}}:=\hbar\gamma_{\textbf S}B_{\text{ext}}/2\) is a proxy for the external field strength via the energy gain of being spin-aligned with it. In the context of magnetism, a material with \(E_{\text{int}}>0\) would be thought of as a ferromagnet while one with \(E_{\text{int}}<0\) is called an antiferromagnet (this possibility does not arise, however, after the various approximations that were made). Similarly, \(E_{\text{ext}}\) can be either positive or negative (e.g. for neutrons it is actually negative \(E_{\text{ext}}<0\) because \(\gamma_{\textbf S}<0\)) but for intuition purposes one can just think of \(E_{\text{ext}}>0\) so that being spin-aligned with \(\textbf B_{\text{ext}}\) is the desirable state of affairs.
From \(H_{\text{Ising}}\) to \(Z_{\text{Ising}}\)
As usual, once the Hamiltonian \(H_{\text{Ising}}\) has been found (i.e. once the physics has been specified), the rest is just math. In particular, the usual next task is to calculate its canonical partition function \(Z_{\text{Ising}}=\text{Tr}(e^{-\beta H_{\text{Ising}}})\). The calculation of \(Z_{\text{Ising}}\) can be done exactly in dimension \(d=1\) for arbitrary \(E_{\text{ext}}\) (this is what Ising did in his PhD thesis) and also for \(d=2\) provided the absence of an external magnetic field \(E_{\text{ext}}=0\) (this is due to Onsager). In higher dimensions \(d\gg 1\), as the number \(2d\) of nearest neighbours increases, the accuracy of an approximate method for evaluating \(Z_{\text{Ising}}\) known as mean field theory increases accordingly, becoming exact only in the unphysical limit \(d\to\infty\). It is simplest to first work through the mathematics of the mean field theory approach before looking at the special low-dimensional cases \(d=1\) and \((d=2,E_{\text{ext}}=0)\). It is worth emphasizing that the Ising model can also be trivially solved in any dimension \(d\) if interactions are simply turned off \(E_{\text{int}}=0\), but this would be utterly missing the whole point of the Ising model! (Edit: in hindsight, maybe not really after all; see the section below on mean field theory.)
First, just from inspecting the Hamiltonian \(H_{\text{Ising}}\) it is clear that the net “magnetization” \(\Sigma:=\sum_{i=1}^N\sigma_i\) is conjugate to \(E_{\text{ext}}\), so in the canonical ensemble it fluctuates around the expectation:
The ensemble-averaged spin is therefore \(\langle\sigma\rangle=\langle\Sigma\rangle/N\). The usual “proper” way to calculate \(\langle\sigma\rangle\) would be to just directly and analytically evaluate the sums in \(Z_{\text{Ising}}=e^{-\beta F_{\text{Ising}}}\), so in particular \(\langle\sigma\rangle\) shouldn’t appear anywhere until one explicitly calculates it. However, using mean field theory, it turns out one will end up with an implicit equation for \(\langle\sigma\rangle\) that can nevertheless still be solved in a self-consistent manner.
To begin, write \(\sigma_i=\langle\sigma\rangle+\delta\sigma_i\) (cf. the Reynolds decomposition used to derive the RANS equations in turbulent fluid mechanics). Then the interaction term in \(H_{\text{Ising}}\) (which is both the all-important term but also the one that makes the problem hard) can be written:
Although the variance \(\langle\delta\sigma_i^2\rangle=\langle\sigma_i^2\rangle-\langle\sigma\rangle^2=1-\langle\sigma\rangle^2\) of each individual spin \(\sigma_i\) from the mean background spin \(\langle\sigma\rangle\) is not in general going to be zero (unless of course the entire system is magnetized along or against \(\textbf B_{\text{ext}}\), i.e. \(\langle\sigma\rangle=\pm 1\)), the mean field approximation says that the covariance between distinct neighbouring spins \(\langle i,j\rangle\) should average to \(\sum_{\langle i,j\rangle}\delta\sigma_i\delta\sigma_j\approx 0\), so that, roughly speaking, the overall \(N\times N\) covariance matrix of the spins is not only diagonal but just proportional to the identity \((1-\langle\sigma\rangle^2)1_{N\times N}\).
Thus, reverting back to \(\delta\sigma_i=\sigma_i-\langle\sigma\rangle\) and using for the lattice \(\textbf Z^d\) the identity \(\sum_{\langle i,j\rangle}1\approx Nd\) (because each of \(N\) spins has \(2d\) nearest neighbours but a factor of \(1/2\) is needed to compensate double-counting each bond) and the identity \(\sum_{\langle i,j\rangle}(\sigma_i+\sigma_j)\approx 2d\sum_{i=1}^N\sigma_i\) (just draw a picture), the mean field Ising Hamiltonian \(H’_{\text{Ising}}\) simplifies to:
where the constant \(NdE_{\text{int}}\langle\sigma\rangle^2\) doesn’t affect any of the physics (although it will be kept in the calculations below for clarity) and \(E_{\text{ext}}’=E_{\text{ext}}+2dE_{\text{int}}\langle\sigma\rangle\) is the original energy \(E_{\text{ext}}\) together now with a mean field contribution \(2dE_{\text{int}}\langle\sigma\rangle\). This has a straightforward interpretation; one is still acknowledging that only the \(2d\) nearest neighbouring spins can influence a given spin, but now, rather than each one having its own spin \(\sigma_j\), one is assuming that they all exert the same mean field \(\langle\sigma\rangle\) that permeates the entire Ising lattice \(\textbf Z^d\). Basically, the mean field approximation has removed the interaction term entirely, reducing the problem to a trivial non-interacting one for which the partition function is straightforward to calculate (this just repeats the usual steps of calculating, e.g. the Schottky anomaly):
As promised earlier, this is an implicit equation for the average spin \(\langle\sigma\rangle\) that can, for a fixed dimension \(d\), be solved for various values of the temperature \(T=1/k_B\beta\) (which intuitively wants to randomize the spins) and energies \(E_{\text{int}},E_{\text{ext}}\) (both of which intuitively want to align the spins). The outcome of this competition is the following:
If one applies any kind of external magnetic field \(E_{\text{ext}}\neq 0\), then as one increases the temperature (i.e. \(2d\beta E_{\text{int}}\to 0\)), the mean spin \(\langle\sigma\rangle\to 0\) randomizes gradually (more precisely, \(\langle\sigma\rangle=\beta E_{\text{ext}}+2dE_{\text{int}}E_{\text{ext}}\beta^2+O_{\beta\to 0}(\beta^3)\)). The surprise though occurs in the absence of any external magnetic field \(E_{\text{ext}}=0\); here, driven solely by mean field interactions, the mean spin abruptly vanishes \(\langle\sigma\rangle=0\) at all temperatures \(T\geq T_c\) exceeding a critical temperature \(k_BT_c=2dE_{\text{int}}\). This is a second-order ferromagnetic-to-paramagnetic phase transition (it is second order because the discontinuity occurs in the derivative of \(\langle\sigma\rangle\), which itself is already a derivative of the free energy \(F’_{\text{Ising}}\)). Meanwhile, there is also a first-order phase transition given by fixing a subcritical temperature \(T<T_c\) and varying \(E_{\text{ext}}\), as in this case it is the mean magnetization \(\langle\sigma\rangle\) itself that jumps discontinuously.
Note also that, similar to the situation for the Van der Waals equation when one had \(T<T_c\), here it is apparent that at sufficiently low temperatures and for arbitrary \(E_{\text{ext}}\), the mean field Ising model predicts \(3\) possible mean magnetizations \(\langle\sigma\rangle\). For \(E_{\text{ext}}=0\), the unmagnetized solution \(\langle\sigma\rangle=0\) turns out to be an unstable equilibrium. For \(E_{\text{ext}}>0\), the solution on the top branch with \(\langle\sigma\rangle>0\) aligned with the external magnetic field is stable while, of the two solutions with \(\langle\sigma\rangle<0\), one is likewise unstable and one is metastable; similarly for \(E_{\text{ext}}<0\).
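The self-consistency equation \(\langle\sigma\rangle=\tanh(\beta(E_{\text{ext}}+2dE_{\text{int}}\langle\sigma\rangle))\) can be solved by simple fixed-point iteration; a minimal sketch in illustrative units \(k_B=E_{\text{int}}=1\) (the seed value and iteration count are arbitrary choices):

```python
import numpy as np

# Fixed-point iteration of the mean-field self-consistency equation
#   <sigma> = tanh(beta*(E_ext + 2*d*E_int*<sigma>)),
# which predicts k_B*T_c = 2*d*E_int (illustrative units: k_B = E_int = 1).
def solve_mean_spin(T, d=3, E_int=1.0, E_ext=0.0, s0=0.9, n_iter=2000):
    s = s0  # seed away from 0 so the iteration can land on a broken-symmetry branch
    for _ in range(n_iter):
        s = np.tanh((E_ext + 2 * d * E_int * s) / T)
    return s

d, E_int = 3, 1.0
Tc = 2 * d * E_int                            # mean-field critical temperature
s_below = solve_mean_spin(T=0.5 * Tc, d=d)    # ordered phase: |<sigma>| near 1
s_above = solve_mean_spin(T=2.0 * Tc, d=d)    # disordered phase: <sigma> -> 0
```

Seeding at \(\langle\sigma\rangle=0\) would trivially return the unstable unmagnetized solution below \(T_c\); seeding away from zero picks out the stable branch instead.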
Critical Exponents
Solving The Ising Chain (\(d=1\)) Via Transfer Matrices
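As a numerical sanity check of the transfer-matrix method, the partition function of the periodic chain can be written \(Z=\text{Tr}(T^N)\) for a \(2\times 2\) transfer matrix \(T\) and compared against brute-force enumeration for small \(N\) (a sketch with arbitrary illustrative parameter values):

```python
import itertools
import numpy as np

# Transfer-matrix evaluation of Z for the periodic Ising chain with
# H = -E_int * sum s_i s_{i+1} - E_ext * sum s_i, checked against brute force.
beta, E_int, E_ext, N = 1.0, 1.0, 0.3, 10

# Transfer matrix: T[s, s'] = exp(beta*(E_int*s*s' + E_ext*(s + s')/2)),
# with the field split half-and-half between the two bonds touching each spin.
s_vals = [-1, 1]
T = np.array([[np.exp(beta * (E_int * s * sp + E_ext * (s + sp) / 2))
               for sp in s_vals] for s in s_vals])
Z_transfer = np.trace(np.linalg.matrix_power(T, N))

# Brute force over all 2^N configurations (periodic boundary conditions).
Z_brute = 0.0
for spins in itertools.product(s_vals, repeat=N):
    E = -E_int * sum(spins[i] * spins[(i + 1) % N] for i in range(N)) \
        - E_ext * sum(spins)
    Z_brute += np.exp(-beta * E)
```

In the thermodynamic limit only the largest eigenvalue of \(T\) survives, \(Z\approx\lambda_+^N\), which is how the exact \(d=1\) free energy is usually extracted.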
Low & High-\(T\) Limits of Ising Model in \(d=2\) Dimensions
Talk about Peierls droplet, prove Kramers-Wannier duality between the low and high-\(T\) regimes.
Beyond Ferromagnetism
The point of the Ising model isn’t really to be some kind of accurate model of any real-life physical system, but rather a “proof of concept” demonstration that phase transitions can arise from statistical mechanics; although the sum of finitely many analytic functions is analytic, in the thermodynamic limit a phase transition can appear. The same vein of mathematical modelling can also be used to describe lattice gases, among other systems.
The purpose of this post is to study the universal properties of fully developed turbulence \(\text{Re}\gg\text{Re}^*\sim 10^3\). Thanks to direct numerical simulation (DNS), there is strong evidence to suggest that the nonlinear advective term \(\left(\textbf v\cdot\frac{\partial}{\partial\textbf x}\right)\textbf v\) in the Navier-Stokes equations correctly captures turbulent flow in fluids. However, rather than trying to find analytical solutions \(\textbf v(\textbf x,t)\) that exhibit turbulence (which is clearly pretty hopeless), it makes sense to decompose \(\textbf v=\bar{\textbf v}+\delta\textbf v\) into a mean velocity field \(\bar{\textbf v}(\textbf x,t)\) plus some fluctuations \(\delta\textbf v(\textbf x,t)\). The precise meaning of the word “mean” in the phrase “mean velocity field” for \(\bar{\textbf v}\) is time-averaged over some “suitable” period \(T\), also known as Reynolds averaging:
By construction, this implies that the Reynolds time average of the fluctuations vanishes \(\overline{\delta\textbf v}=\overline{\textbf v-\bar{\textbf v}}=\bar{\textbf v}-\bar{\textbf v}=\textbf 0\).
One can also check that \(\textbf v\) is incompressible if and only if both the Reynolds averaged flow \(\bar{\textbf v}\) and the fluctuations \(\delta\textbf v\) are also incompressible. One similarly works with the Reynolds averaged pressure \(p=\bar p+\delta p\) so that by design \(\overline{\delta p}=0\).
Substituting \(\textbf v=\bar{\textbf v}+\delta\textbf v\) and \(p=\bar p+\delta p\) into the Navier-Stokes equations and Reynolds averaging both sides of the equation yields the well-named Reynolds-averaged Navier-Stokes (RANS) equations:
where the Reynolds averaged stress tensor \(\bar{\sigma}\) now includes an additional turbulent contribution \(\bar{\sigma}_{\text{Reynolds}}=-\rho\overline{\delta\textbf v\otimes\delta\textbf v}\) known as the Reynolds stress:
(this can be quickly checked using the incompressibility conditions \(\partial_j\bar v_j=\partial_j\delta v_j=0\)).
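The algebra of Reynolds decomposition is easy to sanity-check numerically; a minimal sketch on a synthetic scalar velocity signal (all numbers illustrative), showing that the fluctuation averages to zero and that the mean of \(v^2\) picks up exactly the Reynolds-stress-like term \(\overline{\delta v\,\delta v}\):

```python
import numpy as np

# Numerical check of Reynolds averaging on a synthetic 1D velocity signal: the
# average of the fluctuation vanishes by construction, and mean(v*v) splits
# exactly into v_bar^2 + <dv*dv> (the Reynolds stress term, here a scalar).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 100_000)
v = 2.0 + 0.5 * rng.standard_normal(t.size)  # "mean flow" 2.0 plus fluctuations

v_bar = v.mean()                  # Reynolds (time) average
dv = v - v_bar                    # fluctuation, with mean zero by construction
reynolds_term = np.mean(dv * dv)  # the <dv (x) dv> contribution

# The cross term 2*v_bar*mean(dv) vanishes, so this split is exact:
split_check = v_bar**2 + reynolds_term
```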
At this point, assuming that the external body forces have no fluctuations \(\delta\textbf f=\textbf f-\bar{\textbf f}=\textbf 0\), one can subtract the RANS equations from the original Navier-Stokes equations to obtain:
Taking the outer product of both sides with \(\delta\textbf v\) and then Reynolds averaging yields (to be added: closure problem, Boussinesq approximation as a closure model).
The purpose of this post is to document the uses of several standard components used in optics experiments.
Optical Fibers & APC Connectors
An optical fiber is a waveguide for light waves. The idea is to use it to transmit light over long distances with minimal loss. It consists of an inner core, made of glass or plastic, where total internal reflection can take place within the waveguide (ignoring evanescent transmitted waves) because the surrounding cladding has a lower refractive index than the core, and a jacket (blue layer in the picture).
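The core/cladding index contrast sets both the critical angle for total internal reflection and the fiber's acceptance cone; a quick sketch using assumed typical step-index silica values (the indices below are illustrative, not from this post):

```python
import numpy as np

# Total internal reflection requires n_core > n_cladding. Assumed step-index
# silica values (illustrative only): compute the TIR critical angle and the
# numerical aperture NA = sqrt(n_core^2 - n_clad^2).
n_core, n_clad = 1.4682, 1.4629

theta_c = np.degrees(np.arcsin(n_clad / n_core))  # TIR angle at the core/cladding wall
NA = np.sqrt(n_core**2 - n_clad**2)               # numerical aperture of the fiber
theta_accept = np.degrees(np.arcsin(NA))          # acceptance half-angle from air
```

The tiny index contrast is enough: rays within a few degrees of the axis (from air) are guided, which is also why the coupling alignment described below is so sensitive.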
At the ends of optical fibers, one typically also has angled physical contact (APC) connectors to minimize back-reflection of light (by using an angled design usually around \(8^{\circ}\)). These ensure alignment of optical fiber cores when connecting two optical fibers to each other.
Often, optical fibers can be polarization-maintaining (PM), meaning that if one excites a given optical fiber along one of its principal axes, the polarization is preserved at the output. This is apparently because the core of a PM optical fiber is typically already pre-stressed to give it some kind of birefringence \(\Delta n=n_{\text{slow}}-n_{\text{fast}}\) (general rule of thumb: any symmetry which is easily broken will be broken; for example, the magnetic field is never actually \(\textbf B=\textbf 0\) due to the Earth, someone’s phone, etc., and since you don’t want other things defining your quantization axis, you should just apply a magnetic field yourself anyway).
Coupling Laser Light into an Optical Fiber
The goal is to get the laser beam to be normally incident \(\theta_x=\theta_y=0\) at the center \(x=y=0\) of the optical fiber. Although initially this sounds quite trivial, as with any waveguide, the optical fiber is extraordinarily sensitive to any small deviations in these \(4\) degrees of freedom \(x,y,\theta_x,\theta_y\) and will only work if these \(4\) conditions are almost perfectly met (hence rendering the task highly non-trivial). Thus, the naive solution of just trying to align the laser beam into the optical fiber “by hand” is hopeless, since one’s hands afford merely coarse control over \(x,y,\theta_x,\theta_y\), whereas successful coupling clearly requires much finer control.
The way to obtain such fine control is to use mirrors; each mirror comes with fine control in both spherical coordinates \(\phi,\theta\) (and there is also leeway in exactly where the laser is incident on the mirror, and the angle of incidence need not be exactly \(45^{\circ}\) or anything like that). Of course, changing the azimuth \(\phi\) of a given mirror will simultaneously change both \(x,\theta_x\), and similarly changing the zenith angle \(\theta\) of a mirror simultaneously affects both \(y,\theta_y\), so in this sense these degrees of freedom are “coupled”. Specifically, each mirror provides \(2\) degrees of freedom \(\phi,\theta\), which is why \(2\) mirrors are needed in total to control all \(4\) degrees of freedom and properly couple the laser into the optical fiber.
One can connect the output end of the optical fiber to a fiber pen and use a translucent polymer sheet to see where the laser beam from the laser intersects the laser beam from the fiber pen at various regions in the setup. From having played around with the setup, it is more sensible to focus on aligning them at the extremes of the path, which tends to automatically ensure that they will be aligned everywhere else in the middle. Moreover, a general rule of thumb turns out to be that in order to align a section, the mirror on which one should make fine adjustments is, perhaps counterintuitively, the one further away (this iterative procedure is commonly called “walking the beam”). Doing it iteratively like this will converge onto an aligned optical system; doing it the other way will diverge into a hopelessly misaligned system.
After having completed the “fine structure” alignment of the mirrors properly, so that there is for sure some non-zero signal coming out of the output of the optical fiber, one can then proceed to a “hyperfine” level of adjustments: putting the output of the optical fiber into a photodiode and measuring the photocurrent developed across a potentiometer \(R\) via a multimeter, or just directly using a power meter. Here again, one essentially seeks to maximize the photodiode signal by an algorithm which vaguely feels like a manual implementation of gradient descent. More precisely, it turns out to be more advisable to make some small random perturbation to the \(\phi\) (resp. \(\theta\)) of the mirror farther away from the input of the optical fiber (not necessarily physically, but in the sense of the optical path length), then adjust \(\phi\) (resp. \(\theta\)) of the mirror closer to the optical fiber input until the signal is locally maximized, and repeat this until one eventually converges onto not merely a local, but the global maximum (a 2D search). Finally, also consider the focal length of the lens relative to the fiber (this is a 1D search at the end). At this point, one can feel pretty confident that the laser light is properly coupled into the optical fiber, i.e. that \(x\approx y\approx\theta_x\approx\theta_y\approx 0\). Each time one takes a fiber out and puts it back in again, one has to recouple because of how sensitive the whole alignment is.
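The perturb-then-reoptimize loop above can be caricatured as coordinate ascent on a toy coupling-efficiency function. In this sketch (one transverse axis only), the linear mirror-to-\((x,\theta)\) map and every number are made up purely for illustration:

```python
import numpy as np

# Toy model of the two-mirror "walking" loop for fiber coupling, one axis only.
# Knobs a1 (far mirror) and a2 (near mirror) map linearly onto the fiber-tip
# offset x and incidence angle theta; efficiency is a Gaussian in both.
rng = np.random.default_rng(1)
M = np.array([[1.0, 0.6],    # x     = 1.0*a1 + 0.6*a2
              [0.8, 1.0]])   # theta = 0.8*a1 + 1.0*a2  ("coupled" knobs)

def efficiency(a1, a2):
    x, theta = M @ np.array([a1, a2])
    return np.exp(-(x**2 + theta**2))  # peaks at perfect alignment x = theta = 0

a1, a2 = 0.8, -0.5            # start misaligned
best = efficiency(a1, a2)
for _ in range(200):
    a1_try = a1 + 0.05 * rng.standard_normal()   # small random tweak of the FAR mirror
    grid = a2 + np.linspace(-0.5, 0.5, 201)      # 1D scan of the NEAR mirror
    effs = [efficiency(a1_try, g) for g in grid]
    if max(effs) > best:                         # keep the tweak only if it helped
        a1, a2, best = a1_try, grid[int(np.argmax(effs))], max(effs)
```

Because the knobs are coupled, neither mirror alone can zero both \(x\) and \(\theta\); alternating between them is what walks the system toward the joint optimum.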
Optical Tables & Breadboards
Small vibrations (e.g. footsteps, motors, etc.) can perturb the delicate alignment of optical systems, hence all optical components need to be firmly bolted down to an optical table (possibly with the aid of ferromagnetic bases). The top and bottom layers of an optical table are usually manufactured from some grade of stainless steel perforated by a square lattice of \(\text{M}6\) threaded holes with lattice parameter \(\Delta x=25\text{ mm}\) (recall that \(\text{M}D\times L\) is the standard notation for a metric thread of outer diameter \(D\text{ mm}\) and length \(L\text{ mm}\); typically one assumes the thread pitch \(\delta\text{ mm}\) is the coarsest/largest one that is standardized for that particular thread diameter \(D\), so that the helix winds \(N=L/\delta\) times around, although \(\delta\) could be finer/smaller too, see this reference). The exact engineering details of how an optical table seeks to critically damp external vibrations are interesting, involving the use of pneumatic legs and several layers of viscoelastic materials sandwiched between the steel layers in a rigid honeycomb structure.
Optical breadboards are basically just smaller, less fancy versions of an optical table, mainly used for prototyping and for easier portability of a particular modular setup onto some main optical table.
Acousto-Optic Modulators (AOMs)
An acousto-optic modulator (AOM), also known as an acousto-optic deflector (AOD), is at first glance similar to a diffraction grating for light in the sense that if one shines some incident plane wave from a laser through the hole in the AOM, then out comes an \(m=0\) order mode in addition to \(m=\pm 1\) and occasionally higher-order modes too (the exact distribution of intensities among these harmonics will depend very sensitively on the incident angle that one shines the laser light at into the AOM).
(Pictures: 110 MHz AOM; Bragg diffraction pattern; RF driver for AOM; amplifier box.)
However, despite being superficially similar to a diffraction grating, there are some notable differences; the first is that the Fraunhofer interference pattern of a diffraction grating typically occurs via a (\(2\)-dimensional) screen with a bunch of slits on it; here a (\(3\)-dimensional!) volume Bragg grating (VBG) is used instead, which in practice means some kind of glass attached to a piezoelectric transducer that drives the glass (i.e. applies periodic stress to it) at some radio frequency \(f_{\text{ext}}\sim 100\text{ MHz}\) via an external RF driver. This induces a periodic modulation in the glass’s refractive index \(n=n(x)\) where the “period” \(\lambda_{\text{ext}}=c_{\text{glass}}/f_{\text{ext}}\) over which \(n(x+\lambda_{\text{ext}})=n(x)\) corresponds to the wavelength of the sound waves, where \(c_{\text{glass}}\) is the phase velocity of sound waves in the glass.
Provided the light is incident at the Bragg angle \(\theta_B\approx\sin\theta_B=\lambda/2\lambda_{\text{ext}}\), then one has an effective crystal with interplanar spacing \(\lambda_{\text{ext}}\) and so the Bragg condition yields the angular positions of the constructive maxima of the Brillouin scattering:
\[2\lambda_{\text{ext}}\sin\theta_m=m\lambda\]
In addition, whereas for ordinary light incident on a diffraction grating the wavelength and frequency don’t change after diffraction, here because the photons either absorb or emit a phonon quasiparticle (respectively \(m=\pm 1\) orders), they do also accrue a slight Doppler shift in the frequency. When an AOM is labelled as being \(110\text{ MHz}\) for instance, it does not mean that the only Doppler shifts it is able to provide are exactly \(\pm 110\text{ MHz}\) but rather the diffraction efficiency \(\eta\) is greatest at this frequency, with some FWHM bandwidth \(\delta f_{\text{ext}}\) around this. For instance, for \(2\) AOMs in the lab, the following frequency response efficiency curves were measured (for both single pass and double pass, the latter of which should roughly be the square of the former).
AOMs are commonly used in a double-pass configuration, which means that light is passed through, then passed back again along exactly the trajectory it came. If the diffraction efficiency of the first order is \(\eta(\omega)<1\) at some frequency \(\omega=2\pi f\), ideally around the central \(\omega\) of the AOM (e.g. \(\omega=2\pi\times 110\text{ MHz}\)), then double-passing will lead to a reduced efficiency \(\eta^2(\omega)<\eta(\omega)\). Provided one picks out the right order (not always trivial to do; one needs to change the driving amplitude to see which order drops faster, and use geometrical ray optics arguments), this allows accruing a Doppler shift of \(2f_{\text{ext}}\) without sacrificing too much efficiency (if one tried to get this from the \(m=2\) mode on a single pass, one would lose a lot of efficiency). AOMs are also commonly used for Q-switching in lasers (i.e. as glorified switches, b/c they can switch on nanosecond time scales).
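For concreteness, some back-of-the-envelope numbers for a \(110\text{ MHz}\) AOM; the acoustic speed and the single-pass efficiency below are assumed illustrative values, not measured ones:

```python
import numpy as np

# Back-of-the-envelope numbers for a 110 MHz AOM, using an ASSUMED acoustic
# phase velocity (~4200 m/s, illustrative) and 767 nm light.
f_ext = 110e6          # RF drive frequency [Hz]
c_glass = 4200.0       # assumed speed of sound in the crystal [m/s]
lam = 767e-9           # optical wavelength [m]

lam_ext = c_glass / f_ext                  # acoustic wavelength, the "grating period"
theta_B = np.arcsin(lam / (2 * lam_ext))   # Bragg angle for the m = 1 order [rad]
shift_double = 2 * f_ext                   # double-pass Doppler shift: 220 MHz

eta = 0.85                                 # assumed single-pass diffraction efficiency
eta_double = eta**2                        # double-pass efficiency ~ eta^2
```

The acoustic wavelength works out to tens of microns, so the Bragg angle is well under a degree, consistent with the near-collinear geometry AOMs are used in.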
Laser (Toptica) with massive DLC Pro driver? Talk about how lasers work + lasing requirements
Notes on how Zoran’s lab works:
The UHV in the MOT and science cells are like \(10^{-11},10^{-13}\text{ mbar}\) respectively, measured by a current which is on the order of \(\text{nA}\) (but at such low pressures, with such few particles, one can argue that pressure fails to even be a well-defined quantity).
There are \(4\) AOM drivers for D1 cooling/repump and D2 cooling/repump light. Each has frequency, TTL, and amplitude control which need to be connected to analog channels like AO1, AO2, etc. which in turn are controlled in Cicero.
Laser goggles have certain wavelength ranges over which they block best. The ODT uses 767 nm red light, but the box trap uses 532 nm green light.
The Toptica laser controller is one component of a PID control loop.
First, saturated absorption spectroscopy (the double-pass configuration in the absorption cell) is used to get Doppler-free \(\lambda_{D1},\lambda_{D2}\) signals that are fed to photodiodes, which send them to the Toptica laser controller, which sends them to the Toptica software that’s used for laser locking. (Heat is required b/c the K-39 must be in gaseous form, as otherwise it would just be K-39 liquid/solid sitting at the bottom of the tube; this is achieved by winding some coils around the cell and passing a large current through them, relying on the resultant Joule heating; for K-39 one needs around human body temperature, \(35-40^{\circ}\text{ C}\).)
Need to lock the laser b/c a piezoelectric crystal has some voltage applied to it that causes mechanical deformation, moving the distance b/w the \(2\) mirrors, but over time it can drift due to temperature fluctuations, etc.
The photodiodes need to be powered (by old car battery in this case) and also a separate cable which feeds into Toptica laser controller (it is also this cable which has the extra resistor at its end…I think idea is that the photodiode converts absorption signal into a photocurrent that flows across the resistor, and gets converted into a voltage…note that it’s a BNC cable, and most BNC cables already have some internal resistance, so this resistor really is just an extra resistor which I guess is to decrease the “gain” in some sense?).
Kibble-Zurek mechanism?
Anything in the lab (e.g. PCs, soldering irons, vacuum pumps, all kettle plugs, etc.) connected to AC mains needs to be PAT tested.
There are \(4\) sets of coils in the experiment. In chronological order of use, they are:
Quadrupole field coils (both \(x\),\(y\) and \(z\)) for the MOT and magnetic trapping.
Guide field coils (to impose a quantization axis?) on MOT side for pumping and on the imaging side.
Feshbach (“Fesh”) field coils for the science cell (to exploit Feshbach resonance of hyperfine states in order to tune s-wave scattering length).
Compensation coils in \(x,y,z\) (the \(z\) compensation coil is also called “anti-\(g\)” coil for obvious reasons).
Speedy coils? For quantum quench experiments?
One of the coils cancels the curvature in the Feshbach coils.
Each of these coils obviously requires a very bulky power supply.
Igor’s thesis should contain more information about the coils.
The track (arm which moves the magnetically trapped atoms) has \(3\) states, START, MOVE, MOVE2, and ENERGIZE? There is a track control box connected to the analog channels which one can use to control how the track moves in Cicero during an experimental sequence.
Regarding water cooling of the experiment, the water is already pressurized, so adding a pump would only slow it down?
The pipes also contain flow meters which monitor the flow rate \(|\textbf v|\) of the water (not sure how?), and send this information to a logic circuit which also uses temperature control. It will suddenly stop all current flowing through the Feshbach coils if it detects that some thresholds are breached on both; thus, it behaves as a current-controlled switch, aka a transistor, and more precisely these are IGBTs (insulated-gate bipolar transistors) because it turns out only these transistors are rated for the kinds of currents being used here.
For all the coils, one frequently would like to switch them off suddenly. If you just do this directly, the significant inductance \(L\) of the coils will lead to a substantial back emf that would destroy the PSU. Hence the need for an alternative path for current to flow, which is why we also have a capacitor in parallel?
Apparently, the light inside an optical fiber can also heat the fiber enough to melt it…
There can be up to \(I\sim 200\text{ A}\) of current flowing through the Feshbach coils, with \(V=400\text{ V}\)…the whole circuit is low-resistance so if you touch it, it is probably not lethal, but still better to be safe.
The D1, D2 cooling and repump light must first get the required frequency shifts; then it all gets coupled simultaneously into a TA (tapered amplifier), which should be seeded at all times and is externally controlled by a current knob \(I\) that dictates how much amplification \(A=A(I)\) it gives to the laser power. This is all then coupled into a polarization-maintaining optical fiber that goes into a fiber port cluster (FPC) (see the ChatGPT blurb about it), which is basically a compact setup of mirrors/lenses/polarizing beamsplitters (Chris says conceptually it’s not hard to build one yourself, it just saves time to buy from a company at the cost of roughly double the price cf. self-building; similar remarks apply to e.g. a laser, which can be self-built and indeed many labs do that, it just takes time). This then takes the incident light from the fiber and redistributes it into \(6\) beams of roughly equal power for the MOT (i.e. the “O” in “MOT”).
The MOT loading time \(\Delta t_{\text{load}}\) is the time to load the MOT from the vapor of K-39 atoms that sits at some background pressure \(p_0\) and temperature \(T_0\). Some exponential “charging curve” \(1-e^{-t/\Delta t_{\text{load}}}\)? Also, normally one gauges how well the MOT is working (and decides when it needs to fire again) by measuring the atom number in the BEC in the science cell. If the science cell isn’t working, what one can instead do is measure an initial \(I_0\) from absorption spectroscopy, then do magnetic transport of the atoms to the science cell and back to the MOT, and measure \(I\); the recapture efficiency of the MOT is then \(I/I_0\).
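If one does record such a charging curve, the loading time can be extracted with a simple log-linear fit; a sketch on synthetic data (the atom number, loading time, and noise level are all made up):

```python
import numpy as np

# Toy extraction of the MOT loading time from a synthetic charging curve
# N(t) = N_max*(1 - exp(-t/tau)). N_max, tau, and the noise level are made up.
rng = np.random.default_rng(2)
N_max, tau_true = 1.0e8, 2.5                 # atoms, seconds (illustrative)
t = np.linspace(0.1, 8.0, 40)                # "measurement" times [s]
N = N_max * (1 - np.exp(-t / tau_true)) * (1 + 0.001 * rng.standard_normal(t.size))

# Linearize: log(1 - N/N_max) = -t/tau, then least-squares for the slope.
y = np.log(np.clip(1 - N / N_max, 1e-12, None))
slope = np.polyfit(t, y, 1)[0]
tau_fit = -1.0 / slope
```

In a real measurement \(N_{\max}\) is not known a priori and would be fit simultaneously (e.g. with a nonlinear least-squares routine); the log-linear trick here assumes it is known.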
Also in the science cell, one-body losses are very significant. Relative to the BEC, the surrounding thermal cloud is effectively an infinite-temperature heat bath, so if any thermal atom collides with an atom in the BEC, it will remove it… (I guess thermalization is always happening, and at the microscopic/kinetic level what this looks like is precisely one-body losses).
One very effective practice/way to learn more about how any lab with a bunch of cables/wires works is to just trace/route wires, one at a time, to gain some sense for how different components are connected to each other.
General EQ Stuff
If you’re building a new machine/experiment, need to make the shop ppl’s life “living hell”, ask about stock available and be persistent, ask “can you get it to me by tomorrow”, etc. and don’t leave it to the point that they have to reach out to some more senior ppl etc, then stuff will never get done. Example in this case was for boards to enclose the perimeter of the optical table with, some were not right size so were looking for companies to get new ones from. Simon found a company and even more quickly found that they had a contact, so he just called them right away and got the order sorted out very efficiently.
In cold atom experiments, one very basic question one can ask is, given some atom cloud, what is the number of atoms \(N\) in the cloud? One way is to basically shine some light on the atom cloud and see how much is absorbed. This absorption effect is quantified by the Beer-Lambert law.
\[I(z)=I(0)e^{-n\sigma z}\]
where \(n=N/V\) is the number density of atoms in the cloud of volume \(V\) and \(\sigma=\sigma(\omega_{\text{ext}})\) is the optical absorption cross-section presented by each atom in the cloud to incident monochromatic light of frequency \(\omega_{\text{ext}}\).
It is instructive to derive the Beer-Lambert law from first principles. In particular, the derivation is meant to emphasize that, for the most part, one can basically just think of the Beer-Lambert law as a mathematical theorem about probabilities, with some quantum mechanical asterisks to that statement. To get a sense of this, consider first a \(2\)D version of the Beer-Lambert law, in which one has an atom cloud confined to a plane, along with an incident beam of photons of frequency \(\omega_{\text{ext}}\) travelling along the (arbitrarily defined) \(z\)-direction.
The (average) number density of atoms is \(n\) (units: \(\text{atoms}/\text m^2\)) and each atom can be thought of as a “hard circle” with diameter \(\sigma\) (units: \(\text m/\text{atom}\)). In that case, in a small strip of width \(dz\), there will be \(ndz\) atoms per unit length along the strip, or equivalently the average interatomic spacing is \(1/ndz\) along the strip (see the picture). The probability that a given photon “collides” with such an atom is therefore \(\sigma/(1/ndz)=n\sigma dz\); such photons are depicted red on the diagram, while those that make it through the first layer \(dz\) are depicted green. Over many photons, this manifests as a loss \(dI<0\) in their collective intensity \(I\) across the layer \(dz\), so one may equate the fractional loss of intensity with the absorption probability:
\[\frac{dI}{I}=-n\sigma dz\]
and solving this ODE yields the Beer-Lambert law:
\[I(z)=I(0)e^{-n\sigma z}\]
where \(1/n\sigma\) is the length scale of this exponential attenuation in the beam intensity. Of course, this argument generalizes readily to the \(3\)D case where now \(n\) (units: \(\text{atoms}/\text m^3\)) is the number density of atoms in \(\textbf R^3\) and \(\sigma\) (units: \(\text m^2/\text{atom}\)) is now the optical cross-section presented by each atom. As stressed earlier, there isn’t really much physics going on here, it’s just a statement about the statistics of a \(3\)D Galton board.
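As a sanity check on the layer-by-layer probability argument, compounding the per-layer survival probability \(1-n\sigma dz\) over many thin layers reproduces the exponential; a quick sketch with made-up values of \(n\) and \(\sigma\):

```python
import math

n = 1e16        # number density, atoms / m^3 (made up)
sigma = 1e-13   # optical cross-section, m^2 / atom (made up)
z = 5.0 / (n * sigma)   # propagate five attenuation lengths 1/(n sigma)

layers = 100_000
dz = z / layers
I = 1.0
for _ in range(layers):
    I *= 1.0 - n * sigma * dz   # per-layer absorption probability n sigma dz

print(I, math.exp(-n * sigma * z))   # both ≈ e^-5
```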
At this point however, one would like to introduce some quantum mechanical modifications to this simple Beer-Lambert law. As usual, suppose the laser light \(\omega_{\text{ext}}\) is not too detuned from a particular atomic transition \(\omega_{01}\) between some ground state \(|0\rangle\) and some excited state \(|1\rangle\) in each of the atoms in the cloud (also assume for simplicity that both \(|0\rangle\) and \(|1\rangle\) are non-degenerate). In that case, it makes sense to distinguish \(n=n_0+n_1\) between the number density \(n_0\) of atoms in the ground state \(|0\rangle\) vs. the number density \(n_1\) of atoms in the excited state \(|1\rangle\) since only the atoms in the ground state \(|0\rangle\) can absorb the incident photons, after which they go into the excited state \(|1\rangle\) and so are no longer able to absorb any more photons. Thus, one might think that the correct form of the Beer-Lambert law should be:
\[\frac{dI}{I}=-n_0\sigma dz\]
But this is forgetting that atoms in the excited state \(|1\rangle\) can undergo stimulated emission too back down to the ground state \(|0\rangle\) (and in the steady state, recall from Einstein’s statistical argument that the rates of stimulated absorption and emission are equal). In contrast to absorption, this would have the effect of actually increasing the intensity \(I\) because the atom emits a photon back into the beam. Thus, the correct form of the Beer-Lambert law is actually:

\[\frac{dI}{I}=-(n_0-n_1)\sigma dz\]
where by time-reversal symmetry the optical cross-section \(\sigma\) is the same for both stimulated absorption and emission. In the steady state (i.e. when \(\dot n_0=\dot n_1=0\) reach an equilibrium), it is clear that one must also have \((n_0-n_1)\sigma I=n_1\Gamma\hbar\omega_{\text{ext}}\) where \(\Gamma=A_{10}\) is the rate of spontaneous emission/decay from the excited state \(|1\rangle\) back down to the ground state \(|0\rangle\) (note that it really is \(\hbar\omega_{\text{ext}}\) and not \(\hbar\omega_{01}\) in the formula; whatever frequency an atom absorbs must also be what it emits by energy conservation). On the other hand, also in the steady state, the optical Bloch equations assert that:

\[\rho_{11}=\frac{n_1}{n}=\frac{s/2}{1+s+(2\delta/\Gamma)^2}\]
where \(s=I/I_{\text{sat}}=2(\Omega/\Gamma)^2\) is the saturation. Combining these two expressions allows one to obtain an explicit formula for how the optical cross-section \(\sigma\) depends on the “driving frequency” \(\omega_{\text{ext}}\) of the incident photons in e.g. a laser:

\[\sigma=\frac{\hbar\omega_{\text{ext}}\Omega^2}{\Gamma I}\frac{1}{1+(2\delta/\Gamma)^2}\]
where there is also an \(\omega_{\text{ext}}\)-dependence hiding in the detuning \(\delta=\omega_{\text{ext}}-\omega_{01}\). At first glance, this seems to suggest that the optical cross-section \(\sigma\), in addition to depending on \(\omega_{\text{ext}}\) also depends on the intensity \(I\) of the incident photons, but actually this is an illusion, because the Rabi frequency \(\Omega\) also depends on \(I\) in such a way that the two effects cancel out so as to actually make \(\sigma\) independent of \(I\). To see this, recall that the time-average of the Poynting vector over a period \(2\pi/\omega_{\text{ext}}\) is \(I=\varepsilon_0 c|\textbf E_0|^2/2\) and that the Rabi frequency is \(\hbar\Omega=e\textbf E_0\cdot \langle 1|\textbf X|0\rangle\). The unsightly presence of the matrix element can be further removed by recalling that (in the dipole approximation) one has \(\Gamma=4\alpha\omega_{01}^3|\langle 1|\textbf X|0\rangle|^2/3c^2\). Therefore, in the best case where the incident light is polarized along the dipole moments of the atoms, then \(\Omega^2=e^2|\textbf E_0|^2|\langle 1|\textbf X|0\rangle|^2/\hbar^2\). If on the other hand the incident light were unpolarized or the atoms in the cloud were randomly oriented, then isotropic averaging would contribute an additional factor of \(1/3\):

\[\Omega^2=\frac{e^2|\textbf E_0|^2|\langle 1|\textbf X|0\rangle|^2}{3\hbar^2}\]
Sticking to the best case scenario (which can be thought of as an upper bound if one likes though it is experimentally the typical situation since one often tries to maximize \(\sigma\) anyways), this leads to the explicitly \(I\)-independent form of the optical cross-section:

\[\sigma(\omega_{\text{ext}})=\frac{6\pi c^2}{\omega_{01}^3}\frac{\omega_{\text{ext}}}{1+(2\delta/\Gamma)^2}\]
so the optical cross-section takes its maximum value at \(\omega_{\text{ext}}=\sqrt{\omega_{01}^2+(\Gamma/2)^2}\) but because the line width \(\Gamma\ll\omega_{01}\) is typically much less than the transition frequency itself, this is basically just \(\omega_{\text{ext}}\approx \omega_{01}\) so the maximum cross-section \(\sigma_{01}\) occurs on resonance and is given by:

\[\sigma_{01}=\frac{6\pi c^2}{\omega_{01}^2}=\frac{3\lambda_{01}^2}{2\pi}\]
This also allows one to approximate the spectrum of the optical cross-section \(\sigma\) as just a Lorentzian profile centered at \(\omega_{\text{ext}}\approx \omega_{01}\) with \(\Gamma\) being its FWHM:

\[\sigma(\omega_{\text{ext}})\approx\frac{\sigma_{01}}{1+(2\delta/\Gamma)^2}\]
Typical transition wavelengths (e.g. visible light) might be around \(\lambda_{01}\sim 10^{-7}\text{ m}\) which far exceeds the length scale \(\sim a_0\sim 10^{-11}\text{ m}\) of the individual atoms themselves. The corresponding optical cross-section \(\sigma_{01}\sim\lambda_{01}^2\) is thus much larger than the actual “size” of the atoms themselves, so this emphasizes another quantum mechanical discrepancy to the classically-minded picture where \(\sigma\) would have just been interpreted as the size of individual “hard sphere” atoms (and in that case it wouldn’t have any \(\omega_{\text{ext}}\)-dependence in the first place). Moreover, the fact that near resonance \(\sigma\) is much larger than the atoms themselves also helps to ensure laser cooling actually works since it gives each photon more “leeway” in that it doesn’t need to hit an atom “head-on” to be absorbed, but merely has to pass within the cross-section \(\sigma\).
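Putting in numbers (taking \(\lambda_{01}\approx 767\text{ nm}\), the potassium D\(2\) wavelength, as an assumed example):

```python
import math

# resonant cross-section sigma_01 = 3 lambda^2 / (2 pi); the wavelength here is
# an assumed example value (~767 nm, the potassium D2 line)
lam = 767e-9
sigma_01 = 3 * lam**2 / (2 * math.pi)

a0 = 5.29177e-11                 # Bohr radius: the actual "size" of the atom
geometric = math.pi * a0**2      # classical hard-sphere cross-section

print(sigma_01)                  # ~ 2.8e-13 m^2
print(sigma_01 / geometric)      # ~ 3e7: vastly larger than the atom itself
```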
Intensity Saturation & Broadening
At low incident intensities \(s\ll 1\), spontaneous emission dominates stimulated absorption/emission \(\Gamma\gg\Omega\) and so any atom which is excited from the ground state \(|0\rangle\) into the excited state \(|1\rangle\) will quickly decay back down to the ground state \(|0\rangle\) by spontaneous emission. However, as one ramps up the laser intensity to saturation \(s\to 1\) and even \(s>1\), although there is a cap \(\rho_{11}<1/2\) on the excited state population, nevertheless the ground state population \(\rho_{00}\to 1/2\) will have depleted so much that there won’t be that many atoms left to absorb any more incident photons, so one would expect the sample to get worse and worse at absorbing incident photons. Recalling that \(s=I/I_{\text{sat}}=2(\Omega/\Gamma)^2\) (note that in the optimal case \(I_{\text{sat}}=\hbar\omega_{01}^3\Gamma/12\pi c^2\) but importantly is an intrinsic property of the atomic transition that scales with the transition frequency as \(I_{\text{sat}}\propto\omega_{01}^6\) due to the extra factor of \(\omega_{01}^3\) in \(\Gamma\)), it is clear that when \(s\to 1\), the Rabi frequency \(\Omega\) grows to the point of being comparable with the spontaneous decay rate \(\Gamma\), so now stimulated emission starts competing with spontaneous emission. In order to see this mathematically, it is useful to look at the absorption coefficient whose reciprocal directly governs the length scale of attenuation in the Beer-Lambert law:

\[(n_0-n_1)\sigma=\frac{n\sigma_{01}}{1+s+(2\delta/\Gamma)^2}\]
This is just another Lorentzian similar to the cross-section \(\sigma(\omega_{\text{ext}})\) itself. But there’s a crucial difference; whereas the FWHM of the Lorentzian for \(\sigma\) was fixed at \(\Gamma\), here it is \(\Gamma\sqrt{1+s}\); but this is now dependent on the laser intensity \(s\), causing the Lorentzian to broaden as \(s\) increases (this is exactly the same kind of broadening seen in \(\rho_{11}\); one difference though is that while \(\rho_{11}\to 1/2\) saturates, here the resonant absorption coefficient \(n\sigma_{01}/(1+s)\) just decreases monotonically as \(s\) is ramped up).
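A quick numerical check of the power-broadened line shape \(n\sigma_{01}/(1+s+(2\delta/\Gamma)^2)\): locating the half-maximum detuning by bisection recovers the FWHM \(\Gamma\sqrt{1+s}\) (working in units of \(\Gamma=1\)):

```python
import math

def absorption_coeff(delta, s, n_sigma01=1.0, Gamma=1.0):
    # (n_0 - n_1) sigma = n sigma_01 / (1 + s + (2 delta / Gamma)^2)
    return n_sigma01 / (1.0 + s + (2.0 * delta / Gamma) ** 2)

def fwhm(s, Gamma=1.0):
    # locate the half-maximum detuning by bisection, then double it
    peak = absorption_coeff(0.0, s)
    lo, hi = 0.0, 100.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if absorption_coeff(mid, s) > peak / 2 else (lo, mid)
    d_half = 0.5 * (lo + hi)
    return 2 * d_half

for s in (0.0, 1.0, 8.0):
    print(fwhm(s), math.sqrt(1 + s))   # FWHM = Gamma sqrt(1 + s): power broadening
```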
Finally, one can revisit the original Beer-Lambert law \(I(z)=I_0e^{-n\sigma z}\) and ask what becomes of it after all the modifications; on resonance \(\delta=0\), the expression for the absorption coefficient above gives \(dI/dz=-n\sigma_{01}I/(1+I/I_{\text{sat}})\), which separates and integrates to (with column density \(n_c:=\int n\,dz\)):

\[n_c\sigma_{01}=\ln\frac{I_0}{I}+\frac{I_0-I}{I_{\text{sat}}}\]
where \(I_0:=I(z=0)\) is the incident irradiance. The quantity \(\ln I_0/I\) is often called the optical density (OD) in AMO physics, or the absorbance in chemistry. In practice, this formula cannot just be used as is, but rather requires calibrating for the polarization, detuning fluctuations, optical pumping losses, etc. by sweeping over a range of incident intensities \(I_0\) and, using some known atom number \(N=n_cA\) obtained by other methods, choosing \(I_{\text{sat}}\) so that \(n_c\sigma_{01}\) is approximately invariant for all \(I_0\) and corresponding \(I\).
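A sketch of using this relation in practice (hypothetical numbers, intensities in units of \(I_{\text{sat}}\)): forward-solve the transcendental relation for the transmitted \(I\) by bisection, then recover \(n_c\sigma_{01}\) from the pair \((I_0,I)\):

```python
import math

def od_with_saturation(I0, I, Isat=1.0):
    # n_c * sigma_01 = ln(I0/I) + (I0 - I)/Isat   (on resonance)
    return math.log(I0 / I) + (I0 - I) / Isat

def transmitted(I0, nc_sigma, Isat=1.0):
    # od_with_saturation decreases monotonically in I, so bisect for I
    lo, hi = 1e-12 * I0, I0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if od_with_saturation(I0, mid, Isat) > nc_sigma:
            lo = mid   # computed OD too large means the guess for I was too small
        else:
            hi = mid
    return 0.5 * (lo + hi)

I0 = 2.0                       # incident intensity, 2 * Isat (made up)
I = transmitted(I0, 1.5)       # transmitted intensity for n_c * sigma_01 = 1.5
print(I, od_with_saturation(I0, I))   # second number recovers 1.5
```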
The alkali atom isotope \(^{39}\text K\) has fixed, non-negotiable electron spin \(s=1/2\) and nuclear spin \(i=3/2\); hence its total spin is an integer (\(s+i=2\)) and it is bosonic. Within the gross \(n\)-manifold for \(n=4\), consider either the \(4s_{1/2}\) or \(4p_{1/2}\) fine \(j\)-manifolds for \(j=1/2\). In both cases, there are two hyperfine \(f\)-manifolds corresponding to total atomic angular momenta \(f=1,2\). In the strict absence \(\textbf B=\textbf 0\) of an external magnetic field, the \(f=1\) hyperfine manifold has \(3\) degenerate \(m_f\)-sublevels corresponding to projections \(m_f=-1,0,1\) of the total atomic angular momentum along some arbitrary \(z\)-axis, while the \(f=2\) hyperfine manifold has \(5\) degenerate \(m_f\)-sublevels corresponding to \(m_f=-2,-1,0,1,2\). However, upon turning \(\textbf B\neq\textbf 0\) on with \(B:=|\textbf B|\), the Breit-Rabi formula asserts that the \(2f+1\)-fold degeneracy among the Zeeman sublevels within each hyperfine \(f\)-manifold is lifted according to the trajectories (written here neglecting the small nuclear Zeeman term):

\[E_{f=i\pm 1/2}(m_f)=-\frac{A}{4}\pm\frac{A(2i+1)}{4}\sqrt{1+\frac{4m_fx}{2i+1}+x^2},\qquad x:=\frac{2g_j\mu_BB}{A(2i+1)}\]
where \(g_j=2\) for \(4s_{1/2}\) and \(g_j=2/3\) for \(4p_{1/2}\), and \(A\approx h\times 230.859860\text{ MHz}\) for \(4s_{1/2}\) whereas \(A\approx h\times 27.793\text{ MHz}\) for \(4p_{1/2}\) (see the data for \(^{39}\text K\) here).
Intuitively, the reason why the \(m_f\) sublevels seem to be inverted in the \(4s_{1/2}\), \(f=1\) hyperfine manifold is that \(g_f=-1/2<0\) is negative and to first-order the Zeeman perturbation is \(g_fm_f\mu_BB\) (originally \(-\boldsymbol{\mu}\cdot\textbf B\) but \(q=-e\)).
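A minimal numerical sketch of these Breit-Rabi trajectories for the \(4s_{1/2}\) manifold of \(^{39}\text K\), neglecting the small nuclear Zeeman term and taking \(\mu_B/h\approx 1.3996\text{ MHz/G}\):

```python
import math

# Breit-Rabi energies (frequency units, MHz) for a j = 1/2 fine manifold,
# neglecting the small nuclear Zeeman term
A = 230.859860      # 4s_1/2 hyperfine constant of K-39, MHz
i = 1.5             # nuclear spin of K-39
gj = 2.0            # electron g-factor of 4s_1/2
muB = 1.3996245     # Bohr magneton mu_B / h, MHz/G

def breit_rabi(B, mf, sign):
    # sign = +1 for the f = i + 1/2 = 2 branch, -1 for the f = 1 branch
    dE = A * (i + 0.5)              # zero-field hyperfine splitting = 2A
    x = gj * muB * B / dE
    return -A / 4 + sign * (dE / 2) * math.sqrt(1 + 4 * mf * x / (2 * i + 1) + x**2)

# zero-field check: the f = 2 <-> f = 1 splitting is 2A ~ 461.7 MHz
print(breit_rabi(0.0, 0, +1) - breit_rabi(0.0, 0, -1))

# at the 395 G Feshbach field: e.g. the state connecting to |f=1, mf=1> at low field
print(breit_rabi(395.0, 1, -1))
```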
2D scan of \(I_{\sigma^+}+I_{\sigma^-}\) vs. \(I_{\sigma^+}/I_{\sigma^-}\); in an ideal world the measured OD should be constant across the entire space (as measured at low field), but…
The AOM driver right now is controlled by essentially varying a potentiometer \(R_2\) which sets the voltage at the midpoint of a voltage divider; this voltage is fed into a voltage-controlled oscillator circuit that effectively maps \(V\mapsto\omega_{\text{ext}}\) to RF-drive the AOM with. By flicking the switch, the voltage divider circuit is no longer controlling it; instead it’s externally controlled by a computer in the Cicero Word Generator GUI for AMO physics experiments.
The natural line width of the RF transition (on the order of \(400\text{ MHz}\)) between potassium-39 hyperfine states is practically zero compared with that of optical/visible light (THz) transitions because \(\Gamma\propto\omega_{01}^3\).
Need to first lock onto the right \(B\)-field (\(395\text{ G}\)) by doing a frequency sweep. Then, once that is locked onto, need to impose the correct frequency shifts on the AOMs (there is a substantial line width/leeway here, like \(6\text{ MHz}\) or something?); this will require a second frequency sweep to find the max SNR, centered around roughly where we expect it to be located anyways (show calculation for this).
The idea is that one would like to do spin-resolved polaron injection spectroscopy.
D\(1\) repump light is from \(4s_{1/2}\) manifold (typically use \(|1,1\rangle\) for its broad Feshbach resonance) to \(4p_{1/2}\) manifold \(|2,2\rangle\). D\(2\) imaging light is from \(4s_{1/2}\) to \(4p_{3/2}\) stretched state \(|3,3\rangle\).
The D\(2\) laser light is first passed into an AOM double-pass setup to get \(\pm 220\text{ MHz}\), and is then incident on a D\(2\) flip mirror which redirects it into the modular optical breadboard setup we built. Specifically, the light is incident on a \(\lambda/2\) waveplate that rotates a certain amount of polarization into each of two double-pass AOM arms. One branch is additive by \(220\text{ MHz}\) in total (after the double pass) while the other branch is subtractive by \(-220\text{ MHz}\), so when aligning it is essential to maximize the correct order \(m=\pm 1\); check this by toggling the TTL switch of the driver to see which order is left just before the iris. These two beams then need to be overlapped onto an output fiber, through another \(\lambda/2\) waveplate onto a PBS, which throws away \(P/2\) but with the benefit of having a single polarization propagating through the polarization-maintaining fiber and directly into the science cell. This waveplate also allows optimizing \(I_{\sigma^+}/I_{\sigma^-}\).
The AOM drivers are controlled by a digital channel (for TTL switching from Cicero) and an analog channel (for changing the driving amplitude of the AOMs from Cicero; Janet for the \(-220\text{ MHz}\) and Billy for the \(+220\text{ MHz}\)). In Cicero, the Override option for the D\(2\) flip mirror needs to be checked, but the value must be off for the mirror to be down. Also, overriding a digital channel is automatic, but when overriding an analog channel one needs to specifically say so.
If one wishes to abort a given sequence, it is best to tick the box and, when the sequence is finished (usually around \(30\) seconds), to quickly close it and click “restart sequence” to start up a new sequence or something (to keep the coils heated).
There are quadrupole coils (seemingly \(4\) pairs?) in an anti-Helmholtz configuration for the MOT, and Feshbach coils for the broad \(|1,1\rangle\) Feshbach resonance field to tune \(a\) (there are some empirical correlations in Cicero between the applied voltage in the coils and the corresponding \(B\to a\) you get out of it).
The light for the optical dipole trap (ODT) is the dangerous IR (power is \(1\text{ W}\); it can even burn your skin).
“Walking the beam” (draw schematic) by turning say \(\phi_1\) and seeing which direction \(\phi_2\) needs to go to keep at the same voltage, doing same for \(\theta_1,\theta_2\)…adjusting collimation at the end.
Fiber pen, fiber cleaning kit (microscope; never look into it if the other end of the fiber is coupled to light, or you will go blind).
To actually make the optical box trap of green light, shine light onto a spatial light modulator (SLM), which is a bunch of liquid crystals applying some phase and such (a Fréedericksz transition?), a bit like a DMD except it rotates more slowly so the response time is rather poor. The box is not a perfect cylinder; it is more like the waist of a Gaussian beam (the length of \(40\) microns or so is the Rayleigh distance \(z_R\)), and the sides are given by steep power-law potentials. A bunch of lenses of various \(f\) act like Fourier transformers, etc., so that the light field at the focal plane is the Fraunhofer pattern of the SLM grating.
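Since the focal-plane field is the Fraunhofer pattern, i.e. essentially the Fourier transform of the programmed SLM phase, a toy \(1\)D sketch with a binary \(0/\pi\) phase grating (all parameters made up) shows the power being steered out of the zeroth order:

```python
import cmath, math

N, period = 1024, 32   # samples, and grating period in samples (made up)
# binary 0/pi phase grating at 50% duty cycle
field = [cmath.exp(1j * (math.pi if (k % period) < period // 2 else 0.0))
         for k in range(N)]

def order_intensity(m):
    # |m-th Fourier coefficient|^2 of the SLM field ~ intensity of the m-th
    # diffraction order at the focal plane
    c = sum(field[k] * cmath.exp(-2j * math.pi * m * k / N) for k in range(N)) / N
    return abs(c) ** 2

print(order_intensity(0))             # zeroth order cancels (mean of +/-1 is 0)
print(order_intensity(N // period))   # first order carries ~ (2/pi)^2 of the power
```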
When locking onto, say, the D\(2\) laser, one has an absorption cell of solid \(\text K(s)\) with melting point around \(63^{\circ}\text{ C}\). Doppler-free spectroscopy allows measuring … the derivative is physically measured, and two PID controllers for different time scales are used to …
Consider a non-interacting gas of identical fermions (e.g. electrons \(e^-\)); this is called an ideal Fermi gas. Because the Pauli exclusion principle prohibits identical fermions from occupying the same quantum state, each single-fermion state \(|k\rangle\) can hold only \(N_k\in\{0,1\}\) fermions, so the grand canonical partition function \(\mathcal Z\) for an ideal Fermi gas is just:

\[\mathcal Z=\prod_{|k\rangle\in\mathcal H_0}\left(1+e^{\beta(\mu-E_k)}\right)\]

from which the average occupation of each state follows the Fermi-Dirac distribution:

\[N_k=\frac{1}{e^{\beta(E_k-\mu)}+1}\]
It is remarkable that a mere sign change in the denominator from the Bose-Einstein distribution is all that is needed to enforce the Pauli exclusion principle. Unlike for the ideal Bose gas where the chemical potential \(\mu<0\) had to be negative, for the Fermi-Dirac distribution \(\mu\in\textbf R\) can be anything.
Just as with the ideal Bose gas, for an ideal Fermi gas one would like to approximate the series with integrals (called the Thomas-Fermi approximation) \(\sum_{|k\rangle\in\mathcal H_0}\mapsto\int_0^{\infty}g(E)dE\). Taking the ideal Fermi gas to be non-relativistic, one has the density of states:

\[g(E)=\frac{g_sV}{(2\pi)^2}\left(\frac{2m}{\hbar^2}\right)^{3/2}\sqrt E\]
where \(g_s=2s+1\) is a spin degeneracy factor (which has to be explicitly included for fermions by virtue of the spin-statistics theorem \(s=1/2,3/2,5/2,…\) and the fact that the free Hamiltonian \(H=T\) commutes with \(\textbf S^2\)). In the grand canonical ensemble, one thus has for an ideal Fermi gas:

\[\frac{pV}{kT}=-\frac{g_sV}{\lambda^3}\text{Li}_{5/2}(-z),\qquad N=-\frac{g_sV}{\lambda^3}\text{Li}_{3/2}(-z),\qquad E=-\frac{3}{2}kT\,\frac{g_sV}{\lambda^3}\text{Li}_{5/2}(-z)\]
from which one obtains \(pV=\frac{2}{3}E\) for an ideal Fermi gas as was the case for the ideal Bose gas (and the ideal classical gas). In the high-temperature \(T\to\infty\) limit \(z\to 0\), one finds that, similar to the ideal Bose gas, the ideal Fermi gas looks like an ideal classical gas, at least to first order in the virial expansion (at second order, the quantum correction actually increases the pressure of the ideal Fermi gas whereas it was decreasing for the ideal Bose gas):

\[pV=NkT\left(1+\frac{\lambda^3N}{4\sqrt 2\,g_sV}+\dots\right)\]
In order to see more interesting, non-classical physics, it will as usual be necessary to look in the low-temperature limit \(T\to 0,z\to 1\). In fact, to start, one may as well look directly at the case of absolute zero \(T=0\). In this case, the ideal Fermi gas is said to be degenerate. At a glance, this is because the Fermi-Dirac distribution for the Fermi occupation numbers reduces to a top-hat filter:
\[N_k=\frac{1}{e^{\beta(E_k-\mu)}+1}=[E_k<\mu]\]
One can define the Fermi energy by \(E_F:=\mu(T=0)\) so that states \(|k\rangle\) with \(\hbar^2k^2/2m<E_F\) lying in the Fermi sea are fully occupied (i.e. have Fermi occupation number of \(N_k=1\)) while states \(|k\rangle\) with \(\hbar^2k^2/2m>E_F\) lying beyond the Fermi surface are completely empty. This definition of the Fermi energy \(E_F\) is strictly speaking a bit misleading since in the grand canonical ensemble \(\mu\) and \(T\) are independent and fixed while \(N\) fluctuates; in practice \(N\) is fixed and \(\mu=\mu(N,T)\) varies with \(T\) so as to keep \(N\) fixed, so that working in the grand canonical ensemble is just a mathematical convenience. Therefore, it would make more sense to express/define \(E_F\) in terms of the fixed number \(N\) of fermions in the degenerate ideal Fermi gas:

\[E_F=\frac{\hbar^2}{2m}\left(\frac{6\pi^2N}{g_sV}\right)^{2/3}\]
This is of course related to the Fermi momentum and Fermi temperature by \(E_F=\hbar^2k_F^2/2m=kT_F\). The Fermi temperature \(T_F\) for the ideal Fermi gas determines whether the ideal Fermi gas is in the high-temperature \(T>T_F\) regime or the low-temperature \(T<T_F\) regime. For example, in a copper \(\text{Cu(s)}\) wire the number density of electrons \(e^-\) is \(N/V\approx 8.5\times 10^{28}\text{ m}^{-3}\), so the corresponding Fermi temperature is actually quite hot \(T_F\approx 8.2\times 10^4\text{ K}\) by everyday standards, and so in particular room temperature \(T\approx 300\text{ K}\ll T_F\) means that the electrons \(e^-\) in metals can be thought of to a good approximation as degenerate \(T=0\) Fermi gases.
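Plugging in the copper numbers (CODATA constants; the \(g_s=2\) spin factor already folded into \(E_F=\frac{\hbar^2}{2m}(3\pi^2N/V)^{2/3}\)):

```python
import math

hbar = 1.054571817e-34    # J s
me = 9.1093837015e-31     # kg
kB = 1.380649e-23         # J/K
eV = 1.602176634e-19      # J

n = 8.5e28                # electron number density of copper, m^-3
# E_F = hbar^2/(2m) * (3 pi^2 n)^(2/3)   (g_s = 2 already folded in)
EF = hbar**2 / (2 * me) * (3 * math.pi**2 * n) ** (2 / 3)
TF = EF / kB
p = 0.4 * n * EF          # residual T = 0 degeneracy pressure p = (2/5) n E_F

print(EF / eV)            # ≈ 7 eV
print(TF)                 # ≈ 8.2e4 K: room temperature is deep in the degenerate regime
print(p)                  # ~ 4e10 Pa
```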
Having computed the total number of fermions \(N=\langle N\rangle\), one can also compute the total energy \(E=\langle E\rangle\) in the grand canonical ensemble:

\[E=\int_0^{E_F}Eg(E)\,dE=\frac{3}{5}NE_F\]
which is pretty intuitive, the factor of \(3/5\) essentially just coming from the average of \(k^2\) in a ball of radius \(k_F\), i.e. \(\frac{3}{4\pi k_F^3}\int_0^{k_F}k^24\pi k^2dk=\frac{3}{5}k_F^2\).
Finally, the “equation of state” \(pV=\frac{2}{3}E\) earlier yields the corresponding degeneracy pressure:
\[pV=\frac{2}{5}NE_F\]
For comparison, recall that below the critical temperature \(T<T_c\) the pressure \(p\sim T^{5/2}\) of a BEC approached \(p\to 0\) as \(T\to 0\); not so for an ideal Fermi gas. For both the ideal Bose and Fermi gases, \(pV=\frac{2}{3}E\) but because bosons can condense to the \(E=0\) ground state, their pressure \(p\) also drops to \(p\to 0\), however fermions cannot do this because of the Pauli exclusion principle (they are forced to fill out a Fermi sea instead), so their total energy \(E=\frac{3}{5}NE_F\) can never reach zero, and therefore their pressure \(p\) also cannot reach \(p\to 0\), leaving this residual \(T=0\) degeneracy pressure \(p=\frac{2}{5}\frac{N}{V}E_F>0\).
Finally, it is worth asking more generally just about the physics of an ideal Fermi gas not necessarily when it is degenerate at \(T=0\), but merely at some “low” temperature \(T\ll T_F\). Here, “physics” shall mean “low-temperature heat capacity” \(C_V=C_V(T)\).
In this case, the Fermi-Dirac distribution will be distorted from the degenerate \(T=0\) top-hat filter into a smeared-out step, with the discontinuity at \(E_F\) softened over an energy width \(\sim kT\):
The key observation is that only fermions close to the Fermi surface, specifically whose energy is within \(kT\) of the Fermi energy \(E_F\) can respond to any additional energy added to the ideal Fermi gas, and therefore contribute to the heat capacity \(C_V\) (since only they notice the non-degenerate temperature \(T>0\), the rest of the fermions being locked in the Fermi sea by the Pauli exclusion principle).
At this point, invoke the behavior of the polylogarithm as the fugacity \(z\to 1\) in the low-\(T\) limit (called the Sommerfeld expansion, essentially just a lot of binomial expansions):

\[-\text{Li}_s(-z)=\frac{(\ln z)^s}{\Gamma(s+1)}\left(1+\frac{\pi^2}{6}\frac{s(s-1)}{(\ln z)^2}+\dots\right)\]
Problem: Explain why, in order for the number of fermions \(N\) in the gas to be fixed, in particular \(dN/dT=0\), the chemical potential \(\mu\) must become a function of temperature \(\mu=\mu(T)\).
Solution: The LHS is a constant, but the RHS has an explicit \(T\)-dependence through \(\beta=1/k_BT\), so \(\mu(T)\) must vary implicitly so as to “offset” the explicit \(T\)-variation in \(\beta\) and keep the overall integral constant.
Problem: At \(T=0\), the value of the chemical potential is by definition called the Fermi energy \(E_F:=\mu(T=0)\) of the Fermi gas, i.e. roughly speaking each additional fermion added to the gas increases the gas’s energy by \(E_F\) since that fermion would be added to the Fermi surface. State how \(E_F\) scales with the number density \(N/V\) of fermions in the gas in dimension \(d\).
Solution: The key point to realize is that at \(T=0\) the Fermi-Dirac distribution becomes a step function \([E<E_F]\), so one has the implicit equation for \(E_F\):

\[N=\int_0^{E_F}g(E)\,dE\propto V\int_0^{E_F}E^{d/2-1}\,dE\propto VE_F^{d/2}\]
So in particular, the important point to remember is that \(E_F\sim (N/V)^{2/d}\).
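A quick numerical check of this scaling: build \(N/V\propto\int_0^{E_F}E^{d/2-1}\,dE\) by quadrature, invert for \(E_F\) by bisection, and confirm that doubling the density multiplies \(E_F\) by \(2^{2/d}\):

```python
def density(EF, d, n=4000):
    # N/V ~ integral_0^EF E^(d/2 - 1) dE   (midpoint rule, prefactors dropped)
    dE = EF / n
    return sum(((i + 0.5) * dE) ** (d / 2 - 1) for i in range(n)) * dE

def fermi_energy(rho, d):
    # invert density(E_F) = rho by bisection
    lo, hi = 0.0, 100.0
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if density(mid, d) < rho else (lo, mid)
    return 0.5 * (lo + hi)

for d in (1, 2, 3):
    ratio = fermi_energy(2.0, d) / fermi_energy(1.0, d)
    print(d, ratio, 2 ** (2 / d))   # doubling the density scales E_F by 2^(2/d)
```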
Problem: Now suppose, instead of working with a strictly degenerate \(T=0\) Fermi gas, one heats the gas up a little to some strictly positive temperature \(T>0\), but still much less than the gas’s Fermi temperature \(T_F:=E_F/k_B\). In this low-\(T\) regime, use the Sommerfeld expansion to show that the chemical potential \(\mu(T)\) decreases (provided \(d\geq 3\)) quadratically from its \(T=0\) value of \(\mu(T=0)=E_F\), in particular \(\partial\mu/\partial T|_{T=0}=0\) so to \(1^{\text{st}}\)-order provided \(T\ll T_F\) one can often get away with approximating the chemical potential \(\mu\approx E_F\) by its constant value at \(T=0\).
Solution:
A comment: recall that the Bose-Einstein distribution \(\frac{1}{e^{\beta(E-\mu)}-1}\) comes from summing a suitable geometric series in the partition function. The idea of the Sommerfeld expansion is kinda to undo this step, recasting distribution back into its geometric series form…except the catch here is that one is working with the Fermi-Dirac distribution, not the Bose-Einstein, and indeed in the derivation of the Fermi-Dirac distribution there was no geometric series involved (or a trivial geometric series of just \(2\) terms if one likes), yet the way the sum is being unwrapped is more in the spirit of Bose-Einstein statistics…is there any connection here or just a mere mathematical coincidence?
Finally, it is clear that one can re-express the heat capacity in terms of \(N\) and \(E_F\) (the fixed variables) as:

\[C_V=\frac{\pi^2}{3}g(E_F)k^2T=\frac{\pi^2}{3}\cdot\frac{3N}{2E_F}\,k^2T\]
leading to the linear heat capacity behavior of the low-\(T\) ideal Fermi gas:
\[C_V=\frac{\pi^2}{2}Nk\frac{T}{T_F}\]
Ignoring the \(\pi^2/2\) prefactor which came from the detailed Sommerfeld expansion of the polylogarithms, there is a simple intuitive way to understand this formula: the number of Fermi surface fermions living within \(kT\) of the Fermi energy \(E_F\) is \(g(E_F)kT\) and the energy of each fermion is of order \(kT\) so the total energy of all Fermi surface fermions is \(E\sim g(E_F)(kT)^2\). If one adds some energy \(dE\) into the ideal Fermi gas, then essentially all this energy has to go into the Fermi surface fermions so that one may legitimately equate \(dE\sim g(E_F)k^2TdT\) reproducing the linear heat capacity:

\[C_V=\frac{dE}{dT}\sim g(E_F)k^2T\sim Nk\frac{T}{T_F}\]
Actually, even the pre-factor \(\pi^2/2\) from the Sommerfeld expansion can almost be calculated correctly. Since \(N=\int_0^{E_F}dEg(E)\) and the integral of a power \(g(E)\sim E^{1/2}\) is just \(N=\frac{2}{3}E_Fg(E_F)\), and since this is basically a free electron gas (minus Pauli as usual), any injection of energy goes directly into Fermi surface electrons. There are \(g(E_F)k_BT\) of these states, which, assuming they’re all completely filled, also means there are that many electrons, each with the equipartition kinetic energy \(3k_BT/2\):
\[dE=g(E_F)k_BT\times\frac{3}{2}k_BT\]
so \(C_V=\partial _TE=\frac{9}{2}Nk_B\frac{T}{T_F}\).
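The exact \(\pi^2/2\) prefactor can also be checked numerically, computing \(E(T)\) from the Fermi-Dirac integrals with \(\mu(T)\) adjusted by bisection to hold \(N\) fixed (a sketch in units \(E_F=k_B=1\)):

```python
import math

# Units: E_F = k_B = 1 (so T_F = 1), with g(E) = (3/2)sqrt(E) normalized so
# that N = integral_0^1 g(E) dE = 1 at T = 0.
def fd(E, mu, T):
    x = (E - mu) / T
    return 0.0 if x > 500 else 1.0 / (math.exp(x) + 1.0)

def integrate(f, Emax=6.0, n=20000):
    dE = Emax / n
    return sum(f((i + 0.5) * dE) for i in range(n)) * dE   # midpoint rule

def N_of_mu(mu, T):
    return integrate(lambda E: 1.5 * math.sqrt(E) * fd(E, mu, T))

def mu_of_T(T):
    # bisect for the chemical potential that holds N = 1 fixed
    lo, hi = -2.0, 2.0
    for _ in range(40):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if N_of_mu(mid, T) > 1.0 else (mid, hi)
    return 0.5 * (lo + hi)

def E_of_T(T):
    mu = mu_of_T(T)
    return integrate(lambda E: 1.5 * E**1.5 * fd(E, mu, T))

T, h = 0.02, 0.002
Cv = (E_of_T(T + h) - E_of_T(T - h)) / (2 * h)   # numerical dE/dT
print(Cv / T)   # ≈ pi^2/2 ≈ 4.93, the Sommerfeld prefactor
```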
A more visually intuitive way to understand how \(\mu\) depends on \(T\):
The theory of ideal Fermi gases has diverse applications, ranging from electrons \(e^-\) in a conductor (as justified by Landau’s Fermi liquid theory) to astrophysics (e.g. white dwarf stars are supported by electron degeneracy pressure, neutron stars are supported by neutron degeneracy pressure, thanks to the fact that both electrons \(e^-\) and neutrons \(n^0\) are fermions) to Pauli paramagnetism and Landau diamagnetism in condensed matter physics.
The purpose of this post is to prove several general identities concerning the quantum statistical mechanics of an isolated, ideal Bose gas at equilibrium.
Problem #\(1\): Specify the physics (i.e. write down the Hamiltonian \(H\) for an isolated, ideal Bose gas).
Solution #\(1\): Because the Bose gas is isolated, there is no external potential \(V_{\text{ext}}=0\) and because it is ideal, there are no internal interactions \(V_{\text{int}}=0\). This leaves only the relativistic kinetic energy, and the single-boson Hamiltonian \(H\) is given by the usual dispersion relation:
\[H=\sqrt{\textbf P^2c^2+m^2c^4}-mc^2\]
However, for the typical case of nonrelativistic \(|\textbf P|\ll mc\) massive bosons, \(H\approx|\textbf P|^2/2m\), and for the less typical but still important case of (necessarily relativistic) massless \(m=0\) bosons (e.g. photons), \(H=|\textbf P|c\).
Problem #\(2\): By carefully considering what it means to be a boson, explain why one should work in the grand canonical ensemble.
Solution #\(2\): If \(\mathcal H\) denotes a single-boson state space, then the state space of \(N\) identical bosons is the \(N\)-fold symmetric tensor product \(S^N(\mathcal H)\subseteq\mathcal H^{\otimes N}\), i.e. two states \(|\Psi\rangle,|\Psi'\rangle\in S^N(\mathcal H)\) are physically equivalent \(|\Psi'\rangle\equiv|\Psi\rangle\) iff both specify exactly the same number of bosons \(N_{|\textbf k\rangle}\in\textbf N\) in each single-boson state \(|\textbf k\rangle\in\mathcal H\).
This means that, although experimentally \(N\) is typically fixed (massless \(m=0\) bosons like photons being the exception), mathematically any sum of the form \(\sum_{|\Psi\rangle\in S^N(\mathcal H)}\) for fixed \(N\) (which would arise when computing the partition function in the microcanonical or canonical ensembles) is a non-trivial combinatorics problem to parameterize. Thus, solely motivated by ease of mathematical calculation, one should allow the total number of bosons \(N\) in the Bose gas to fluctuate in diffusive equilibrium with an external particle bath at chemical potential \(\mu\), hence working in the grand canonical ensemble.
Problem #\(3\): Write down an expression for the grand canonical potential \(\Phi\), stating any assumptions.
Solution #\(3\): As usual, \(\Phi=-\beta^{-1}\ln\mathcal Z\), where the grand canonical partition function is:

\[\mathcal Z=\prod_{|\textbf k\rangle\in\mathcal H}\sum_{N_{|\textbf k\rangle}=0}^{\infty}e^{\beta(\mu-E_{|\textbf k\rangle})N_{|\textbf k\rangle}}=\prod_{|\textbf k\rangle\in\mathcal H}\frac{1}{1-e^{\beta(\mu-E_{|\textbf k\rangle})}}\]
where, from Solution #\(1\), the energy of the single-boson plane wave state is \(E_{|\textbf k\rangle}=\sqrt{\hbar^2|\textbf k|^2c^2+m^2c^4}-mc^2\). The geometric series converges for all \(|\textbf k\rangle\in\mathcal H\) iff \(\mu<E_{|\textbf k\rangle}\) for all \(|\textbf k\rangle\in\mathcal H\); since the kinetic energy is positive semi-definite (reaching its global minimum \(E_{|\textbf 0\rangle}=0\) for the \(\textbf k=\textbf 0\) ground state), this in turn is logically equivalent to the condition of a strictly negative chemical potential \(\mu<0\) (how to explain that this condition is violated for massless bosons and also for Bose-Einstein condensates, both of which have \(\mu=0\)? edit: one way I just thought of approaching this is to take Zoran’s perspective about the \(c\) subscript being placed on \(N_c\) rather than the experimentally more pertinent \(T_c\), i.e. recall the argument was that for a fixed \(T\), one can find an \(N_c\) such that if \(N=N_c\), then \(T=T_c\)…but then recalling \(T_c\propto N^{2/3}\) in a box trap or \(T_c\propto N^{1/3}\) in a harmonic trap, hence monotonically increasing functions of \(N\), so for \(N>N_c\), the occupation of the ground state is \(N-N_c\), so in principle one can get a BEC at room temperature \(T\), one just needs to surpass a ridiculous \(N_c\)). Hence:

\[\Phi=\beta^{-1}\sum_{|\textbf k\rangle\in\mathcal H}\ln\left(1-e^{\beta(\mu-E_{|\textbf k\rangle})}\right)\]
Problem #\(5\): Write down the relativistic density of states \(g(k)\). Hence compute \(g(E)\). What does it reduce to in the non-relativistic limit \(E\ll mc^2\) and in the massless \(m=0\) limit?
Solution #\(5\): Assuming periodic boundary conditions on a finite box of volume \(V\) (with \(V\to\infty\) understood in the thermodynamic limit):
\[g(k)=\frac{\sigma V}{(2\pi)^3}4\pi k^2\]
where \(\sigma\) is some additional factor accounting for degrees of freedom besides \(\textbf k\) (e.g. \(\sigma=2\) for the polarization qubit of a photon). So:
Problem #\(6\): Hence, using Solution #\(5\), estimate the excited state population \(N^*\) and corresponding excited kinetic energy \(E^*\). Explain why the ground state is not accounted for.
where \(z:=e^{\beta\mu}\in (0, 1)\) is called the fugacity of the ideal Bose gas, \(\lambda=\sqrt{\frac{2\pi\hbar^2}{mkT}}\) is the thermal de Broglie wavelength of the ideal Bose gas, and the polylogarithm is defined by the series \(\text{Li}_s(z):=\sum_{n=1}^{\infty}\frac{z^n}{n^s}\), so for instance \(\text{Li}_s(1)=\zeta(s)\) and \(\int_0^{\infty}\frac{x^{s-1}}{z^{-1}e^x-1}dx=\Gamma(s)\text{Li}_s(z)\).
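Since these polylogarithm facts get used repeatedly below, a quick numerical sanity check may be reassuring. Here is a minimal pure-Python sketch (the series truncation and quadrature grid are arbitrary illustrative choices) verifying \(\int_0^{\infty}\frac{x^{s-1}}{z^{-1}e^x-1}dx=\Gamma(s)\text{Li}_s(z)\) at, say, \(s=3/2\) and \(z=1/2\):

```python
import math

def polylog(s, z, terms=200):
    """Series definition Li_s(z) = sum_{n>=1} z^n / n^s (fine for 0 < z < 1)."""
    return sum(z**n / n**s for n in range(1, terms + 1))

def bose_integral(s, z, upper=60.0, steps=200_000):
    """Composite Simpson approximation of the Bose integral
    ∫_0^∞ x^{s-1}/(z^{-1} e^x - 1) dx; for s > 1 and z < 1 the integrand
    vanishes at x = 0 and decays like e^{-x}, so a truncated grid suffices."""
    h = upper / steps
    total = 0.0
    for i in range(steps + 1):
        x = i * h
        f = x**(s - 1) / (math.exp(x) / z - 1) if x > 0 else 0.0
        w = 1 if i in (0, steps) else (4 if i % 2 else 2)  # Simpson weights
        total += w * f
    return total * h / 3

s, z = 1.5, 0.5
lhs = bose_integral(s, z)
rhs = math.gamma(s) * polylog(s, z)
print(lhs, rhs)  # the two sides agree to high precision
```

(The same `polylog` at \(z=1\) recovers \(\zeta(s)\), term by term.)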
Recalling that \(\Phi=-pV\) in the grand canonical ensemble, one therefore obtains (in an indirect form) the equation of state for an ideal Bose gas:
\[pV=\frac{2}{3}\langle E\rangle\]
Or, working in the thermodynamic limit henceforth:
\[pV=\frac{2}{3}E\]
At first, this looks exactly the same as the equation of state \(pV=NkT\) for just a classical (non-bosonic) ideal gas since there the kinetic energy is \(E=\frac{3}{2}NkT\). To see how in fact the equation of state for the ideal Bose gas is not the same as that of the ideal classical gas (i.e. that \(E\neq\frac{3}{2}NkT\) for the ideal Bose gas), clearly one must find how \(E=E(N,T)\) is related to \(N\) and \(T\). Conceptually this is straightforward since above one has already computed \(N=N(\mu, T)\) and \(E=E(\mu, T)\), so one just has to first invert \(N=N(\mu,T)\) in the form \(\mu=\mu(N,T)\) and then substitute this into \(E=E(\mu,T)=E(\mu(N,T),T)=E(N,T)\) which can be plugged into the equation of state to obtain \(pV=\frac{2}{3}E(N,T)\). Practically, there is no simple analytical way to do this for arbitrary temperatures \(T\) and chemical potential \(\mu\).
Instead, the next best thing one can hope for is to get a sense of the physics at the two extremes of the fugacity \(z\in(0,1)\), namely the high-temperature limit \(z\to 0\) and the low-temperature limit \(z\to 1\) (it is a priori counterintuitive that \(z\to 0\) is a high-\(T\) expansion or that \(z\to 1\) is a low-\(T\) expansion considering \(\mu<0\); this only becomes apparent a posteriori). In the high-\(T\) case \(z\to 0\), some algebra gives the second-order virial expansion for the high-temperature equation of state of an ideal Bose gas:
Thus, compared to an ideal gas at the same temperature \(T\), the effect of bosonic statistics is to reduce the pressure \(p\) a little bit, but otherwise, at high temperatures \(T\to\infty\), the ideal Bose gas and ideal classical gas are basically the same.
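This slight reduction in pressure can be checked numerically along the lines just described: invert \(n\lambda^3=\text{Li}_{3/2}(z)\) for the fugacity by bisection, then compare \(p\lambda^3/kT=\text{Li}_{5/2}(z)\) against the virial expansion \(n\lambda^3\left(1-\frac{n\lambda^3}{4\sqrt 2}\right)\). A pure-Python sketch (the value \(n\lambda^3=0.01\) is an arbitrary illustrative choice in the dilute regime):

```python
import math

def Li(s, z, terms=400):
    # Polylogarithm by its defining series (adequate for 0 < z < 0.9 here)
    return sum(z**n / n**s for n in range(1, terms + 1))

def fugacity(n_lambda3, tol=1e-14):
    # Invert n λ³ = Li_{3/2}(z) for z ∈ (0, 1) by bisection (Li is monotone)
    lo, hi = 0.0, 0.9
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if Li(1.5, mid) < n_lambda3:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

n_lambda3 = 0.01          # dilute / high-temperature regime, n λ³ ≪ 1
z = fugacity(n_lambda3)
p_exact = Li(2.5, z)      # p λ³ / kT from the grand potential
p_virial = n_lambda3 * (1 - n_lambda3 / (4 * math.sqrt(2)))
print(z, p_exact, p_virial)  # bosonic statistics lowers p slightly below nkT
```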
The more interesting physics lurks in the low-\(T\) limit \(z\to 1\). In this case, it is clear that the number of bosons in the ideal Bose gas approaches:
However, suppose one were to cool below the critical temperature \(T<T_c\). Supposing that \(z\) remains capped at \(1\) (otherwise the polylogarithm \(\text{Li}_{3/2}(z)\) would diverge for \(z>1\), as its defining series suggests), then because \(\lambda\propto T^{-1/2}\), this implies that the number of bosons \(N\) should decrease, but that is absurd because \(V\) is constant and the number of bosons \(N\) does not fluctuate enough in the thermodynamic limit \(\sigma_N/N\sim N^{-1/2}\) to explain this decrease. The resolution to this paradox is subtle and cuts to the heart of Bose-Einstein condensation. Recall from the Bose-Einstein distribution that the Bose occupation number \(N_0=\langle N_0\rangle\) of the single-boson ground state \(|0\rangle\) (whose energy is \(E_0=0\)) is:
\[N_0=\frac{1}{z^{-1}-1}\]
So more and more bosons in the ideal Bose gas will, in the low-temperature limit \(z\to 1\), condense into the ground state \(|0\rangle\) as evidenced by the blowup of \(N_0\) at \(z=1\). However, earlier the replacement \(\sum_{|\textbf k\rangle\in\mathcal H}\mapsto\int_0^{\infty}g(E)dE\) when evaluating the average number of bosons \(\langle N\rangle\) would have (in this \(z\to 1\) edge case) undercounted all of these bosons condensing in the ground state \(|0\rangle\) because the density of states \(g(E)\propto\sqrt{E}\) vanishes \(g(0)=0\) at the ground state energy \(E_0=0\). Instead, one can just manually add \(N_0=(z^{-1}-1)^{-1}\) into the total boson count “by hand” to obtain the revised count:
The way to read this is that the first term \(\frac{V}{\lambda^3}\text{Li}_{3/2}(z)\) is the total number of bosons not in the ground state, which as \(z\to 1\) should become negligible in comparison to the Bose occupation number \(N_0\) of the ground state. In this limit, one has:
So the fugacity \(z\) is naturally capped by however many bosons \(N\) one started with, regardless of how low the temperature \(T\to 0\) drops. More precisely, as \(T<T_c\) drops below the critical temperature, the fraction \(N_0/N\) of bosons in the ground state \(|0\rangle\) grows monotonically towards \(N_0/N\to 1\) as \(T/T_c\to 0\) in the manner:
This low-temperature bosonic communism is called Bose-Einstein condensation.
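For concreteness, here is a back-of-the-envelope sketch (pure Python; the \(^{87}\text{Rb}\) mass and the number density \(n\sim 10^{20}\text{ m}^{-3}\) are illustrative values assumed here, not anything from the derivation above) of the critical temperature \(T_c\) from the condensation condition \(n\lambda^3=\zeta(3/2)\), together with the condensate fraction \(N_0/N=1-(T/T_c)^{3/2}\) in a box trap:

```python
import math

hbar = 1.054571817e-34   # J·s
kB = 1.380649e-23        # J/K
zeta_3_2 = 2.612375      # ζ(3/2)

# Illustrative numbers (assumed, not from the post): 87Rb atoms at a
# typical dilute-gas number density n ~ 1e20 m^-3
m = 87 * 1.66053906660e-27
n = 1.0e20

# Condensation condition n λ³ = ζ(3/2) with λ = sqrt(2πħ²/(m k T)),
# solved for T, gives the critical temperature of the uniform Bose gas
Tc = (2 * math.pi * hbar**2 / (m * kB)) * (n / zeta_3_2)**(2 / 3)
print(f"Tc ≈ {Tc * 1e9:.0f} nK")

# Below Tc the condensate fraction grows as N0/N = 1 - (T/Tc)^{3/2}
for T_over_Tc in (1.0, 0.75, 0.5, 0.25, 0.0):
    frac = 1 - T_over_Tc**1.5
    print(f"T/Tc = {T_over_Tc:.2f} -> N0/N = {frac:.3f}")
```

The resulting \(T_c\) of a few hundred nanokelvin is the right order of magnitude for dilute-gas BEC experiments.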
As for the low-temperature \(T<T_c\) equation of state for the ideal Bose gas, one has the previous grand canonical potential but with an additional contribution from the BEC in the ground state:
However, this makes it clear that, unlike for the total boson number \(N\), here the ground state contribution to the grand canonical potential \(\Phi\) is actually negligible in the thermodynamic limit because \(V/\lambda^3\sim N\) is much larger than \(\ln(N)\). The low-\(T\) equation of state for the ideal Bose gas (well actually now the BEC) is therefore:
Clearly, this is now very different from the classical ideal gas. Notably, the pressure \(p\) is independent of the bosonic number density \(N/V\), the intuition being that the vast number of bosons condensed in the motional ground state \(|\textbf k=\textbf 0\rangle\) are (roughly) frozen in place and therefore make a negligible contribution to the pressure \(p\).
For now, there is one more interesting thing to mention about Bose-Einstein condensation; clearly it’s a phase transition between two radically different states of matter, namely from a(n ideal Bose) gas to a BEC (cf. gaseous steam \(\text H_2\text O(g)\) condensing into liquid water \(\text H_2\text O(\ell)\)). Usually phase transitions are associated with some kind of discontinuity in physical properties at the phase transition; how does this manifest in the case of Bose-Einstein condensation at the critical temperature \(T=T_c\)? It turns out that the derivative \(\frac{\partial C_V}{\partial T}\) of the isochoric heat capacity \(C_V\) with respect to the temperature \(T\) is discontinuous at the critical temperature \(T=T_c\) (although the heat capacity \(C_V=C_V(T)\) itself is continuous). This is reminiscent of (and related to) the superfluid \(\lambda\)-transition seen in bosonic \(^4\text{He}\) at \(T\approx 2.17\text{ K}\).
Problem: Consider an isolated atom with time-independent Hamiltonian \(H_0\). Such an atom will have many bound \(H_0\)-eigenstates, but for simplicity focus on just two such bound states (think of it as a qubit) \(|0\rangle\) and \(|1\rangle\) (called the ground state and the excited state) separated by a resonant frequency \(\omega_0=(E_1-E_0)/\hbar\). If one now proceeds to shine light on the atom of frequency \(\omega\), show that the corresponding interaction potential \(V_{/H_0}(t)\) in the interaction picture modulo \(H_0\) is given approximately by:
where the detuning \(\delta:=\omega-\omega_0\), and state all assumptions.
Solution: The assumptions are:
(Semiclassical approximation) The atom is treated “quantumly” but the light is treated classically.
(\(\alpha\times\)Stark \(=\) Zeeman) Within the semiclassical approximation, the effect of the \(\textbf B\)-field is ignored compared to the \(\textbf E\)-field.
(Dipole approximation) The electric field is approximately spatially independent \(\textbf E(t)=\textbf E_0\cos(\omega t)\) (by evaluating it at the atom’s position).
(Two-level system) Assume no other \(H_0\)-eigenstates are relevant, i.e. \(|0\rangle\langle 0|+|1\rangle\langle 1|\approx 1\).
(\([H_0,\Pi]=0\)) The ground state \(|0\rangle\) and the excited state \(|1\rangle\) are both \(\Pi\)-eigenstates with opposite parity eigenvalues \(\pm 1\).
(Rotating wave approximation) The detuning is small, i.e. \(|\delta|\ll\omega+\omega_0\) (notice this is compatible with assumption #\(4\)).
(Monochromatic) The light is assumed to be of a single frequency \(\omega\) with zero spectral width.
Then the interaction potential in the Schrodinger picture is:
where the matrix elements are in the obvious Hilbert space \(\text{span}_{\textbf C}|0\rangle,|1\rangle\) and the diagonal entries vanish by the parity assumption. Here, \(\Omega\) is the Rabi frequency and defined to capture those non-vanishing off-diagonal matrix elements (both gauge-fixed to be real):
(intuition: \(\Omega\) simultaneously contains information about how “bright” the light is and also how strongly this particular E\(1\) perturbation couples \(|0\rangle\) and \(|1\rangle\)). In the interaction picture modulo \(H_0\):
where one has written \(\cos \omega t=(e^{i\omega t}+e^{-i\omega t})/2\) to make the “lock-in detection” explicit and then low-pass filtered with RWA. In particular, seeing the factor of \(1/2\) in front is a smoking gun for RWA. This then matches the claimed result with \(\sigma_+:=|1\rangle\langle 0|=\begin{pmatrix}0&0\\1&0\end{pmatrix}\) and \(\sigma_{\pm}^{\dagger}=\sigma_{\mp}\).
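The rotating wave approximation can be checked numerically: integrating the full (non-RWA) Schrodinger equation with \(H(t)/\hbar=\omega_0|1\rangle\langle 1|+\Omega\cos(\omega t)\sigma_x\) reproduces the Rabi-oscillation formula \(P_1(t)=\frac{\Omega^2}{\tilde\Omega^2}\sin^2\frac{\tilde\Omega t}{2}\) (derived in the next problem) up to small corrections of order \(\Omega/(\omega+\omega_0)\) from the discarded counter-rotating terms. A sketch (SciPy, \(\hbar=1\), with illustrative parameters satisfying \(\Omega,|\delta|\ll\omega+\omega_0\)):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (ħ = 1 units): strong hierarchy Ω, |δ| ≪ ω + ω0
Omega, omega0, delta = 1.0, 50.0, 0.5
omega = omega0 + delta
Omega_t = np.hypot(Omega, delta)          # generalized Rabi frequency

def schrodinger(t, c):
    # Full (no RWA) two-level Hamiltonian: H/ħ = ω0 |1><1| + Ω cos(ωt) σx
    V = Omega * np.cos(omega * t)
    return -1j * np.array([V * c[1], V * c[0] + omega0 * c[1]])

t_eval = np.linspace(0, 2 * np.pi / Omega_t, 200)   # one full Rabi period
sol = solve_ivp(schrodinger, (0, t_eval[-1]), np.array([1.0 + 0j, 0j]),
                t_eval=t_eval, rtol=1e-10, atol=1e-12)

P1_full = np.abs(sol.y[1])**2
P1_rwa = (Omega / Omega_t)**2 * np.sin(Omega_t * t_eval / 2)**2
print(np.max(np.abs(P1_full - P1_rwa)))  # small: counter-rotating ~ Ω/(ω+ω0)
```

The residual discrepancy (including the tiny Bloch-Siegert shift) shrinks as \(\omega+\omega_0\) is cranked up relative to \(\Omega\).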
Problem: Having found \(V_{/H_0}(t)\), show that \(|\psi_{/H_0}(t)\rangle\) undergoes Rabi oscillations at the generalized Rabi frequency \(\tilde{\Omega}:=\sqrt{\Omega^2+\delta^2}\).
If one expands out the interaction picture state \(|\psi_I(t)\rangle=\langle 0|\psi_I(t)\rangle|0\rangle+\langle 1|\psi_I(t)\rangle|1\rangle\) in the subspace as well, then one obtains a non-autonomous linear dynamical system with \(2\pi/\delta\)-periodic Floquet forcing:
Nevertheless, it turns out to be very easy in this case to decouple the time evolutions of the projections \(\langle 0|\psi_I\rangle\) and \(\langle 1|\psi_I\rangle\) from each other into two undriven, damped harmonic oscillators (even though ironically the atom is being driven with light at \(\omega_{\text{ext}}=\omega_{01}+\delta\)):
Assuming the initial condition \(|\psi_I(0)\rangle=|0\rangle\) that the atom starts at time \(t=0\) in the ground state \(|0\rangle\), the solutions are:
where \(\tilde{\Omega}:=\sqrt{\Omega^2+\delta^2}\) is called the generalized Rabi frequency. Given this harmonic oscillator structure, it makes sense that the atom’s interaction picture state \(|\psi_I(t)\rangle\) roughly speaking “oscillates” between the ground state \(|0\rangle\) and the excited state \(|1\rangle\), called Rabi oscillations, but these oscillations are not actually damped because the “damping coefficient” \(\pm i\delta\) was imaginary. Note that although all of the above discussion has focused on Rabi oscillations in the context of electric dipole transitions, there are also times when the magnetic field \(\textbf B_{\text{ext}}\) rather than the electric field \(\textbf E_{\text{ext}}\) dominates the physics (e.g. fine structure or hyperfine structure transitions) in which case there would also be Rabi oscillations in the context of magnetic dipole transitions.
Rabi oscillations are more intuitive when expressed in terms of the probabilities prescribed by the Born rule. In this case, one has (dropping the \(I\)-subscript because it no longer matters):
where remember that \(\sin^2\frac{\tilde{\Omega}}{2}t=\frac{1}{2}(1-\cos\tilde{\Omega}t)\) oscillates at the generalized Rabi frequency \(\tilde{\Omega}\) and not \(\tilde{\Omega}/2\). In particular, these Rabi “probability oscillations” are most pronounced when the light is resonant with the atom, i.e. \(\delta=0\). In this case, \(\tilde{\Omega}=\Omega\) and one has:
Such \(\delta=0\) resonant Rabi oscillations also provide a way to experimentally prepare various qubit states in the lab simply by controlling the driving time \(\Omega t\) for which one applies the light. For instance, if one applies a \(\pi\)-pulse so that \(\Omega t=\pi\), then in theory one is guaranteed to excite the atom \(|0\rangle\mapsto -i|1\rangle\equiv|1\rangle\). Alternatively, if one applies an \(\Omega t=\pi/2\)-pulse, then this yields the “circularly polarized” state \(|0\rangle\mapsto (|0\rangle-i|1\rangle)/\sqrt{2}\).
More generally when the detuning \(\delta\neq 0\) is off-resonance, the maximum probability of an electric dipole transition \(|0\rangle\to|1\rangle\) from the ground state to the excited state that one can achieve is \(\Omega^2/\tilde{\Omega}^2\), although in the limit as \(\Omega\to\infty\) (e.g. cranking up the laser), this ratio does approach \(1\).
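These claims are easy to verify numerically. A minimal sketch (SciPy, \(\hbar=1\), writing the steady-frame Hamiltonian as \(H=\frac{1}{2}(\Omega\sigma_x+\delta\sigma_z)=\frac{1}{2}\tilde{\boldsymbol{\Omega}}\cdot\boldsymbol{\sigma}\) with illustrative parameter values) that checks both the off-resonant maximum \(\Omega^2/\tilde\Omega^2\) and the \(\pi\)- and \(\pi/2\)-pulse preparations:

```python
import numpy as np
from scipy.linalg import expm

# Steady-frame Hamiltonian (ħ = 1): H = (1/2)(Ω σx + δ σz), from Ω̃ = (Ω, 0, δ)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def P1(Omega, delta, t):
    """Excited-state population at time t, starting from |0> = (1, 0)^T."""
    H = 0.5 * (Omega * sx + delta * sz)
    psi = expm(-1j * H * t) @ np.array([1, 0], dtype=complex)
    return abs(psi[1])**2

Omega, delta = 1.0, 0.75
Omega_t = np.hypot(Omega, delta)
ts = np.linspace(0, 4 * np.pi / Omega_t, 2001)
peak = max(P1(Omega, delta, t) for t in ts)
print(peak, (Omega / Omega_t)**2)   # numerical max vs Ω²/Ω̃²

# On resonance, a π-pulse (Ωt = π) transfers all population to |1>
print(P1(1.0, 0.0, np.pi))
# while a π/2-pulse leaves an equal superposition
print(P1(1.0, 0.0, np.pi / 2))
```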
As with any qubit system, one can visualize the dynamics of Rabi oscillations on the Bloch sphere. Although here the ket \(|\psi_I(t)\rangle\) is tautologically a pure state, one can nevertheless work with its interaction picture density operator \(\rho_I(t)=|\psi_I(t)\rangle\langle\psi_I(t)|\). However, despite one’s first instinct being to work with \(\rho_I\) in the \(H\)-eigenbasis of the ground state \(|0\rangle\) and excited state \(|1\rangle\), it turns out to be more convenient to first boost unitarily into a “steady-state picture”. Specifically, if one instead works with the “steady-state basis” \(|\tilde 0\rangle:=e^{i\delta t/2}|0\rangle\) and similarly \(|\tilde 1\rangle:=e^{-i\delta t/2}|1\rangle\), along with the bras \(\langle\tilde 0|=e^{-i\delta t/2}\langle 0|\) and \(\langle\tilde 1|=e^{i\delta t/2}\langle 1|\), then starting from the earlier Schrodinger equation in the rotating wave approximation:
is now time-independent at the expense of gaining back the diagonal matrix elements, where the generalized Rabi vector is given by \(\tilde{\boldsymbol{\Omega}}:=(\Omega,0,\delta)\) and has magnitude equal to the generalized Rabi frequency \(|\tilde{\boldsymbol{\Omega}}|=\sqrt{\Omega^2+\delta^2}\).
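As a quick sanity check of the light shift quoted at the start of this post, diagonalizing this steady-state Hamiltonian numerically (written as \(H=\frac{1}{2}\tilde{\boldsymbol{\Omega}}\cdot\boldsymbol{\sigma}\) with \(\hbar=1\) and arbitrary illustrative \(\Omega,\delta\)) indeed yields \(E_{\pm}=\pm|\tilde{\boldsymbol{\Omega}}|/2\):

```python
import numpy as np

# Check the light-shift eigenvalues E± = ±ħ|Ω̃|/2 (ħ = 1) of the
# steady-frame Hamiltonian H = (1/2)(Ω σx + δ σz)
Omega, delta = 2.0, 1.5
H = 0.5 * np.array([[delta, Omega], [Omega, -delta]])
E = np.linalg.eigvalsh(H)   # ascending order
print(E, np.hypot(Omega, delta) / 2)   # eigenvalues are ∓√(Ω²+δ²)/2
```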
Problem: Show that this result can also be obtained by transforming to an alternative picture (rather than the standard interaction picture mod \(H_0\)):
1. Split \(H_0=E_0|0\rangle\langle 0|+E_1|1\rangle\langle 1|\) into symmetric and antisymmetric parts:
And since the symmetric part is isotropic, one can safely discard it and keep only the antisymmetric part.
2. Starting from the Schrodinger picture, transform into a picture defined by the unitary \(U(t):=e^{-i\omega t|0\rangle\langle 0|}\).
3. Apply the rotating wave approximation.
Solution: The first part is straightforward. Then:
In this steady-state basis \(|\tilde 0\rangle,|\tilde 1\rangle\), the density matrix \([\rho_I(t)]_{|\tilde 0\rangle,|\tilde 1\rangle}^{|\tilde 0\rangle,|\tilde 1\rangle}=\frac{1}{2}(1+\tilde{\textbf b}\cdot\boldsymbol{\sigma})\) can be replaced by the conceptually simpler Bloch vector \(\tilde{\textbf b}\in\textbf R^3\) of the qubit whose components \(\tilde{\textbf b}=(\tilde b_1,\tilde b_2,b_3)\) relate back to the matrix elements of the density operator \(\rho_I\) via:
where the populations \(\tilde{\rho}_{00}=\rho_{00},\tilde{\rho}_{11}=\rho_{11}\) are unaffected by the boost (relative to if the matrix elements of \(\rho_I\) were expressed in the \(|0\rangle,|1\rangle\) basis) and the coherences \(\tilde{\rho}_{01}=e^{-i\delta t}\rho_{01},\tilde{\rho}_{10}=e^{i\delta t}\rho_{10}\) are affected:
From Liouville’s equation \(i\hbar\dot{\rho}_I=[H_{\infty},\rho_I]\) and the standard identity of Pauli matrices \([\tilde{\boldsymbol{\Omega}}\cdot\boldsymbol{\sigma},\tilde{\textbf b}\cdot\boldsymbol{\sigma}]=2i(\tilde{\boldsymbol{\Omega}}\times\tilde{\textbf b})\cdot\boldsymbol{\sigma}\), one immediately obtains the precession of the Bloch vector \(\tilde{\textbf b}\) around the generalized Rabi vector \(\tilde{\boldsymbol{\Omega}}\) at the generalized Rabi frequency:
For large detunings \(\delta\), the Bloch vector \(\tilde{\textbf b}\) precesses faster (i.e. one gets faster Rabi oscillations) though at the expense of the maximum excited population \(\text{max}(\rho_{11})=\Omega^2/\tilde{\Omega}^2\) achievable (as already mentioned before). Note also that the generalized Rabi vector \(\tilde{\boldsymbol{\Omega}}\) does depend on when one starts the clock; for instance if \(\textbf E_{\text{ext}}(\textbf x,t)=\textbf E_0\sin(\textbf k_{\text{ext}}\cdot\textbf x-\omega_{\text{ext}}t)\) instead of \(\cos\), then in this case \(\tilde{\boldsymbol{\Omega}}=(0,\Omega,\delta)\) instead, etc.
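The Pauli commutator identity underlying this precession is also easy to spot-check numerically with random vectors (a small NumPy sketch):

```python
import numpy as np

# Numerical check of the identity [a·σ, b·σ] = 2i (a×b)·σ that drives
# the Bloch-vector precession db/dt ∝ Ω̃ × b
rng = np.random.default_rng(0)
sigma = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]])   # (σx, σy, σz)

a, b = rng.standard_normal(3), rng.standard_normal(3)
A = np.einsum('i,ijk->jk', a, sigma)    # a·σ
B = np.einsum('i,ijk->jk', b, sigma)    # b·σ
lhs = A @ B - B @ A
rhs = 2j * np.einsum('i,ijk->jk', np.cross(a, b), sigma)
print(np.max(np.abs(lhs - rhs)))        # agrees to machine precision
```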
Optical Bloch Equations
From the stimulated absorption and emission interactions of the atom with the external optical field, Rabi oscillations were seen to emerge. However, Einstein’s statistical argument showed that in addition to stimulated absorption/emission, there is also spontaneous emission. How does this affect the physics? Although a rigorous treatment requires quantizing the EM field, phenomenologically one can simply add a decay term (in the spirit of Einstein) of the form \(-\Gamma\rho_{11}\) (where \(\Gamma=A_{10}\) in the Einstein model) to the rate equation for the excited state population \(\dot{\rho}_{11}=\frac{\Omega}{2}\tilde{b}_2-\Gamma\rho_{11}\). At low laser intensities, \(\Gamma\sim 2\pi\times 10\text{ MHz}\) will in fact typically be greater than the Rabi frequency \(\Omega\), making spontaneous decay an important mechanism by which otherwise coherent Rabi oscillations decohere over time. Although it is immediate that \(\dot b_3=-\Omega\tilde b_2-\Gamma(b_3-1)\), what is not so clear is how \(\tilde b_1,\tilde b_2\) are affected by the spontaneous decay \(\Gamma\neq 0\). By analogy with a classical damped electric dipole, it turns out one can phenomenologically obtain the optical Bloch equations:
Unlike the earlier precession \(\dot{\tilde{\textbf b}}=\tilde{\boldsymbol{\Omega}}\times\tilde{\textbf b}\) in the absence \(\Gamma=0\) of spontaneous emissions, now, in the steady state limit \(t\gg 1/\Gamma\) of long driving times where \(\dot{\tilde b}_1=\dot{\tilde b}_2=\dot b_3=0\), the Bloch vector eventually settles onto:
along with the strongly driven limit \(\rho_{11}\to 1/2\) of the excited state population as \(\Omega\to\infty\). Equivalently, in terms of the spontaneous decay rate \(\gamma:=\Gamma\rho_{11}\), one has:
with \(s:=I/I_{\text{sat}}=2(\Omega/\Gamma)^2\) the normalized (saturation) intensity. Thus, although one’s intuition would suggest that simply cranking up the laser intensity \(s\to\infty\) should excite all atoms into the excited state \(|1\rangle\), the saturation of the excited state population \(\rho_{11}\to 1/2\) (equivalently, of the spontaneous decay rate \(\gamma\to\Gamma/2\)) prevents one from achieving this.
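To see this saturation numerically, here is a sketch (SciPy, with illustrative parameters and one common sign convention consistent with \(\dot b_3=-\Omega\tilde b_2-\Gamma(b_3-1)\) above: coherent precession plus damping \(-\Gamma/2\) on the coherences) that integrates the optical Bloch equations out to the steady state and compares \(\rho_{11}=(1-b_3)/2\) against the standard driven-dissipative result \(\frac{s/2}{1+s+(2\delta/\Gamma)^2}\):

```python
import numpy as np
from scipy.integrate import solve_ivp

Gamma, Omega, delta = 1.0, 2.0, 0.5   # illustrative values, in units of Γ

def obe(t, b):
    # Coherent precession (consistent with db3/dt = -Ω b2) plus damping:
    # coherences decay at Γ/2, the inversion relaxes toward b3 = 1 at rate Γ
    b1, b2, b3 = b
    return [delta * b2 - 0.5 * Gamma * b1,
            Omega * b3 - delta * b1 - 0.5 * Gamma * b2,
            -Omega * b2 - Gamma * (b3 - 1)]

# Start in the ground state b = (0, 0, 1) and integrate to t >> 1/Γ
sol = solve_ivp(obe, (0, 50 / Gamma), [0.0, 0.0, 1.0], rtol=1e-10, atol=1e-12)
rho11 = (1 - sol.y[2, -1]) / 2

s = 2 * (Omega / Gamma)**2
rho11_ss = (s / 2) / (1 + s + (2 * delta / Gamma)**2)
print(rho11, rho11_ss)   # steady-state excited population stays below 1/2
```

Cranking `Omega` up pushes `rho11` toward (but never past) the saturated value \(1/2\).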