Gaussians & Feynman Diagrams

Although Feynman diagrams are often first encountered in statistical/quantum field theory contexts, where they are employed in perturbative calculations of partition/correlation functions based on Wick’s theorem, there is a lot of “fluff” in that setting which obscures their underlying simplicity. The purpose of this post is therefore to build up to a simpler, intuitive view of what Feynman diagrams are really about that hopefully demystifies them.

Problem #\(1\): Calculate the \(n\)-th moment:

\[\langle x^n\rangle:=\frac{1}{\sigma\sqrt{2\pi}}\int_{-\infty}^{\infty}dx\,x^n e^{-x^2/2\sigma^2}\]

of a univariate normally distributed random variable \(x\) with zero mean \(\langle x\rangle=0\) (the choice of zero mean is motivated by the fact that in practice one only cares about central moments of the distribution, so to avoid writing \(x-\langle x\rangle\) everywhere it is convenient to just set \(\langle x\rangle:=0\)).

Solution #\(1\): It is clear that for odd \(n=1,3,5,…\), the integrand is an odd function, so not only is \(\langle x\rangle=0\) by construction, but all higher odd moments vanish as well: \(\langle x^3\rangle=\langle x^5\rangle=…=0\). As for even \(n=0,2,4,…\), there are several ways:

Way #\(1\): Start with the \(n=0\) normalization (obtained in the usual manner attributed to Poisson: square the integral and pass to polar coordinates):

\[\int_{-\infty}^{\infty}dx e^{-x^2/2\sigma^2}=\sigma\sqrt{2\pi}\]

and repeatedly differentiate both sides with respect to \(-1/2\sigma^2\), i.e. apply \(\frac{\partial}{\partial(-1/2\sigma^2)}\), to pull down arbitrarily many factors of \(x^2\) inside the integral. One finds for instance:

\[\langle x^2\rangle=\sigma^2\]

\[\langle x^4\rangle=3\sigma^4\]

\[\langle x^6\rangle=15\sigma^6\]

\[\langle x^8\rangle=105\sigma^8\]

and so forth, in general following the rule \(\langle x^{2m}\rangle=(2m-1)!!\sigma^{2m}\) for even \(n=2m\), where the double factorial can also be written in terms of single factorials as:

\[(2m-1)!!=\frac{(2m)!}{2^mm!}\]
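This rule is easy to spot-check numerically; here is a minimal Python sketch (assuming NumPy and SciPy are available; \(\sigma=1.3\) is just an arbitrary test value):

```python
# Compare numerically integrated moments against (2m-1)!! * sigma^(2m).
import math
import numpy as np
from scipy.integrate import quad

sigma = 1.3
for m in range(5):
    integrand = lambda x, n=2*m: x**n * np.exp(-x**2 / (2 * sigma**2))
    moment = quad(integrand, -np.inf, np.inf)[0] / (sigma * np.sqrt(2 * np.pi))
    predicted = math.prod(range(2*m - 1, 0, -2)) * sigma**(2*m)  # (2m-1)!! sigma^{2m}
    print(2*m, moment, predicted)
```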

Way #\(2\): Substitute \(u:=x^2/2\sigma^2\) to recast the integral in terms of a gamma function:

\[\langle x^{2m}\rangle=\frac{(2\sigma^2)^m}{\sqrt{\pi}}\Gamma(m+1/2)\]

where the connection between the gamma function and factorials is well-known for half-integer arguments:

\[\Gamma(m+1/2)=(m-1/2)!=\frac{(2m-1)!!}{2^m}\sqrt{\pi}\]

(this is apparent if one starts with the well-known \((1/2)!=\sqrt{\pi}/2\) and works one’s way up from there via the factorial recursion \(z!=z(z-1)!\)).
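This half-integer gamma identity can likewise be spot-checked with nothing but the Python standard library:

```python
# Verify Gamma(m + 1/2) = (2m-1)!! * sqrt(pi) / 2^m for the first few m.
import math

for m in range(6):
    lhs = math.gamma(m + 0.5)
    rhs = math.prod(range(2*m - 1, 0, -2)) * math.sqrt(math.pi) / 2**m
    print(m, lhs, rhs)
```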

Way #\(3\): Compute the moment generating function \(\langle e^{\kappa x}\rangle\) of the normal distribution by completing the square:

\[\langle e^{\kappa x}\rangle=e^{\kappa^2\sigma^2/2}\]

and then Maclaurin-expand the resulting exponential:

\[e^{\kappa^2\sigma^2/2}=\sum_{m=0}^{\infty}\frac{\sigma^{2m}}{2^mm!}\kappa^{2m}\]

which immediately shows that all odd moments vanish while the even moments satisfy:

\[\frac{\langle x^{2m}\rangle}{(2m)!}=\frac{\sigma^{2m}}{2^mm!}\]
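A computer algebra system can do this bookkeeping automatically; a short SymPy sketch that recovers the moments \(1,\sigma^2,3\sigma^4,15\sigma^6,…\) from the series coefficients:

```python
# Read off <x^n> = n! * (coefficient of kappa^n) from the MGF's Maclaurin series.
import sympy as sp

kappa, sigma = sp.symbols('kappa sigma', positive=True)
mgf = sp.exp(kappa**2 * sigma**2 / 2)
series = sp.series(mgf, kappa, 0, 9).removeO()
for n in range(9):
    print(n, series.coeff(kappa, n) * sp.factorial(n))  # 1, 0, sigma**2, 0, 3*sigma**4, ...
```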

Problem #\(2\): From Solution #\(1\), the presence of factorials suggests a combinatorial interpretation of the result; what is this interpretation?

Solution #\(2\): Suppose one has \(6\) people that need to be paired up for a dance; how many pairings can be formed? There are \(2\) ways to think about this.

Way #\(1\): The first person can be paired with \(5\) other people. Then, after they’ve been paired off, the next unpaired person can only be paired with one of the \(3\) remaining people. And after they’ve been paired, the next unpaired person can only pair with the \(1\) other person that’s left. So the answer is \(5!!=5\times 3\times 1=15\) pairings.

Way #\(2\): There are \(6!\) permutations of the \(6\) people. However, they are going to form \(3\) pairs, which can be permuted among themselves in \(3!\) ways, and within each of the \(3\) pairs there are a further \(2!=2\) permutations. So in total there are \(\frac{6!}{2^33!}=15\) pairings.

The fact that Way #\(1\) and Way #\(2\) give the same result is just a restatement of the earlier identity \((2m-1)!!=(2m)!/2^mm!\).

In this case, however, the “people” are the \(2m\) factors of \(x\) in \(x^{2m}\)! Because all factors of \(x\) are indistinguishable, all \((2m-1)!!\) pairings of the \(2m\) factors of \(x\) into \(m\) pairs of \(x^2\) are equivalent, each pair contributing a factor of \(\langle x^2\rangle=\sigma^2\). The factor of \(\sigma^{2m}\) also follows on dimensional analysis grounds (it’s the only length scale in the normal distribution), while the numerical coefficient takes on this combinatorial pairing interpretation.
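The pairing counts are small enough to confirm by brute-force enumeration; a Python sketch (the recursive `pairings` generator is just an illustrative helper, not a library function):

```python
# Enumerate all perfect pairings of 2m people and count them: 1, 3, 15, 105, ...
def pairings(people):
    """Yield every partition of `people` into unordered pairs."""
    if not people:
        yield []
        return
    first, rest = people[0], people[1:]
    for i, partner in enumerate(rest):
        remaining = rest[:i] + rest[i+1:]
        for sub in pairings(remaining):
            yield [(first, partner)] + sub

for m in range(1, 5):
    print(2*m, sum(1 for _ in pairings(list(range(2*m)))))
```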

Problem #\(3\): Estimate the expectation \(\langle\cos(x/\sigma)\rangle\) in a univariate normal random variable \(x\) with variance \(\sigma^2\) and zero mean \(\langle x\rangle=0\).

Solution #\(3\): The integral:

\[\biggl\langle\cos\frac{x}{\sigma}\biggr\rangle=\frac{1}{\sigma\sqrt{2\pi}}\int_{-\infty}^{\infty}dx\,\cos\frac{x}{\sigma}\,e^{-x^2/2\sigma^2}\]

receives most of its contribution from small \(|x|\lesssim\sigma\), so one can hope to get a rough estimate of it by Maclaurin-expanding \(\cos\theta=1-\theta^2/2+\theta^4/24-\theta^6/720+…\):

\[\biggl\langle\cos\frac{x}{\sigma}\biggr\rangle\approx 1-\frac{\langle x^2\rangle}{2\sigma^2}+\frac{\langle x^4\rangle}{24\sigma^4}-\frac{\langle x^6\rangle}{720\sigma^6}\]

But these are just the moments that were computed above:

\[=1-\frac{1}{2}+\frac{1}{8}-\frac{1}{48}+…=\sum_{m=0}^{\infty}\frac{(-1/2)^m}{m!}\]

Alternatively, one can evaluate the expectation analytically by writing \(\cos\theta=\Re e^{i\theta}\) and completing the square to obtain:

\[\biggl\langle\cos\frac{x}{\sigma}\biggr\rangle=\frac{1}{\sqrt{e}}\approx 0.60653\]

(or one could have just recognized the earlier Maclaurin series as that of \(e^{-1/2}\)). So just taking the \(4\)th partial sum \(1-\frac{1}{2}+\frac{1}{8}-\frac{1}{48}=\frac{29}{48}\approx 0.60417\) already gets within \(0.4\%\) of the true answer. More broadly, since monomials \(x^n\) form a basis of analytic functions and expectation is linear, by computing all the moments of a distribution one in principle has access to the expectation of any analytic function with respect to that distribution.
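For the skeptical, the comparison is easy to reproduce by Monte Carlo; a minimal sketch assuming NumPy (the sample size and seed are arbitrary choices):

```python
# Compare a Monte Carlo estimate of <cos(x/sigma)> against the partial sum and exact value.
import numpy as np

rng = np.random.default_rng(0)
sigma = 2.0
x = rng.normal(0.0, sigma, size=10**6)
print(np.mean(np.cos(x / sigma)))   # ~0.6065 (Monte Carlo estimate)
print(1 - 1/2 + 1/8 - 1/48)         # 0.604166... (4th partial sum)
print(np.exp(-0.5))                 # 0.606530... (exact)
```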

Problem #\(4\): Evaluate the cumulant generating function \(\ln\langle e^{\kappa x}\rangle\) of a univariate normal random variable \(x\) with variance \(\sigma^2\) and zero mean \(\langle x\rangle=0\).

Solution #\(4\): A cinch:

\[\ln\langle e^{\kappa x}\rangle=\ln e^{\kappa^2\sigma^2/2}=\frac{\kappa^2\sigma^2}{2}\]

So it is a parabola in \(\kappa\) with curvature \(\sigma^2\) at its vertex. The point, therefore, is that besides the \(2\)nd cumulant \(\sigma^2\), all other cumulants of the normal distribution vanish! For instance, the \(3\)rd cumulant (which standardizes to the skewness) is \(\langle x^3\rangle=0\), the \(4\)th cumulant (which standardizes to the excess kurtosis) is \(\langle x^4\rangle-3\langle x^2\rangle^2=0\), etc.
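One can also see this empirically by estimating the first few cumulants from samples; a sketch assuming NumPy and SciPy, whose `kstat` computes unbiased cumulant estimators (k-statistics):

```python
# For normal samples, all k-statistics beyond the 2nd should be ~0.
import numpy as np
from scipy.stats import kstat

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.5, size=10**6)
for n in range(1, 5):
    print(n, kstat(x, n))  # ~0, ~2.25 (= 1.5**2), ~0, ~0
```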

Problem #\(4.5\): Another fun application of these ideas: define the \(n\)-th (probabilist’s) Hermite polynomial \(\text{He}_n(x)\) to be the unique monic polynomial of degree \(n\) which is orthogonal to all lower-degree Hermite polynomials with respect to the Gaussian weight function \(e^{-x^2/2}\) over the real line \(\textbf R\). Hence, calculate the first \(5\) Hermite polynomials \(\text{He}_0(x),\text{He}_1(x),\text{He}_2(x),\text{He}_3(x),\text{He}_4(x)\).

Solution #\(4.5\): From the definition given above, \(\text{He}_0(x)\) must just be a constant, and the monic requirement fixes this constant to be \(1\); thus \(\text{He}_0(x)=1\). The next Hermite polynomial must have the form \(\text{He}_1(x)=x+c_0\). To fix \(c_0\), one requires (using the fact that inner products with respect to a weight function are just expectations of products with respect to that weight function viewed as a probability distribution; here \(x/\sigma\) is standard normal, matching the unit-variance Gaussian weight):

\[\langle\text{He}_1(x/\sigma)\text{He}_0(x/\sigma)\rangle=\langle x/\sigma\rangle+c_0=0\]

so \(c_0=0\) and \(\text{He}_1(x)=x\). Next make the ansatz \(\text{He}_2(x)=x^2+c_1x+c_0\). Enforcing:

\[\langle\text{He}_2(x/\sigma)\text{He}_0(x/\sigma)\rangle=\langle (x/\sigma)^2\rangle+c_1\langle x/\sigma\rangle+c_0=0\Rightarrow c_0=-1\]

\[\langle\text{He}_2(x/\sigma)\text{He}_1(x/\sigma)\rangle=\langle (x/\sigma)^3\rangle+c_1\langle (x/\sigma)^2\rangle+c_0\langle x/\sigma\rangle=0\Rightarrow c_1=0\]

So \(\text{He}_2(x)=x^2-1\). A similar procedure gives \(\text{He}_3(x)=x^3-3x\). At this point, to speed things up a bit, one can recognize that the Hermite polynomials alternate in parity, \(\text{He}_n(-x)=(-1)^n\text{He}_n(x)\), so powers of \(x\) hop by \(2\). This motivates the more intelligent ansatz \(\text{He}_4(x)=x^4+c_2x^2+c_0\), automatically ensuring orthogonality with \(\text{He}_1(x)\) and \(\text{He}_3(x)\). Enforcing orthogonality with \(\text{He}_0(x)\) and \(\text{He}_2(x)\) (for the latter, orthogonality against \(x^2\) suffices once orthogonality against constants holds) gives the system of linear equations \(3+c_2+c_0=0\) and \(15+3c_2+c_0=0\), so \(c_0=3\) and \(c_2=-6\), which gives \(\text{He}_4(x)=x^4-6x^2+3\).
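This Gram–Schmidt-style construction is mechanical enough to automate with the moment rule \(\langle x^{2m}\rangle=(2m-1)!!\) for a standard normal; a SymPy sketch (`gauss_expect` is an illustrative helper name, not a library function):

```python
# Build He_0..He_4 by projecting x^n off the lower Hermite polynomials,
# with the Gaussian inner product evaluated via <x^{2m}> = (2m-1)!!.
import sympy as sp

x = sp.Symbol('x')

def gauss_expect(poly):
    """<poly(x)> for x ~ N(0, 1), using <x^n> = (n-1)!! for even n, else 0."""
    coeffs = sp.Poly(sp.expand(poly), x).all_coeffs()[::-1]  # ascending powers
    return sum(c * (sp.factorial2(n - 1) if n % 2 == 0 else 0)
               for n, c in enumerate(coeffs))

hermites = []
for n in range(5):
    p = x**n - sum(gauss_expect(x**n * h) / gauss_expect(h * h) * h
                   for h in hermites)
    hermites.append(sp.expand(p))
print(hermites)  # [1, x, x**2 - 1, x**3 - 3*x, x**4 - 6*x**2 + 3]
```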


Problem #\(5\): Consider generalizing the prior discussion of a univariate normal random variable \(x\) with variance \(\sigma^2\) and zero mean \(\langle x\rangle=0\) to a \(d\)-dimensional multivariate normal random vector \(\textbf x\in\textbf R^d\) with covariance matrix \(\sigma^2\) and zero mean \(\langle\textbf x\rangle=\textbf 0\). Write down the appropriate normalized probability density function \(\rho(\textbf x)\) for \(\textbf x\).

Solution #\(5\): In analogy with the \(d=1\) univariate normal distribution, one has:

\[\rho(\textbf x)=\frac{1}{\det(\sigma)(2\pi)^{d/2}}\exp\left(-\frac{1}{2}\textbf x^T\sigma^{-2}\textbf x\right)\]

(to prove this, diagonalize the covariance matrix \(\sigma^2\) of \(\textbf x\), which factorizes the density into \(d\) independent univariate normal distributions along the principal axes).
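As a sanity check, this formula can be compared against a reference implementation; a sketch assuming NumPy and SciPy, where `cov` plays the role of \(\sigma^2\) (so that \(\det\sigma=\sqrt{\det(\sigma^2)}\)):

```python
# Evaluate the density formula at a random point and compare with scipy's pdf.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(2)
d = 3
A = rng.normal(size=(d, d))
cov = A @ A.T + d * np.eye(d)   # arbitrary positive-definite covariance sigma^2
x = rng.normal(size=d)

density = np.exp(-0.5 * x @ np.linalg.inv(cov) @ x) / (
    np.sqrt(np.linalg.det(cov)) * (2 * np.pi)**(d / 2))
print(density, multivariate_normal(mean=np.zeros(d), cov=cov).pdf(x))
```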

Problem #\(6\): What are the moment and cumulant generating functions of a \(d\)-dimensional multivariate normal random vector \(\textbf x\in\textbf R^d\) with covariance matrix \(\sigma^2\) and zero mean \(\langle\textbf x\rangle=\textbf 0\)?

Solution #\(6\): Again in analogy with \(d=1\):

\[\langle e^{\boldsymbol{\kappa}\cdot\textbf x}\rangle=\exp\left(\frac{1}{2}\boldsymbol{\kappa}^T\sigma^2\boldsymbol{\kappa}\right)\]

\[\ln\langle e^{\boldsymbol{\kappa}\cdot\textbf x}\rangle=\frac{1}{2}\boldsymbol{\kappa}^T\sigma^2\boldsymbol{\kappa}\]

The phrase “moment generating function” is only really appropriate in \(d=1\); this is because in \(d\geq 2\), the generator \(\langle e^{\boldsymbol{\kappa}\cdot\textbf x}\rangle\) for the random vector \(\textbf x=(x_1,x_2,…,x_d)\) generates more than just moments along a given axis like \(\langle x_1^2\rangle, \langle x_2^4\rangle\) but also correlators such as \(\langle x_1x_2^3\rangle,\langle x_1^2x_2^2\rangle\), etc. which obviously didn’t exist in \(d=1\). Similar to the univariate case, the \(\textbf Z_2\) symmetry \(\textbf x\to-\textbf x\) of the multivariate generator means that only correlators of even total degree survive, so for instance \(\langle x_1x_2x_3\rangle=\langle x_1^2x_2\rangle=0\). To compute such even-degree correlators, the quickest way is typically to just compute the relevant term in the Maclaurin expansion of the generator:

\[\exp\left(\frac{1}{2}\boldsymbol{\kappa}^T\sigma^2\boldsymbol{\kappa}\right)=1+\frac{1}{2}\boldsymbol{\kappa}^T\sigma^2\boldsymbol{\kappa}+\frac{1}{8}\left(\boldsymbol{\kappa}^T\sigma^2\boldsymbol{\kappa}\right)^2+…\]
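At fourth order this expansion reproduces Wick’s theorem \(\langle x_ix_jx_kx_l\rangle=\sigma^2_{ij}\sigma^2_{kl}+\sigma^2_{ik}\sigma^2_{jl}+\sigma^2_{il}\sigma^2_{jk}\) (where \(\sigma^2_{ij}\) denotes an entry of the covariance matrix), which a quick Monte Carlo sketch can confirm, here for \(\langle x_1x_2^3\rangle\) with an arbitrary test covariance (assuming NumPy):

```python
# Check <x_i x_j x_k x_l> = C_ij C_kl + C_ik C_jl + C_il C_jk by sampling.
import numpy as np

rng = np.random.default_rng(3)
C = np.array([[2.0, 0.6, 0.3],
              [0.6, 1.5, 0.4],
              [0.3, 0.4, 1.0]])
x = rng.multivariate_normal(np.zeros(3), C, size=10**6)

i, j, k, l = 0, 1, 1, 1   # the correlator <x_1 x_2^3>, with 0-indexed components
mc = np.mean(x[:, i] * x[:, j] * x[:, k] * x[:, l])
wick = C[i, j]*C[k, l] + C[i, k]*C[j, l] + C[i, l]*C[j, k]
print(mc, wick)   # both ~ 3 * C_12 * C_22 = 2.7
```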

Problem #\(7\): Explain why, for an arbitrary analytic function \(f\) of an arbitrary (i.e. not necessarily normal) random vector \(\textbf x\), the expectation is:

\[\langle f(\textbf x)\rangle=f\left(\frac{\partial}{\partial\boldsymbol{\kappa}}\right)\langle e^{\boldsymbol{\kappa}\cdot\textbf x}\rangle\biggr|_{\boldsymbol{\kappa}=\textbf 0}\]
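A quick SymPy illustration of this identity in \(d=1\): for \(f(x)=x^4\), the operator \(f(\partial/\partial\kappa)\) is just four derivatives with respect to \(\kappa\), and applying them to the normal moment generating function indeed returns \(\langle x^4\rangle=3\sigma^4\).

```python
# f(x) = x**4, so f(d/dkappa) means differentiating the MGF four times at kappa = 0.
import sympy as sp

kappa, sigma = sp.symbols('kappa sigma', positive=True)
mgf = sp.exp(kappa**2 * sigma**2 / 2)
print(sp.diff(mgf, kappa, 4).subs(kappa, 0))  # 3*sigma**4
```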
