Gaussian Fields

What they are and how to generate them simply.

Introduction

In Cosmology, we are often interested in understanding the spatial correlations amongst fluctuations of various kinds.

The most well-known example is that of temperature fluctuations in the Cosmic Microwave Background (CMB). The light which travels to us from the primordial universe is very nearly, but not quite, homogeneous. The important deviations from homogeneity are captured by the statistics of the fluctuations about this constant background. For instance, one could choose a fixed angle and average the product of fluctuations separated by this angle over the entire sky. A non-zero result demonstrates correlations between spatially separated regions. Repeating this experiment for various angles reveals the full spatial dependence of such correlations which encodes important information about the early universe.

One can imagine performing the above experiment on the fluctuation map below, which clearly shows fluctuations which are correlated on different spatial scales.

Realization of a scale-invariant Gaussian random field
A realization of a two-dimensional, Gaussian random field with a scale-invariant P(k)\propto k^{-2} power spectrum. The statistics of the realization are fully encoded in the two-point function \langle \phi({\bf x})\phi({\bf y})\rangle, due to its Gaussian nature.

Such fluctuations are described by random fields, which are the generalization of random variables to quantities which take on different values at different points in space (and/or time). Denoting such fields by \phi({\bf x}), their statistics are characterized by correlation functions of the form \langle \phi({\bf x})\phi({\bf y})\rangle (computed as in the CMB thought experiment above), where the angled brackets indicate an expectation value. While we can also consider higher-point correlation functions involving additional powers of the field \phi({\bf x}), there exists a special Gaussian limit in which all higher-order correlators are determined by the two-point function and we will primarily focus on this regime.

Because of the homogeneous and isotropic nature of spatial slices, it’s convenient to discuss the Fourier transform of the correlator which takes on the form \langle\phi_{{\bf k}}\phi_{-{\bf k}}\rangle'=P(k) where the power spectrum, P(k) , only depends on the magnitude of {\bf k}, as indicated (see below for more on the notation). P(k) thus fully determines the properties of homogeneous, isotropic, Gaussian random fields and hence it plays a central role.

However, if handed a P(k), it’s not very easy to visualize what a corresponding spatial map of fluctuations would look like, making it somewhat hard to gain intuition for what a given power spectrum means. It would be nice to know how to generate fluctuation maps of the sort in the above figure.

In this post we answer the following question: for a given P(k) which describes a Gaussian random field, how do we generate a simple fluctuation map which has the correlations described by this power spectrum?

We start by reviewing some of the theory behind Gaussian random fields, which I found difficult to find in any one place in the literature, and then sketch the practical construction of our maps using Mathematica. The full Mathematica code (as well as a numpy implementation in a Jupyter notebook) can be found on my GitHub Page. The methods used for constructing the realization are by no means optimal; our goal is just to understand a quick and dirty method of generating realizations.

Theory Background

We start by reviewing the necessary theory background.

Correlators

First, we set notation and review basic facts about equal time correlators of scalar quantities. We focus only on two-point functions.

A homogeneous, isotropic position-space two-point correlator only depends on the distance between the points of interest: \langle \phi({\bf x})\phi({\bf y})\rangle=A(|{\bf x}-{\bf y}|) . Due to this fact, the Fourier transformed correlator is proportional to a delta function and only depends on the magnitude of the momentum (i.e. wavevector):

\langle \phi_{{\bf k}}\phi_{{\bf p}}\rangle\equiv\int{\rm d}^{d}x{\rm d}^{d}y\, e^{-i{\bf k}\cdot{\bf x}-i{\bf p}\cdot{\bf y}}\langle \phi({\bf x})\phi({\bf y})\rangle=P(k)\tilde{\delta}^{d}({\bf k}+{\bf p})\ ,

where we use \tilde{\delta}^{d}({\bf k}+{\bf p})\equiv (2\pi)^{d}\delta^{d}({\bf k}+{\bf p}) and later \tilde{{\bf k}}\equiv {\bf k}/(2\pi) to minimize the number of explicit 2\pi factors appearing. We also put primes on correlators which have their associated delta-functions and 2\pi factors removed, as in \langle \phi_{{\bf k}}\phi_{-{\bf k}}\rangle'=P(k).

Two functional forms of the power spectrum are particularly important: white noise and scale invariant power spectra.

White Noise: P(k)=C with C constant. This corresponds to a position-space correlator of the form \langle \phi({\bf x})\phi({\bf y})\rangle= C\delta^{d}({\bf x}-{\bf y}) , i.e. there are no correlations between fluctuations at separated points if the power spectrum is described by white noise.

Scale Invariant: P(k)\propto k^{-d} in d spatial dimensions. A scale-invariant power spectrum corresponds to fluctuations which have the same correlations no matter how separated the two points are: \langle \phi(\lambda{\bf x})\phi(\lambda{\bf y})\rangle=\langle \phi({\bf x})\phi({\bf y})\rangle for all \lambda. (Inflation predicts a nearly-scale-invariant power spectrum, P(k)\propto k^{-d+n_{s}-1}, where the spectral tilt n_{s}, defined in this manner for historical reasons, is generically predicted to be slightly red, meaning n_{s}-1 is small and negative, so that P(k) grows with decreasing k. A red spectrum corresponds to larger fluctuations at larger physical distance scales. Planck 2018's bounds on the scalar tilt are n_{s}= 0.9649 \pm 0.0042.)

Probability Distributions: Functions and Functionals

We now review how probability distributions work for continuous, Gaussian random fields. We start with a brief review of the analogous theory for continuous random variables.

Probability Distribution Functions (PDF): Given some continuous random variable q, the probability that q lies within an infinitesimal interval {\rm d} q about a given value is \mathcal{P}(q){\rm d} q, where \mathcal{P}(q) is the PDF. The PDF is normalized as \int\mathcal{P}(q){\rm d} q=1 and the expectation value of q^{n} is given by

\langle q^n\rangle= \int{\rm d} q\, \mathcal{P}(q)q^n
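
For example, with the unit Gaussian PDF \mathcal{P}(q)=e^{-q^{2}/2}/\sqrt{2\pi} the formula gives \langle q^{2}\rangle=1, which can be verified directly in Mathematica (a small illustrative aside, not part of the construction below):

Integrate[q^2 Exp[-q^2/2]/Sqrt[2 Pi], {q, -Infinity, Infinity}]  (* returns 1 *)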

The Gaussian PDF is of primary importance. For n random variables, q_i, the Gaussian PDF takes on the form:

\mathcal{P}^{\rm Gauss.}(q_1,\ldots,q_n)=\sqrt{\frac{\det (A)}{(2\pi)^n}}\exp\left[-\frac{1}{2}q_iA_{ij}q_j\right]\ ,

where A_{ij} is some positive definite matrix (and sums over repeated indices are implied). The corresponding Gaussian two-point statistics are then given by:

\langle q_i q_j\rangle =\left(\prod_{k=1}^{n}\int {\rm d} q_k\right)\mathcal{P}^{\rm Gauss.}(\vec{q})q_iq_j=A^{-1}_{ij}

where A^{-1}_{ij} is the inverse of the matrix A_{ij}. All higher-order correlators involving more factors of q_{i} can be written in terms of A_{ij}, which is a special property of the Gaussian distribution.
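
This relation is easy to check numerically. Below is a minimal sketch (an aside, with an arbitrarily chosen positive definite matrix A): we draw samples from a multivariate Gaussian whose covariance is A^{-1} and confirm that the sample covariance approaches A^{-1}.

A = {{2., 0.5}, {0.5, 1.}};  (* an arbitrary positive definite example *)
samples = RandomVariate[MultinormalDistribution[{0, 0}, Inverse[A]], 10^5];
Covariance[samples] - Inverse[A]  (* entries shrink as the number of samples grows *)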

Probability Distribution Functionals are perhaps less familiar, but they are exactly the same idea as PDFs, simply applied to the case of random fields such as \phi({\bf x}), rather than random variables such as q_i. We will only focus on the case of Gaussian functionals whose form and properties are defined in exact analogy to the PDF case. Starting with a position space field \phi({\bf x}), the Gaussian functional \mathcal{P}[\phi({\bf x})] is written as

\mathcal{P}[\phi({\bf x})]\propto \exp\left[-\frac{1}{2}\int{\rm d}^dy\,{\rm d}^dz\, \phi({\bf y})A({\bf y},{\bf z})\phi({\bf z}) \right]

where we have omitted the normalization factor. Correlators are then computed by a path integral over all field values \phi({\bf x}). The outcome of the integral is determined in analogy to the PDF case. For instance, the two-point correlator is:

\langle \phi({\bf y})\phi({\bf z})\rangle\equiv \int\mathcal{D}\phi\, \mathcal{P}[\phi]\,\phi({\bf y})\phi({\bf z})=A^{-1}({\bf y},{\bf z})\ ,

where A^{-1}({\bf y},{\bf z}) obeys

\int{\rm d}^dy\, A({\bf x},{\bf y})A^{-1}({\bf y},{\bf z})=\delta^{d}({\bf x}-{\bf z})\ .

White noise corresponds to the simple case where A({\bf x},{\bf z})=C\delta^{d}({\bf x}-{\bf z}) and so A^{-1}({\bf x},{\bf z})=\frac{1}{C}\delta^{d}({\bf x}-{\bf z}), for instance. Assuming spatial homogeneity and isotropy, it is straightforward to Fourier transform the above formulas in order to derive their momentum-space analogues:

\mathcal{P}[\phi_{{\bf k}}]\propto \exp\left[-\frac{1}{2}\int{\rm d}^{d}\tilde{q}{\rm d}^{d}\tilde{p}\, \phi_{{\bf q}}A(q)\tilde{\delta}^{d}({\bf q}+{\bf p})\phi_{{\bf p}}\right]

and

\langle\phi_{{\bf k}}\phi_{{\bf k}'}\rangle=\frac{\tilde{\delta}^{d}({\bf k}+{\bf k}')}{A(k)}\ .

Clearly, the function A(k) above is simply the inverse of the power spectrum: P(k)=A^{-1}(k). By adding terms cubic and higher order in \phi_{\bf k} to the exponential in \mathcal{P}[\phi_{\bf k}], the distribution is made non-Gaussian. The search for non-Gaussian signatures in cosmological statistics is part of cutting-edge experimental efforts, but further discussion of these issues falls outside of the scope of this post. (The Astro2020 Science White Paper on Primordial Non-Gaussianity, which I was involved in, gives a research-level argument for the important role that non-Gaussianity plays in understanding the early universe. A pedagogical introduction to non-Gaussianity in inflation can alternatively be found in Eugene Lim's excellent notes on the subject.)

Implementation

We now demonstrate how to generate a realization, given P(k). We first present the logic and outline the steps in the process. Then we show how to practically implement these ideas in Mathematica.

The Logic and Steps

The logic is simple: if we’re given a white noise field \varphi_{\bf k} which has \langle \varphi_{{\bf k}}\varphi_{-{\bf k}}\rangle'=1 , then we can create a field with the desired power spectrum simply by defining \phi_{\bf k}\equiv P^{1/2}(k)\varphi_{\bf k}. The new field has the desired statistics, by construction, since

\langle \phi_{\bf k} \phi_{-{\bf k}}\rangle'= P(k)\langle \varphi_{\bf k} \varphi_{-{\bf k}}\rangle'=P(k)\ .

Spelled out in more detail, we will perform the following steps:

  1. Consider a white noise field of unit amplitude: \varphi_{{\bf k}} with \langle \varphi_{{\bf k}}\varphi_{-{\bf k}}\rangle'=1.
  2. Generate a position-space realization of the white noise, denoted by R_{\rm white}({\bf x}). That is, R_{\rm white}({\bf x}) is a particular map showing the values of \varphi({\bf x}) at various positions {\bf x} and for which \langle\varphi({\bf x})\varphi({\bf y})\rangle=\delta^{d}({\bf x}-{\bf y}).
  3. Fourier transform the realization: R_{\rm white}({\bf x})\longrightarrow R_{\rm white}({\bf k}) .
  4. Multiply R_{\rm white}({\bf k}) by the square root of the power spectrum to create R_P({\bf k})=P^{1/2}(k)R_{\rm white}({\bf k}).
  5. Fourier transform R_P({\bf k}) back to position-space to get the desired realization: R_P({\bf x})=\int{\rm d}^d \tilde{k}\, e^{i{\bf k}\cdot{\bf x}} R_P({\bf k}).

Therefore, given a realization of the white noise field \varphi({\bf x}), we just have to multiply its Fourier-space realization by P^{1/2}(k) and inverse Fourier transform to get a position-space realization of \phi({\bf x}).

The key fact is that generating a white noise realization is easy: since spatially separated points are uncorrelated, white noise is generated by independently drawing each pixel value from a normal distribution.

The above presentation is slightly idealized, as we’ll have to make the realization on a discrete grid; we won’t be able to work with continuous variables. Practical considerations which arise from working on a lattice are covered in the following sections.

From Continuous to Discrete

For concreteness, we focus on creating two-dimensional realizations on N\times N grids, with N an even number. The generalization to higher dimensions is straightforward. In passing to a lattice, our continuous fields \phi({\bf x}) and \phi_{\bf k} are replaced by the discrete quantities \phi^{{\bf x}}_{ab} and \phi^{{\bf k}}_{ab}, respectively, where a,b\in\{0,\ldots,N-1\}. \phi^{{\bf x}}_{ab} is the value of the field at the point (a,b) on the grid. The continuous Fourier transform which related \phi({\bf x}) and \phi_{{\bf k}} is replaced by a discrete one:

\phi^{{\bf k}}_{ab}=\sum_{c,d=0}^{N-1}\exp\left(-ix_ck_a-ix_dk_b\right)\phi^{{\bf x}}_{cd}\ , \quad \phi^{{\bf x}}_{ab}=\frac{1}{N^2}\sum_{c,d=0}^{N-1}\exp\left(ix_ck_a+ix_dk_b\right)\phi^{{\bf k}}_{cd}

where x_a\equiv a and k_a\equiv \frac{2\pi a}{N}. Given real \phi^{{\bf x}}_{ab} and even N, \phi^{{\bf k}}_{ab} obeys the conjugate-symmetry (reality) condition \phi^{*{\bf k}}_{ab}=\phi^{{\bf k}}_{(-a)(-b)}=\phi^{{\bf k}}_{(N-a)(N-b)}, where the momentum-space indices are understood mod N.

The above further implies the following important property: \phi^{*{\bf k}}_{a(N/2+b)}=\phi^{{\bf k}}_{-a(N/2-b)}, and similarly with the other index.
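
This property can be checked numerically. Below is a small illustrative check (the variable names are mine, not part of the construction that follows); note that Mathematica's built-in Fourier uses a slightly different sign and normalization convention than the transform written above, and its arrays are 1-indexed, but the conjugate-symmetry property is unchanged:

Ntest = 8;
testField = RandomVariate[NormalDistribution[], {Ntest, Ntest}];
testFourier = Fourier[testField];
Max[Abs[Table[
   testFourier[[i, j]] - Conjugate[testFourier[[Mod[1 - i, Ntest] + 1, Mod[1 - j, Ntest] + 1]]],
   {i, Ntest}, {j, Ntest}]]]  (* vanishes up to machine precision *)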

Building a Realization

Now we walk through the steps involved in actually building a realization in Mathematica.

White Noise

First, we construct a grid of white noise. When passing to the grid, the unit white noise probability distribution functional becomes the following ordinary PDF:

\mathcal{P}[\varphi({\bf x})]\longrightarrow \mathcal{P}[\varphi^{{\bf x}}_{ab}]=\prod_{c,d=0}^{N-1}\frac{\exp\left[-\frac{1}{2}\left(\varphi^{{\bf x}}_{cd}\right)^2\right]}{\sqrt{2\pi}}\ .

By drawing pixel values from a normal distribution, we can generate a white-noise realization as in Fig. 1 below.

Realization of white noise
Fig. 1: A 1024 x 1024 realization of white noise generated in Mathematica.

In Mathematica, the white noise array can be generated via:

WhiteNoise = RandomVariate[NormalDistribution[], {Nsize, Nsize}]

where Nsize is the size of the grid. WhiteNoise is our realization R_{\rm white}({\bf x}) and building R_{\rm white}({\bf k}) is very simple: we just pass WhiteNoise through the built-in Fourier function:

WhiteNoiseFourier = Fourier[WhiteNoise]

Constructing the Fourier Realization (and Indexing Conventions)

The Fourier-space realization then needs to be multiplied by the square root of the power spectrum. Some care must be taken to ensure that the resulting field \phi^{{\bf k}}_{ab} obeys the reality condition derived in the previous section, \phi^{*{\bf k}}_{a(N/2+b)}=\phi^{{\bf k}}_{-a(N/2-b)}.

If we did the naive construction and took the white noise realization values \varphi^{{\bf k}}_{ab} and simply multiplied by P^{1/2}(k) with k=\frac{2\pi}{ N}\sqrt{a^2+b^2} to build \phi^{{\bf k}}_{ab}=P^{1/2}(k)\varphi^{{\bf k}}_{ab}, then we'd find \phi^{*{\bf k}}_{a(N/2+b)}\neq\phi^{{\bf k}}_{-a(N/2-b)}, due to the power spectrum factor. Such a \phi^{{\bf k}}_{ab} would correspond to imaginary \phi^{{\bf x}}_{ab}. This constraint can be satisfied by effectively re-indexing the \phi^{{\bf k}}_{ab} such that its momentum-space indices run over the values \{0,\ldots, N/2,-N/2+1,\ldots,-1\}, rather than \{0,\ldots, N-1\}, with all factors of {\bf k} evaluated at the corresponding index value. More explicitly, we write the full tensor as

\phi^{{\bf k}}_{ab}= \begin{cases} P^{1/2}(k)\varphi^{{\bf k}}_{ab} & a,b\le N/2\\ P^{1/2}(k)\varphi^{{\bf k}}_{a(b-N)} & a\le N/2, b>N/2\\ P^{1/2}(k)\varphi^{{\bf k}}_{(a-N)b} & a> N/2, b\le N/2\\ P^{1/2}(k)\varphi^{{\bf k}}_{(a-N)(b-N)} & a,b> N/2 \end{cases}\ .

where in the second line above, the k in P^{1/2}(k)\varphi^{{\bf k}}_{a(b-N)} is evaluated at k=\frac{2\pi}{N}\sqrt{a^2+(b-N)^2}, and similarly for the other lines. The factors of N can also be removed from the indices of the \varphi^{{\bf k}}'s, using the previously mentioned properties of \varphi^{{\bf k}}.

The above index conventions for discrete Fourier transforms are standard. For instance, Mathematica and numpy both use this convention.

This shuffling of indices has no effect on the transformation back to position-space. For instance, in the limiting case where P(k)\longrightarrow 1 the inverse transformations of both the re-indexed \phi^{{\bf k}}_{ab} and the original \varphi^{{\bf k}}_{ab} produce the same white noise realization \varphi^{{\bf x}}_{ab}.
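
As a quick numerical sanity check of this limiting case (an aside, reusing the WhiteNoise, WhiteNoiseFourier, and Nsize defined above), multiplying the Fourier-space white noise by an array of ones and transforming back reproduces the original realization up to machine precision:

unitSpectrumArray = ConstantArray[1., {Nsize, Nsize}];
Max[Abs[InverseFourier[unitSpectrumArray * WhiteNoiseFourier] - WhiteNoise]]  (* of order 10^-16 *)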

The position-space field is then built via

\phi^{{\bf x}}_{ab}=\frac{1}{N^2}\sum_{c,d=0}^{N-1}\exp\left(ix_ck_a+ix_dk_b\right)\phi^{{\bf k}}_{cd}\ .

This field realizes the desired spectrum.

Carrying out the above in Mathematica, we first build the vector of momenta with the indexing discussed above, k_a=\frac{2\pi}{N}(0,\ldots, N/2,-N/2+1,\ldots, -1), via:

kVector = N[(2*Pi/Nsize) * Join[Range[0, Nsize/2], Range[-Nsize/2+1, -1]]]

Next, we build the power spectrum function. Take it to be of the power law form: P(k)\propto k^{-n}. For n>0, P(k) is divergent at k=0 and we handle this by regularizing the spectrum to zero at this one point:

PowerLawPowerSpectrum[n_, k_] := If[k==0 && n>0, 0, 1/Abs[k^n]]

We want to evaluate P^{1/2}(k) at all points (k_a,k_b) with k=\sqrt{k_a^2+k_b^2}. This array is constructed using the Outer command:

sqrtPowerSpectrumArray = Outer[N[Sqrt[PowerLawPowerSpectrum[n, Norm[{##}]]]]&, kVector, kVector]

where n should be replaced by the desired number determining the power law.
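
Equivalently, the same array can be built with an explicit Table, which some readers may find more transparent than Outer (a variant of mine, not the form used in the notebook):

sqrtPowerSpectrumArrayAlt = Table[
   Sqrt[PowerLawPowerSpectrum[n, Sqrt[kVector[[i]]^2 + kVector[[j]]^2]]],
   {i, Nsize}, {j, Nsize}];  (* identical to sqrtPowerSpectrumArray *)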

Finally, we multiply sqrtPowerSpectrumArray by WhiteNoiseFourier element-wise to get \phi^{{\bf k}}_{ab}:

PowerSpectrumRealizationFourier = sqrtPowerSpectrumArray * WhiteNoiseFourier

Note that the above multiplication is carried out with the Times function, not Dot.

Plotting the Realization

We can transform back to position space, \phi^{{\bf k}}_{ab}\longrightarrow \phi^{{\bf x}}_{ab}, and plot the realization via

MatrixPlot[Re[InverseFourier[PowerSpectrumRealizationFourier]]]

Above, the real part of the transform was taken using Re because numerical errors induce tiny imaginary components in the final calculation of \phi^{{\bf x}}_{ab}. The results of the realization for various power spectra can be seen in Fig. 2.

Five realizations of different power-law power spectra
Fig. 2: Realizations for P(k)=k^{-n} with n\in\{0,1,2,3,4\} from left to right. The n=0 realization is white noise while the n=2 realization is scale invariant. The length scale over which fluctuations are correlated increases as n increases.
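
As a rough consistency check (not part of the original walkthrough), one can estimate the power spectrum of a realization and compare it to the input P(k). With Mathematica's default Fourier normalization, each element of WhiteNoiseFourier has unit expected squared magnitude, so averaging |\phi^{{\bf k}}_{ab}|^2 over shells of fixed |{\bf k}| should trace P(k). A minimal sketch, reusing the kVector and PowerSpectrumRealizationFourier arrays built above:

kMagnitudes = Flatten[Outer[Norm[{##}] &, kVector, kVector]];
modePowers = Flatten[Abs[PowerSpectrumRealizationFourier]^2];
(* group modes into shells of width ~0.1 in |k| and average the power within each shell *)
estimatedPk = Sort[Mean /@ GatherBy[Transpose[{kMagnitudes, modePowers}], Round[First[#], 0.1] &]];
ListLogLogPlot[Select[estimatedPk, First[#] > 0.05 &], AxesLabel -> {"k", "estimated P(k)"}]  (* should follow the input power law; the near-zero shell is dropped since the zero mode was regularized to 0 *)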

An Example in d=3

Finally, we present the results of a higher dimensional realization: Fig. 3 is a GIF which shows successive slices of a d=3 realization for a power spectrum P(k)=k^{-4}.

Gif stepping through cross-sections of a 3D power spectrum realization
Fig. 3: Slices of a d=3, P(k)\propto k^{-4} realization. A scale invariant power spectrum P(k)\propto k^{-3} is more physically relevant, e.g. for inflation, but leads to a less interesting GIF.
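
For completeness, here is a minimal sketch of the analogous d=3 construction (my own condensed version, assuming the PowerLawPowerSpectrum function defined above; the grid size and the exponent n3 = 4, chosen to match the figure, are free choices, and ListAnimate is just one way to step through the slices):

Nsize3 = 128; n3 = 4;
kVector3 = N[(2*Pi/Nsize3) * Join[Range[0, Nsize3/2], Range[-Nsize3/2 + 1, -1]]];
sqrtPowerSpectrumArray3 = Outer[Sqrt[PowerLawPowerSpectrum[n3, Norm[{##}]]] &, kVector3, kVector3, kVector3];
WhiteNoise3 = RandomVariate[NormalDistribution[], {Nsize3, Nsize3, Nsize3}];
Realization3 = Re[InverseFourier[sqrtPowerSpectrumArray3 * Fourier[WhiteNoise3]]];
ListAnimate[Table[MatrixPlot[Realization3[[i]]], {i, 1, Nsize3, 8}]]  (* step through 2D slices, as in the GIF *)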

Acknowledgements

Thank you to the Distill team for making their article template publicly available.

Additional Links