Don’t Use Gaussians in Stochastic Trace Estimation

Suppose we are interested in estimating the trace \tr(A) = \sum_{i=1}^n A_{ii} of an n\times n matrix A that can only be accessed through matrix–vector products Ax_1,\ldots,Ax_m. The classical method for this purpose is the Girard–Hutchinson estimator

    \[\hat{\tr} = \frac{1}{m} \left( x_1^\top Ax_1 + \cdots + x_m^\top Ax_m \right),\]

where the vectors x_1,\ldots,x_m are independent, identically distributed (iid) random vectors satisfying the isotropy condition

    \[\expect[x_ix_i^\top] = I.\]

Examples of vectors satisfying this condition include standard Gaussian random vectors, random sign vectors (independent \pm 1 entries), and vectors drawn uniformly from the sphere of radius \sqrt{n}.
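
For concreteness, here is a minimal MATLAB sketch of the Girard–Hutchinson estimator with test vectors drawn uniformly from the sphere of radius \sqrt{n} (a sketch, not production code; the matrix A and the number of matvecs m are assumed given, and swapping in randn(n,1) or sign(randn(n,1)) test vectors gives the Gaussian and random sign variants):

function t = girard_hutchinson_sphere(A, m)
% Girard-Hutchinson trace estimate with test vectors uniform on the
% sphere of radius sqrt(n), which satisfies the isotropy condition.
n = size(A, 1);
t = 0;
for i = 1:m
    x = randn(n, 1);
    x = sqrt(n) * x / norm(x);   % uniform on the sphere of radius sqrt(n)
    t = t + x' * (A * x) / m;    % average of the quadratic forms
end
end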

Stochastic trace estimation has a number of applications: log-determinant computations in machine learning, partition function calculations in statistical physics, generalized cross validation for smoothing splines, and triangle counting in large networks. Several improvements to the basic Girard–Hutchinson estimator have been developed recently. I am partial to XTrace, an improved trace estimator that I developed with my collaborators.

This post is addressed at the question:

Which distribution should be used for the test vectors x_i for stochastic trace estimation?

Since the Girard–Hutchinson estimator is unbiased \expect[\hat{\tr}] = \tr(A), the variance of \hat{\tr} is equal to the mean-square error. Thus, the lowest variance trace estimate is the most accurate. In my previous post on trace estimation, I discussed formulas for the variance \Var(\hat{\tr}) of the Girard–Hutchinson estimator with different choices of test vectors. In that post, I stated the formulas for different choices of test vectors (Gaussian, random signs, sphere) and showed how those formulas could be proven.

In this post, I will take the opportunity to editorialize on which distribution to pick. The thesis of this post is as follows:

The sphere distribution is essentially always preferable to the Gaussian distribution for trace estimation.

To explain why, let’s focus on the case when A is real and symmetric.1The same principles hold in the general case, but the variance formulas are more delicate to state. See my previous post for the formulas. Let \lambda_1,\ldots,\lambda_n be the eigenvalues of A and define the eigenvalue mean

    \[\overline{\lambda} = \frac{\lambda_1 + \cdots + \lambda_n}{n}.\]

Then the variance of the Girard–Hutchinson estimator with Gaussian vectors x_i is

    \[\Var(\hat{\tr}_{\rm Gaussian}) = \frac{1}{m} \cdot 2 \sum_{i=1}^n \lambda_i^2.\]

For vectors x_i drawn from the sphere, we have

    \[\Var(\hat{\tr}_{\rm sphere}) = \frac{1}{m} \cdot \frac{n}{n+2} \cdot 2\sum_{i=1}^n (\lambda_i - \overline{\lambda})^2.\]

The sphere distribution improves on the Gaussian distribution in two ways. First, \Var(\hat{\tr}_{\rm sphere}) is smaller than \Var(\hat{\tr}_{\rm Gaussian}) by a factor of n/(n+2) < 1. This improvement is quite minor. Second, and more importantly, \Var(\hat{\tr}_{\rm Gaussian}) is proportional to the sum of A's squared eigenvalues, whereas \Var(\hat{\tr}_{\rm sphere}) is proportional to the sum of A's squared eigenvalues after having been shifted to be mean-zero!

The difference between Gaussian and sphere test vectors can be large. To see this, consider a 1000\times 1000 matrix A with eigenvalues uniformly distributed between 0.9 and 1.1 and a (Haar orthogonal) random matrix of eigenvectors. For simplicity, since the variance of all Girard–Hutchinson estimates is proportional to 1/m, we take m=1. Below, we show the variance of the Girard–Hutchinson estimator for different distributions of the test vector. We see that the sphere distribution leads to a trace estimate whose variance is 300× smaller than with the Gaussian distribution. For this example, the sphere and random sign distributions are similar.

Distribution  | Variance (divided by \tr(A)^2)
Gaussian      | 2.0\times 10^{-3}
Sphere        | 6.7\times 10^{-6}
Random signs  | 6.7\times 10^{-6}
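
The following MATLAB snippet sketches how this experiment can be set up; it is an illustrative reconstruction rather than the exact code behind the table, and it estimates the variance by Monte Carlo over ntrials independent single-sample estimates:

n = 1000; ntrials = 1e4;
lambda = linspace(0.9, 1.1, n)';                  % eigenvalues between 0.9 and 1.1
[Q, R] = qr(randn(n)); Q = Q*diag(sign(diag(R))); % Haar random orthogonal eigenvectors
A = Q * diag(lambda) * Q'; A = (A + A')/2;        % symmetrize to remove roundoff asymmetry
tA = trace(A);

ests = zeros(ntrials, 1);
for t = 1:ntrials
    x = randn(n, 1); x = sqrt(n) * x / norm(x);   % sphere test vector, m = 1
    ests(t) = x' * (A * x);
end
fprintf('Sphere variance / tr(A)^2: %.2e\n', var(ests) / tA^2);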

Which Distribution Should You Use: Signs vs. Sphere

The main point of this post is to argue against using the Gaussian distribution. But which distribution should you use: Random signs? The sphere distribution? The answer, for most applications, is one of those two, but exactly which depends on the properties of the matrix A.

The variance of the Girard–Hutchinson estimator with random sign test vectors is

    \[\Var(\hat{\tr}_{\rm signs}) = \frac{1}{m} \cdot 2 \sum_{i\ne j} A_{ij}^2.\]

Thus, \Var(\hat{\tr}_{\rm signs}) depends on the size of the off-diagonal entries of A; \Var(\hat{\tr}_{\rm signs}) does not depend on the diagonal of A at all! For matrices with small off-diagonal entries (such as diagonally dominant matrices), the random signs distribution is often the best.

However, for other problems, the sphere distribution is preferable to random signs. The sphere distribution is rotation-invariant, so \Var(\hat{\tr}_{\rm sphere}) is independent of the eigenvectors of the (symmetric) matrix A, depending only on A's eigenvalues. By contrast, the variance of the Girard–Hutchinson estimator with the random signs distribution can depend significantly on the eigenvectors of the matrix A. For a given set of eigenvalues and the worst-case choice of eigenvectors, \Var(\hat{\tr}_{\rm sphere}) will always be smaller than \Var(\hat{\tr}_{\rm signs}). In fact, the sphere distribution minimizes the variance of Girard–Hutchinson trace estimation for a matrix with fixed eigenvalues and worst-case eigenvectors; see this section of my previous post for details.

In my experience, random signs and the sphere distribution are both perfectly adequate for trace estimation and either is a sensible default if you’re developing software. The Gaussian distribution on the other hand… don’t use it unless you have a good reason to.

How Good Can Stochastic Trace Estimates Be?

I am excited to share that our paper XTrace: Making the most of every sample in stochastic trace estimation has been published in the SIAM Journal on Matrix Analysis and Applications. (See also our paper on arXiv.)

Spurred by this exciting news, I wanted to take the opportunity to share one of my favorite results in randomized numerical linear algebra: a “speed limit” result of Meyer, Musco, Musco, and Woodruff that establishes a fundamental limitation on how accurate any trace estimation algorithm can be.

Let’s back up. Given an unknown square matrix A, our goal is to compute (or estimate) the trace of A, defined to be the sum of its diagonal entries:

    \[\tr(A) \coloneqq \sum_{i=1}^n A_{ii}.\]

The catch? We assume that we can only access the matrix A through matrix–vector products (affectionately known as “matvecs”): Given any vector x, we have access to Ax. Our goal is to form an estimate \hat{\tr} that is as accurate as possible while using as few matvecs as we can get away with.

To simplify things, let’s assume the matrix A is symmetric and positive (semi)definite. The classical algorithm for trace estimation is due to Girard and Hutchinson, producing a probabilistic estimate \hat{\tr} with a small average (relative) error:

    \[\expect\left[\frac{|\hat{\tr}-\tr(A)|}{\tr(A)}\right] \le \varepsilon \quad \text{using } m= \frac{\rm const}{\varepsilon^2} \text{ matvecs}.\]

If one wants high accuracy, this algorithm is expensive. To achieve just a 1% error (\varepsilon=0.01) requires roughly m=10,\!000 matvecs!

This state of affairs was greatly improved by Meyer, Musco, Musco, and Woodruff. Building upon previous work, they proposed the Hutch++ algorithm and proved it outputs an estimate \hat{\tr} satisfying the following bound:

(1)   \[\expect\left[\frac{|\hat{\tr}-\tr(A)|}{\tr(A)}\right] \le \varepsilon \quad \text{using } m= \frac{\rm const}{\varepsilon} \text{ matvecs}.\]

Now, we only require roughly m=100 matvecs to achieve 1% error! Our algorithm, XTrace, satisfies the same error guarantee (1) as Hutch++. On certain problems, XTrace can be quite a bit more accurate than Hutch++.
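
To give a flavor of how Hutch++ works, here is a minimal MATLAB sketch (a simplified version with Gaussian test vectors, A stored as an explicit symmetric psd matrix, and m assumed to be a multiple of 3; see the Hutch++ and XTrace papers for the real algorithms):

function t = hutchpp(A, m)
% Hutch++: exact trace on a captured subspace plus Girard-Hutchinson on
% the deflated remainder. Uses m matrix-vector products in total.
n = size(A, 1);
S = randn(n, m/3); G = randn(n, m/3);
[Q, ~] = qr(A*S, 0);               % m/3 matvecs: randomized range finder
Gd = G - Q*(Q'*G);                 % deflate range(Q) from the test vectors
t = trace(Q'*(A*Q)) ...            % m/3 matvecs: exact trace on range(Q)
    + (3/m) * trace(Gd'*(A*Gd));   % m/3 matvecs: estimate the remainder
end

The first two thirds of the matvecs capture the largest eigenvalues of A exactly, so the Girard–Hutchinson estimator is only applied to the (much smaller) remainder; this deflation is the source of the improved 1/\varepsilon scaling.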

The MMMW Trace Estimation “Speed Limit”

Given the dramatic improvement of Hutch++ and XTrace over Girard–Hutchinson, it is natural to hope: Is there an algorithm that does even better than Hutch++ and XTrace? For instance, is there an algorithm satisfying an even slightly better error bound of the form

    \[\expect\left[\frac{|\hat{\tr}-\tr(A)|}{\tr(A)}\right] \le \varepsilon \quad \text{using } m= \frac{\rm const}{\varepsilon^{0.999}} \text{ matvecs}?\]

Unfortunately not. Hutch++ and XTrace are essentially as good as it gets.

Let’s add some fine print. Consider an algorithm for the trace estimation problem. Whenever the algorithm wants, it can present a vector x_i and receive back Ax_i. The algorithm is allowed to be adaptive: It can use the matvecs Ax_1,\ldots,Ax_s it has already collected to decide which vector x_{s+1} to present next. We measure the cost of the algorithm in terms of the number of matvecs alone, and the algorithm knows nothing about the psd matrix A other than what it learns from matvecs.

One final stipulation:

Simple entries assumption. We assume that the entries of the vectors x_i presented by the algorithm are real numbers between -1 and 1 with up to b digits after the decimal place.

To get a feel for this simple entries assumption, suppose we set b=2. Then (-0.92,0.17) would be an allowed input vector, but (0.232,-0.125) would not be (too many digits after the decimal place). Similarly, (18.3,2.4) would not be valid because its entries exceed 1. The simple entries assumption is reasonable as we typically represent numbers on digital computers by storing a fixed number of digits of accuracy.1We typically represent numbers on digital computers by floating point numbers, which essentially represent numbers using scientific notation like 1.3278123 \times 10^{27}. For this analysis of trace estimation, we use fixed point numbers like 0.23218 (no powers of ten allowed)!

With all these stipulations, we are ready to state the “speed limit” for trace estimation proved by Meyer, Musco, Musco, and Woodruff:

Informal theorem (Meyer, Musco, Musco, Woodruff). Under the assumptions above, there is no trace estimation algorithm producing an estimate \hat{\tr} satisfying

    \[\expect\left[\frac{|\hat{\tr}-\tr(A)|}{\tr(A)}\right] \le \varepsilon \quad \text{using } m= \frac{\rm const}{\varepsilon^{0.999}} \text{ matvecs}.\]

We will see a slightly sharper version of the theorem below, but this statement captures the essence of the result.

Communication Complexity

To prove the MMMW theorem, we have to take a journey to the beautiful subject of communication complexity. The story is this. Alice and Bob are interested in solving a computational problem together. Alice has her input x and Bob has his input y, and they are interested in computing a function f(x,y) of both their inputs.

Unfortunately for the two of them, Alice and Bob are separated by a great distance, and can only communicate by sending single bits (0 or 1) of information over a slow network connection. Every bit of communication is costly. The field of communication complexity is dedicated to determining how efficiently Alice and Bob are able to solve problems of this form.

The Gap-Hamming problem is one example of a problem studied in communication complexity. As inputs, Alice and Bob receive vectors x,y \in \{\pm 1\}^n with +1 and -1 entries from a third party Eve. Eve promises Alice and Bob that their vectors x and y satisfy one of two conditions:

(2)   \[\text{Case 0: } x^\top y \ge\sqrt{n} \quad \text{or} \quad \text{Case 1: } x^\top y \le -\sqrt{n}. \]

Alice and Bob must work together, sending as few bits of communication as possible, to determine which case they are in.

There’s one simple solution to this problem: First, Bob sends his whole input vector y to Alice. Each entry of y takes one of the two values \pm 1 and can therefore be communicated in a single bit. Having received y, Alice computes x^\top y, determines whether they are in case 0 or case 1, and sends Bob a single bit to communicate the answer. This procedure requires n+1 bits of communication.

Can Alice and Bob still solve this problem with many fewer than n bits of communication, say \sqrt{n} bits? Unfortunately not. The following theorem of Chakrabarti and Regev shows that roughly n bits of communication are needed to solve this problem:

Theorem (Chakrabarti–Regev). Any algorithm that solves the Gap-Hamming problem with success probability at least 2/3 for every pair of inputs x and y (satisfying one of the conditions (2)) must use \Omega(n) bits of communication.

Here, \Omega(n) is big-Omega notation, closely related to big-O notation \order(n) and big-Theta notation \Theta(n). For the less familiar, it can be helpful to interpret \Omega(n), \order(n), and \Theta(n) as all standing for “proportional to n”. In plain language, the theorem of Chakrabarti and Regev states that there is no algorithm for the Gap-Hamming problem that is much more effective than the basic algorithm in which Bob sends his whole input to Alice (in the sense of requiring fewer than \order(n) bits of communication).

Reducing Gap-Hamming to Trace Estimation

This whole state of affairs is very sad for Alice and Bob, but what does it have to do with trace estimation? Remarkably, we can use hardness of the Gap-Hamming problem to show there’s no algorithm that fundamentally improves on Hutch++ and XTrace. The argument goes something like this:

  1. If there were a trace estimation algorithm fundamentally better than Hutch++ and XTrace, we could use it to solve Gap-Hamming in fewer than \order(n) bits of communication.
  2. But no algorithm can solve Gap-Hamming in fewer than \order(n) bits of communication.
  3. Therefore, no trace estimation algorithm is fundamentally better than Hutch++ and XTrace.

Step 2 is the work of Chakrabarti and Regev, and step 3 follows logically from 1 and 2. Therefore, we are left to complete step 1 of the argument.

Protocol

Assume we have access to a really good trace estimation algorithm. We will use it to solve the Gap-Hamming problem. For simplicity, assume n is a perfect square. The basic idea is this:

  • Have Alice and Bob reshape their inputs x,y \in \{\pm 1\}^n into matrices X,Y\in\{\pm 1\}^{\sqrt{n}\times \sqrt{n}}, and consider (but do not form!) the positive semidefinite matrix

        \[A = (X+Y)^\top (X+Y).\]

  • Observe that

        \[\tr(A) = \tr(X^\top X) + 2\tr(X^\top Y) + \tr(Y^\top Y) = 2n + 2(x^\top y).\]

    Thus, the two cases in (2) can be equivalently written in terms of \tr(A):

    (2′)   \[\text{Case 0: } \tr(A)\ge 2n + 2\sqrt{n} \quad \text{or} \quad \text{Case 1: } \tr(A) \le 2n-2\sqrt{n}. \]

  • By working together, Alice and Bob can implement a trace estimation algorithm. Alice will be in charge of running the algorithm, but Alice and Bob must work together to compute matvecs. (Details below!)
  • Using the output of the trace estimation algorithm, Alice determines whether they are in case 0 or 1 (i.e., whether \tr(A) \ge 2n + 2\sqrt{n} or \tr(A) \le 2n - 2\sqrt{n}) and sends the result to Bob.

To complete this procedure, we just need to show how Alice and Bob can implement the matvec procedure using minimal communication. Suppose Alice and Bob want to compute Az for some vector z with entries between -1 and 1 with up to b decimal digits. First, convert z to a vector w\coloneqq 10^b z whose entries are integers between -10^b and 10^b. Since Az = 10^{-b}Aw, interconverting between Az and Aw is trivial. Alice and Bob’s procedure for computing Aw is as follows:

  • Alice sends Bob w.
  • Having received w, Bob forms Yw and sends it to Alice.
  • Having received Yw, Alice computes v\coloneqq Xw+Yw and sends it to Bob.
  • Having received v, Bob computes Y^\top v and sends it to Alice.
  • Alice forms Aw = X^\top v + Y^\top v.

Because X and Y are \sqrt{n}\times \sqrt{n} and have \pm 1 entries, all vectors computed in this procedure are vectors of length \sqrt{n} with integer entries between -4n\cdot 10^b and 4n\cdot 10^b. We conclude the communication cost for one matvec is T\coloneqq\Theta((b+\log n)\sqrt{n}) bits.
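
The following MATLAB snippet sketches the linear algebra underlying this protocol on a single machine (no communication is simulated; it merely checks that the quantities Alice and Bob exchange assemble to \tr(A) = 2n + 2(x^\top y) and to the matvec Aw; the sizes and the query vector are arbitrary examples):

rn = 100; n = rn^2;                          % n must be a perfect square
x = sign(randn(n,1)); y = sign(randn(n,1));  % Alice's and Bob's inputs
X = reshape(x, rn, rn); Y = reshape(y, rn, rn);

A = (X + Y)' * (X + Y);                      % formed here only for checking
assert(abs(trace(A) - (2*n + 2*(x'*y))) < 1e-8);

w = randi([-100, 100], rn, 1);               % an integer query vector (b = 2)
v = X*w + Y*w;                               % Alice's intermediate vector
assert(norm(A*w - (X'*v + Y'*v)) < 1e-8);    % Aw assembled from the exchanged pieces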

Analysis

Consider an algorithm we’ll call BestTraceAlgorithm. Given any accuracy parameter \varepsilon > 0, BestTraceAlgorithm requires at most m = m(\varepsilon) matvecs and, for any positive semidefinite input matrix A of any size, produces an estimate \hat{\tr} satisfying

(3)   \[\expect\left[\frac{|\hat{\tr}-\tr(A)|}{\tr(A)}\right] \le \varepsilon.\]

We assume that BestTraceAlgorithm is the best possible algorithm in the sense that no algorithm can achieve (3) on all (positive semidefinite) inputs with m' < m matvecs.

To solve the Gap-Hamming problem, Alice and Bob just need enough accuracy in their trace estimation to distinguish between cases 0 and 1. In particular, if

    \[\left| \frac{\hat{\tr} - \tr(A)}{\tr(A)} \right| \le \frac{1}{\sqrt{n}},\]

then Alice and Bob can distinguish between cases 0 and 1 in (2′).

Suppose that Alice and Bob apply trace estimation to solve the Gap-Hamming problem, using m matvecs in total. The total communication is m\cdot T = \order(m(b+\log n)\sqrt{n}) bits. Chakrabarti and Regev showed that solving the Gap-Hamming problem with 2/3 success probability requires at least cn bits of communication (for some constant c>0). Thus, if m\cdot T < cn, then Alice and Bob fail to solve the Gap-Hamming problem with probability at least 1/3. Thus,

    \[\text{If } m < \frac{cn}{T} = \Theta\left( \frac{\sqrt{n}}{b+\log n} \right), \quad \text{then } \left| \frac{\hat{\tr} - \tr(A)}{\tr(A)} \right| > \frac{1}{\sqrt{n}} \text{ with probability at least } \frac{1}{3}.\]

The contrapositive of this statement is as follows:

    \[\text{If }\left| \frac{\hat{\tr} - \tr(A)}{\tr(A)} \right| \le \frac{1}{\sqrt{n}}\text{ with probability at least } \frac{2}{3}, \quad \text{then } m \ge \Theta\left( \frac{\sqrt{n}}{b+\log n} \right).\]


Say Alice and Bob run BestTraceAlgorithm with parameter \varepsilon = \tfrac{1}{3\sqrt{n}}. Then, by (3) and Markov’s inequality,

    \[\left| \frac{\hat{\tr} - \tr(A)}{\tr(A)} \right| \le \frac{1}{\sqrt{n}} \quad \text{with probability at least }\frac{2}{3}.\]

Therefore, BestTraceAlgorithm requires at least

    \[m \ge \Theta\left( \frac{\sqrt{n}}{b+\log n} \right) \text{ matvecs}.\]

Using the fact that we’ve set \varepsilon = \tfrac{1}{3\sqrt{n}}, we conclude that any trace estimation algorithm, even BestTraceAlgorithm, requires

    \[m \ge \Theta \left( \frac{1}{\varepsilon (b+\log(1/\varepsilon))} \right) \text{ matvecs}.\]

In particular, no trace estimation algorithm can achieve mean relative error \varepsilon using even \order(1/\varepsilon^{0.999}) matvecs. This proves the MMMW theorem.

Five Interpretations of Kernel Quadrature

I’m excited to share that my paper Kernel quadrature with randomly pivoted Cholesky, joint with Elvira Moreno, has been accepted to NeurIPS 2023 as a spotlight.

Today, I want to share with you a little about the kernel quadrature problem. To avoid this post getting too long, I’m going to write this post assuming familiarity with the concepts of reproducing kernel Hilbert spaces and Gaussian processes.

Integration and Quadrature

Integration is one of the most widely used operations in mathematics and its applications. As such, it is a basic problem of wide interest to develop numerical methods for evaluating integrals.

In this post, we will consider a quite general integration problem. Let \Omega\subseteq \real^d be a domain and let \mu be a (finite Borel) measure on \Omega. We consider the task of evaluating

    \[I[f] = \int_\Omega f(x) g(x) \, \mathrm{d}\mu(x).\]

One can imagine that g, \mu, and \Omega are fixed, but we may want to evaluate this same integral I[f] for multiple different functions f.

To evaluate, we will design a quadrature approximation to the integral I[f]:

    \[\hat{I}_{w,s}[f] = \sum_{i=1}^n w_i f(s_i) \approx I[f].\]

Concretely, we wish to find real numbers w = (w_1,\ldots,w_n) \in \real^n and points s = (s_1,\ldots,s_n) \in \Omega^n such that the approximation \hat{I}_{w,s}[f] \approx I[f] is accurate.

Smoothness and Reproducing Kernel Hilbert Spaces

As is frequently the case in computational mathematics, the accuracy we can expect for this integration problem depends on the smoothness of the integrand f. The more smooth f is, the more accurately we can expect to compute I[f] for a given budget of computational effort.

In this post, we will measure smoothness using the reproducing kernel Hilbert space (RKHS) formalism. Let \mathcal{H} be an RKHS with norm \norm{\cdot}. We can interpret the norm as assigning a roughness \norm{f} to each function f. If \norm{f} is large, then f is rough; if \norm{f} is small, then f is smooth.

Associated to the RKHS \mathcal{H} is the titular reproducing kernel k. The kernel is a bivariate function k:\Omega\times\Omega\to\real. It is related to the RKHS inner product \langle\cdot,\cdot\rangle by the reproducing property

    \[f(x)=\langle f, k(x,\cdot)\rangle \quad \text{for every }f\in\mathcal{H},x\in\Omega.\]

Here, k(x,\cdot) represents the univariate function obtained by setting the first input of k to be x.

The Ideal Weights

To design a quadrature rule, we have to set the nodes s = (s_1,\ldots,s_n) \in \Omega^n and weights w = (w_1,\ldots,w_n)\in\real^n. Let’s first assume that the nodes s are fixed, and talk about how to pick the weights w.

There’s one choice of weights w^\star that we’ll call the ideal weights. There are (at least) five equivalent ways of characterizing the ideal weights. We’ll present all of them. As an exercise, you can try and convince yourself that these characterizations are equivalent, giving rise to the same weights.

Interpretation 1: Exactness

A standard way of designing quadrature rules is to make them exact (i.e., error-free) for some class of functions. For instance, many classical quadrature rules are exact for polynomials of degree up to n-1.

For kernel quadrature, it makes sense to design the quadrature rule to be exact for the kernel function at the selected nodes. That is, we require

    \[\hat{I}_{w^\star,s}[k(s_i,\cdot)]=I[k(s_i,\cdot)] \quad \text{for } i=1,2,\ldots,n.\]

Enforcing exactness gives us n linear equations for the n unknowns w^\star_1,\ldots,w^\star_n:

    \[\sum_{j=1}^n k(s_i,s_j)w^\star_j = \int_\Omega k(s_i,x) g(x)\,\mathrm{d}\mu(x) \quad \text{for }i=1,2,\ldots,n.\]

Under mild conditions, this system of linear equations is uniquely solvable, and the solution w^\star\in\real^n is the ideal weights.
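
As a concrete illustration, here is a minimal MATLAB sketch of solving the exactness system when the measure \mu is discretized on a grid of points xs with masses mu (the kernel k, weight function g, grid, and nodes s below are all placeholder examples, not part of the original text):

k = @(x, y) exp(-(x - y).^2 / 2);                    % example kernel (squared exponential)
g = @(x) ones(size(x));                              % example weight function g = 1
xs = linspace(0, 1, 1000)'; mu = ones(1000, 1)/1000; % discretized measure on [0,1]
s = [0.1; 0.5; 0.9];                                 % example quadrature nodes

Kss = k(s, s');                      % kernel matrix K(s,s)
z = k(s, xs') * (g(xs) .* mu);       % z_i approximates the integral of k(s_i,x) g(x) dmu(x)
wstar = Kss \ z;                     % ideal weights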

Interpretation 2: Interpolate and Integrate

Here’s another very classical way of designing a quadrature rule. First, interpolate the function values (s_i,f(s_i)) at the nodes, obtaining an interpolant \hat{f}. Then, obtain an approximation to the integral by integrating the interpolant:

    \[\hat{I}_{w^\star,s}[f] \coloneqq \int_\Omega \hat{f}(x) g(x) \, \mathrm{d}\mu(x).\]


In our context, the appropriate interpolation method is kernel interpolation.1Kernel interpolation is also called Gaussian process regression or kriging though (confusingly) these terms can also refer to slightly different methods. It is the regularization-free limit of kernel ridge regression. The kernel interpolant is defined to be the minimum-norm function \hat{f} that interpolates the data:

    \[\hat{f} = \argmin \{ \norm{h} : h(s_i) = f(s_i) \text{ for } i=1,\ldots,n\}.\]

Remarkably, this infinite-dimensional problem has a tractably computable solution. In fact, \hat{f} is the unique function of the form

    \[\hat{f} = \sum_{i=1}^n \alpha_i k(\cdot,s_i)\]

that agrees with f on the points s_1,\ldots,s_n. With a little algebra, you can show that the integral of \hat{f} is

    \[I[\hat{f}] = \sum_{i=1}^n w^\star_i f(s_i),\]

where w^\star are the ideal weights.

Interpretation 3: Minimizing the Worst-Case Error

Define the worst-case error of weights w and nodes s to be

    \[\operatorname{Err}(w,s)=\sup_{\norm{f}\le 1}\left| I[f] - \hat{I}_{w,s}[f]\right|.\]

The quantity \operatorname{Err}(w,s) is the highest possible quadrature error for a function f\in\mathcal{H} of norm at most 1.

Having defined the worst-case error, the ideal weights are precisely the weights that minimize this quantity

    \[w^\star=\operatorname*{argmin}_{w\in\real^n}\operatorname{Err}(w,s).\]

Interpretation 4: Minimizing the Mean-Square Error

The next two interpretations of the ideal weights will adopt a probabilistic framing. A Gaussian process is a random function f such that f’s values at any collection of points are (jointly) Gaussian random variables. We write f\sim \operatorname{GP}(0,k) for a mean-zero Gaussian process with covariance function k:

    \[\Cov(f(x),f(y))=k(x,y)\quad \text{for every } x,y\in\Omega.\]


Define the mean-square quadrature error of weights w and nodes s to be

    \[\operatorname{MSE}(w,s)\coloneqq \expect_{f\sim\operatorname{GP}(0,k)} \left( I[f] - \hat{I}_{w,s}[f] \right)^2.\]

The mean-square error reports the expected squared quadrature error over all functions f drawn from a Gaussian process with covariance function k.

Pleasantly, the mean-square error is equal to the square of the worst-case error:

    \[\operatorname{MSE}(w,s)=\operatorname{Err}(w,s)^2.\]

As such, the ideal weights also minimize the mean-square error

    \[w^\star=\operatorname*{argmin}_{w\in\real^n}\operatorname{MSE}(w,s).\]

Interpretation 5: Conditional Expectation

For our last interpretation, again consider a Gaussian process h\sim \operatorname{GP}(0,k). The integral of this random function I[h] is a random variable. To numerically integrate a function f, compute the expectation of I[h] conditional on h agreeing with f at the quadrature nodes:

    \[\hat{I}_{w^\star,s}[f]\coloneqq \expect_{h\sim\operatorname{GP}(0,k)}[I[h] \mid h(s_i)=f(s_i) \text{ for } i=1,\ldots,n].\]

One can show that this procedure yields the quadrature scheme with the ideal weights.

Conclusion

We’ve just seen five sensible ways of defining the ideal weights for quadrature in a general reproducing kernel Hilbert space. Remarkably, all five lead to exactly the same choice of weights. To me, these five equivalent characterizations give me more confidence that the ideal weights really are the “right” or “natural” choice for kernel quadrature.

That said, there are other reasonable requirements that we might want to impose on the weights. For instance, if \mu is a probability measure and g\equiv 1, it is reasonable to add an additional constraint that the weights w lie in the probability simplex

    \[w\in\Delta\coloneqq \left\{ p\in\real^n_+ : \sum_{i=1}^n p_i = 1\right\}.\]

With this additional stipulation, a quadrature rule can be interpreted as integrating f against a discrete probability measure \ \hat{\mu}=\sum_{i=1}^n w_i\delta_{s_i}; thus, in effect, quadrature amounts to approximating one probability measure \mu by another \hat{\mu}. Additional constraints such as these can easily be imposed when using the optimization characterizations 3 and 4 of the ideal weights. See this paper for details.

What About the Nodes?

We’ve spent a lot of time talking about how to pick the quadrature weights, but how should we pick the nodes s\in\Omega^n? To pick the nodes, it seems sensible to try and minimize the worst-case error \operatorname{Err}(w^\star,s) with the ideal weights w^\star. For this purpose, we can use the following formula:

    \[\operatorname{Err}(w^\star,s) = \norm{\int_\Omega (k(\cdot,x) - \hat{k}_s(\cdot,x)) g(x) \, \mathrm{d}\mu(x)}.\]

Here, \hat{k}_s is the Nyström approximation to the kernel k induced by the nodes s, defined to be

    \[\hat{k}_s(x,y) = k(x,s) k(s,s)^{-1} k(s,y).\]

We have written k(s,s) for the kernel matrix with ij entry k(s_i,s_j) and k(x,s) and k(s,y) for the row and column vectors with ith entry k(x,s_i) and k(s_i,y).
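
For a kernel given as a bivariate function handle (that broadcasts over its arguments) and a column vector of nodes s, the Nyström approximation can be evaluated directly from this formula. A minimal MATLAB sketch, with an example kernel and placeholder nodes:

k = @(x, y) exp(-(x - y).^2 / 2);     % example kernel
s = [0.1; 0.5; 0.9];                  % example nodes (column vector)
Kss = k(s, s');                       % kernel matrix k(s,s)
khat = @(x, y) k(x(:), s') * (Kss \ k(s, y(:)'));   % evaluates khat_s on grids x and y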

I find the appearance of the Nyström approximation in this context to be surprising and delightful. Previously on this blog, we’ve seen (column) Nyström approximation in the context of matrix low-rank approximation. Now, a continuum analog of the matrix Nyström approximation has appeared in the error formula for numerical integration.

The appearance of the Nyström approximation in the kernel quadrature error \operatorname{Err}(w^\star,s) also suggests a strategy for picking the nodes.

Node selection strategy. We should pick the nodes s to make the Nyström approximation \hat{k}_s \approx k as accurate as possible.

The closer \hat{k}_s is to k, the smaller the function k(\cdot,x) - \hat{k}_s(\cdot,x) is and, thus, the smaller the error

    \[\operatorname{Err}(w^\star,s) = \norm{\int_\Omega (k(\cdot,x) - \hat{k}_s(\cdot,x)) g(x) \, \mathrm{d}\mu(x)}.\]

Fortunately, we have randomized matrix algorithms for picking good nodes for matrix Nyström approximation such as randomly pivoted Cholesky, ridge leverage score sampling, and determinantal point process sampling; maybe these matrix tools can be ported to the continuous kernel setting?

Indeed, all three of these algorithms—randomly pivoted Cholesky, ridge leverage score sampling, and determinantal point process sampling—have been studied for kernel quadrature. The first of these algorithms, randomly pivoted Cholesky, is the subject of our paper. We show that this simple, adaptive sampling method produces excellent nodes for kernel quadrature. Intuitively, randomly pivoted Cholesky is effective because it is repulsive: After having picked nodes s_1,\ldots,s_i, it has a high probability of placing the next node s_{i+1} far from the previously selected nodes.
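
For readers who want something concrete, here is a minimal MATLAB sketch of randomly pivoted Cholesky applied to a kernel matrix over a finite candidate set; the continuous version in our paper samples nodes from the domain directly via rejection sampling, so this should be read as the matrix analogue rather than the paper's algorithm:

function [s, F] = rpcholesky(K, k)
% Randomly pivoted Cholesky: select k pivots s from a kernel matrix K and
% return a factor F with K approximately equal to F*F' (Nystrom approximation).
n = size(K, 1); d = diag(K);
F = zeros(n, k); s = zeros(k, 1);
for i = 1:k
    s(i) = randsample(n, 1, true, d);               % pivot with probability proportional to d
    g = K(:, s(i)) - F(:, 1:i-1) * F(s(i), 1:i-1)'; % residual kernel column
    F(:, i) = g / sqrt(g(s(i)));                    % Cholesky step
    d = max(d - F(:, i).^2, 0);                     % update residual diagonal
end
end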

The following image shows 20 nodes selected by randomly pivoted Cholesky in a crescent-shaped region. The cyan–pink shading denotes the probability distribution for picking the next node. We see that the center of the crescent does not have any nodes, and thus is most likely to receive a node during the next round of sampling.

In our paper, we demonstrate—empirically and theoretically—that randomly pivoted Cholesky produces excellent nodes for quadrature. We also discuss efficient rejection sampling algorithms for sampling nodes with the randomly pivoted Cholesky distribution. Check out the paper for details!

Which Sketch Should I Use?

This is the second of a sequence of two posts on sketching, which I’m doing on the occasion of my new paper on the numerical stability of the iterative sketching method. For more on what sketching is and how it can be used to solve computational problems, I encourage you to check out the first post.

The goals of this post are more narrow. I seek to answer the question:

Which sketching matrix should I use?

To cut to the chase, my answer to this question is:

Sparse sign embeddings are a sensible default for sketching.

There are certainly cases when sparse sign embeddings are not the best type of sketch to use, but I hope to convince you that they’re a good sketching matrix to use for most purposes.

Experiments

Let’s start things off with some numerical experiments.1Code for all numerical experiments can be found on the blogpost branch of the Github for my recent paper. We’ll compare three types of sketching matrices: Gaussian embeddings, a subsampled randomized trigonometric transform (SRTT), and sparse sign embeddings. See the last post for descriptions of these sketching matrices. I’ll discuss a few additional types of sketching matrices that require more discussion at the end of this post.

Recall that a sketching matrix S \in \real^{d\times n} seeks to compress a high-dimensional matrix A \in \real^{n\times k} or vector b\in\real^n to a lower-dimensional sketched matrix SA or vector Sb. The quality of a sketching matrix for a matrix A is measured by its distortion \varepsilon, defined to be the smallest number \varepsilon > 0 such that

    \[(1-\varepsilon) \norm{x} \le \norm{Sx} \le (1+\varepsilon) \norm{x} \quad \text{for every } x \in \operatorname{col}(A).\]

Here, \operatorname{col}(A) denotes the column space of the matrix A.
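
In the experiments below, the distortion is computed from the singular values of the sketch applied to an orthonormal basis of col(A). Here is a minimal MATLAB sketch of that computation (S and A assumed given; this is one reasonable way to compute it, not necessarily the exact code behind the figures):

function epsilon = distortion(S, A)
% Distortion of the sketch S for col(A): the smallest eps such that
% (1-eps)||x|| <= ||Sx|| <= (1+eps)||x|| for all x in col(A).
Q = orth(full(A));                 % orthonormal basis for col(A)
sigma = svd(S * Q);                % singular values of the sketched basis
epsilon = max(max(sigma) - 1, 1 - min(sigma));
end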

Timing

We begin with a timing test. We will measure three different times for each embedding:

  1. Construction. The time required to generate the sketching matrix S.
  2. Vector apply. The time to apply the sketch to a single vector.
  3. Matrix apply. The time to apply the sketch to an n\times 200 matrix.

We will test with input dimension n = 10^6 (one million) and output dimension d = 400. For the SRTT, we use the discrete cosine transform as our trigonometric transform. For the sparse sign embedding, we use a sparsity parameter \zeta = 8.

Here are the results (timings averaged over 20 trials):

Our conclusions are as follows:

  • Sparse sign embeddings are definitively the fastest to apply, being 3–20× faster than the SRTT and 74–100× faster than Gaussian embeddings.
  • Sparse sign embeddings are modestly slower to construct than the SRTT, but much faster to construct than Gaussian embeddings.

Overall, the conclusion is that sparse sign embeddings are the fastest sketching matrices by a wide margin: For an “end-to-end” workflow involving generating the sketching matrix S \in \real^{400\times 10^6} and applying it to a matrix A\in\real^{10^6\times 200}, sparse sign embeddings are 14× faster than SRTTs and 73× faster than Gaussian embeddings.2More timings are reported in Table 1 of this paper, which I credit for inspiring my enthusiasm for the sparse sign embedding.

Distortion

Runtime is only one measure of the quality of a sketching matrix; we also must care about the distortion \varepsilon. Fortunately, for practical purposes, Gaussian embeddings, SRTTs, and sparse sign embeddings all tend to have similar distortions. Therefore, we are free to use sparse sign embeddings, as they are typically the fastest.

Consider the following test. We generate a sparse random test matrix A of size n\times k for n = 10^5 and k = 50 using the MATLAB sprand function; we set the sparsity level to 1%. We then compare the distortion of Gaussian embeddings, SRTTs, and sparse sign embeddings across a range of sketching dimensions d between 100 and 10,000. We report the distortion averaged over 100 trials. The theoretically predicted value \varepsilon = \sqrt{k/d} (equivalently, d=k/\varepsilon^2) is shown as a dashed line.

I find these results remarkable. All three embeddings exhibit essentially the same distortion parameter predicted by the Gaussian theory.

It would be premature to declare success having only tested on one type of test matrix A. Consider the following four test matrices:

  • Sparse: The test matrix from above.
  • Dense: A\in\real^{10^6\times 50} is taken to be a matrix with independent standard Gaussian random values.
  • Khatri–Rao: A\in\real^{50^3\times 50} is taken to be the Khatri–Rao product of three Haar random orthogonal matrices.
  • Identity: A = \twobyone{I}{0} \in\real^{10^6\times 50} is taken to be the 50\times 50 identity matrix stacked onto a (10^6-50)\times 50 matrix of zeros.

The performance of sparse sign embeddings (again with sparsity parameter \zeta = 8) is shown below:

We see that for the first three test matrices, the performance closely follows the expected value \varepsilon = \sqrt{k/d}. However, for the last test matrix “Identity”, we see the distortion begins to slightly exceed this predicted distortion for d/k\ge 20.

To improve sparse sign embeddings for higher values of d/k, we can increase the value of the sparsity parameter \zeta. We recommend

    \[\zeta = \max \left( 8 , \left\lceil 2\sqrt{\frac{d}{k}} \right\rceil \right).\]

With this higher sparsity level, the distortion closely tracks \varepsilon = \sqrt{k/d} for all four test matrices:

Conclusion

Implemented appropriately (see below), sparse sign embeddings can be faster than other sketching matrices by a wide margin. The parameter choice \zeta = 8 is enough to ensure the distortion closely tracks \varepsilon = \sqrt{k/d} for most test matrices. For the toughest test matrices, a slightly larger sparsity parameter \zeta = \max(8, \lceil 2\sqrt{d/k}\rceil) can be necessary to achieve the optimal distortion.

While these tests are far from comprehensive, they are consistent with the uniformly positive results for sparse sign embeddings reported in the literature. We believe that this evidence supports the argument that sparse sign embeddings are a sensible default sketching matrix for most purposes.

Sparse Sign Embeddings: Theory and Practice

Given the highly appealing performance characteristics of sparse sign embeddings, it is worth saying a few more words about these embeddings and how they perform in both theory and practice.

Recall that a sparse sign embedding is a random matrix of the form

    \[S = \frac{1}{\sqrt{\zeta}} \begin{bmatrix} s_1 & \cdots & s_n \end{bmatrix}.\]

Each column s_i is independently generated to contain exactly \zeta nonzero entries in uniformly random positions. The value of each nonzero entry of s_i is chosen to be either +1 or -1 with 50/50 odds.

Parameter Choices

The goal of sketching is to reduce vectors of length n to a smaller dimension d. For linear algebra applications, we typically want to preserve all vectors in the column space of a matrix A up to distortion \varepsilon > 0:

    \[(1-\varepsilon) \norm{x} \le \norm{Sx} \le (1+\varepsilon) \norm{x} \quad \text{for all }x \in \operatorname{col}(A).\]

To use sparse sign embeddings, we must choose the parameters appropriately:

Given a dimension k and a target distortion \varepsilon, how do we pick d and \zeta?

Based on the experiments above (and other testing reported in the literature), we recommend the following parameter choices in practice:

    \[d = \frac{k}{\varepsilon^2} \quad \text{and} \quad \zeta = \max\left(8,\frac{2}{\varepsilon}\right).\]

The parameter choice \zeta = 8 is advocated by Tropp, Yurtsever, Udell, and Cevher; they mention experimenting with parameter values as small as \zeta = 2. The value \zeta = 1 has demonstrated deficiencies and should almost always be avoided (see below). The scaling d \approx k/\varepsilon^2 is derived from the analysis of Gaussian embeddings. As Martinsson and Tropp argue, the analysis of Gaussian embeddings tends to be reasonably descriptive of other well-designed random embeddings.
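
In code, these recommendations amount to a couple of lines (the values of k and \varepsilon below are arbitrary examples):

k = 50; epsilon = 0.1;              % example subspace dimension and target distortion
d = ceil(k / epsilon^2);            % sketching dimension
zeta = max(8, ceil(2 / epsilon));   % nonzeros per column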

The best-known theoretical analysis, due to Cohen, suggests a more cautious parameter setting for sparse sign embeddings:

    \[d = \mathcal{O} \left( \frac{k \log k}{\varepsilon^2} \right) \quad \text{and} \quad \zeta = \mathcal{O}\left( \frac{\log k}{\varepsilon} \right).\]

The main difference between Cohen’s analysis and the parameter recommendations above is the presence of the \log k factor and the lack of explicit constants in the O-notation.

Implementation

For good performance, it is imperative to store S using either a purpose-built data structure or a sparse matrix format (such as a MATLAB sparse matrix or scipy sparse array).

If a sparse matrix library is unavailable, then either pursue a dedicated implementation or use a different type of embedding; sparse sign embeddings are just as slow as Gaussian embeddings if they are stored in an ordinary non-sparse matrix format!

Even with a sparse matrix format, it can require care to generate and populate the random entries of the matrix S. Here, for instance, is a simple function for generating a sparse sign matrix in MATLAB:

function S = sparsesign_slow(d,n,zeta)
cols = kron((1:n)',ones(zeta,1)); % zeta nonzeros per column
vals = 2*randi(2,n*zeta,1) - 3; % uniform random +/-1 values
rows = zeros(n*zeta,1);
for i = 1:n
   rows((i-1)*zeta+1:i*zeta) = randsample(d,zeta);
end
S = sparse(rows, cols, vals / sqrt(zeta), d, n);
end

Here, we specify the rows, columns, and values of the nonzero entries before assembling them into a sparse matrix using the MATLAB sparse command. Since there are exactly \zeta nonzeros per column, the column indices are easy to generate. The values are uniformly \pm 1/\sqrt{\zeta} and can also be generated using a single line. The real challenge to generating sparse sign embeddings in MATLAB is the row indices, since each batch of \zeta row indices must be chosen uniformly at random between 1 and d without replacement. This is accomplished in the above code by a for loop, generating row indices \zeta at a time using the slow randsample function.

As its name suggests, sparsesign_slow is very slow. To generate a 200\times 10^7 sparse sign embedding with sparsity \zeta = 8 requires 53 seconds!

Fortunately, we can do (much) better. By rewriting the code in C and directly generating the sparse matrix in the CSC format MATLAB uses, generating this same 200 by 10 million sparse sign embedding takes just 0.4 seconds, a speedup of 130× over the slow MATLAB code. A C implementation of the sparse sign embedding that can be used in MATLAB using the MEX interface can be found in this file in the Github repo for my recent paper.

Other Sketching Matrices

Let’s leave off the discussion by mentioning other types of sketching matrices not considered in the empirical comparison above.

Coordinate Sampling

Another family of sketching matrices that we haven’t talked about are coordinate sampling sketches. A coordinate sampling sketch consists of indices 1\le i_1,\ldots,i_d\le n and weights w_1,\ldots,w_d \in \real. To apply S, we sample the indices i_1,\ldots,i_d and reweight them using the weights:

    \[b \in \real^n \longmapsto Sb = (w_1 b_{i_1},\ldots,w_db_{i_d}) \in \real^d.\]

Coordinate sampling is very appealing: To apply S to a matrix or vector requires no matrix multiplication of trigonometric transforms, just picking out some entries or rows and rescaling them.
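
As a minimal MATLAB sketch of applying such a sketch (the indices idx and weights w are assumed to have already been chosen by some sampling scheme):

% Apply a coordinate sampling sketch: pick out entries/rows and reweight.
Sb = w .* b(idx);       % sketch a vector b (w and idx are d x 1)
SA = w .* A(idx, :);    % sketch a matrix A by sampling and rescaling its rows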

In order for coordinate sampling to be effective, we need to pick the right indices. Below, we compare two coordinate sampling sketching approaches, uniform sampling and leverage score sampling (both with replacement), to the sparse sign embedding with the suggested parameter setting \zeta = \max(8,\lceil 2\sqrt{d/k}\rceil) for the hard “Identity” test matrix used above.

We see right away that uniform sampling fails dramatically on this problem. That’s to be expected. All but 50 of the rows of A are zero, so picking rows uniformly at random will give nonsense with very high probability. Uniform sampling can work well for matrices A which are “incoherent”, with all rows of A being of “similar importance”.

Conclusion (Uniform sampling). Uniform sampling is a risky method; it works excellently for some problems, but fails spectacularly for others. Use only with caution!

The ridge leverage score sampling method is more interesting. Unlike all the other sketches we’ve discussed in this post, ridge leverage score sampling is data-dependent. First, it computes a leverage score \ell_i for each row of A and then samples rows with probabilities proportional to these scores. For high enough values of d, ridge leverage score sampling performs slightly (but only slightly) worse than the characteristic \varepsilon = \sqrt{k/d} scaling we expect for an oblivious subspace embedding.

Ultimately, leverage score sampling has two disadvantages when compared with oblivious sketching matrices:

  • Higher distortion, higher variance. The distortion of a leverage score sketch is higher on average, and more variable, than that of an oblivious sketch, which achieves very consistent performance.
  • Computing the leverage scores. In order to implement this sketch, the leverage scores \ell_i have to first be computed or estimated. This is a nontrivial algorithmic problem; the most direct way of computing the leverage scores requires a QR decomposition at \mathcal{O}(nk^2) cost, much higher than other types of sketches.

There are settings when coordinate sampling methods, such as leverage scores, are well-justified:

  • Structured matrices. For some matrices A, the leverage scores might be very cheap to compute or approximate. In such cases, coordinate sampling can be faster than oblivious sketching.
  • “Active learning”. For some problems, each entry of the vector b or row of the matrix A may be expensive to generate. In this case, coordinate sampling has the distinct advantage that computing Sb or SA only requires generating the entries of b or rows of A for the d randomly selected indices i_1,\ldots,i_d.

Ultimately, oblivious sketching and coordinate sampling both have their place as tools in the computational toolkit. For the reasons described above, I believe that oblivious sketching should usually be preferred to coordinate sampling in the absence of a special reason to prefer the latter.

Tensor Random Embeddings

There are a number of sketching matrices with tensor structure; see here for a survey. These types of sketching matrices are very well-suited to tensor computations. If tensor structure is present in your application, I would put these types of sketches at the top of my list for consideration.

CountSketch

The CountSketch sketching matrix is the \zeta = 1 case of the sparse sign embedding. CountSketch has serious deficiencies, and should only be used in practice with extreme care.

Consider the “Identity” test matrix from above but with parameter k = 200, and compare the distortion of CountSketch to the sparse sign embedding with parameters \zeta=2,4,8:

We see that the distortion of the CountSketch remains persistently high at 100% until the sketching dimension d is taken >4300, 20× higher than k.

CountSketch is bad because it requires d to be proportional to k^2/\varepsilon^2 in order to achieve distortion \varepsilon. For all of the other sketching matrices we’ve considered, we’ve only required d to be proportional to k/\varepsilon^2 (or perhaps (k\log k)/\varepsilon^2). This difference between d\propto k^2 for CountSketch and d\propto k for other sketching matrices is at the root of CountSketch’s woefully bad performance on some inputs.3Here, the symbol \propto is an informal symbol meaning “proportional to”.

The fact that CountSketch requires d\propto k^2 is simple to show. It’s basically a variant on the famous birthday problem. We include a discussion at the end of this post.4In fact, any oblivious sketching matrix with only 1 nonzero entry per column must have d\gtrsim k^2. This is Theorem 16 in the following paper.

There are ways of fixing the CountSketch. For instance, we can use a composite sketch S = S_2 \cdot S_1, where S_1 is a CountSketch of size k^2/\varepsilon^2 \times n and S_2 is a Gaussian sketching matrix of size k/\varepsilon^2 \times k^2/\varepsilon^2.5This construction is from this paper. For most applications, however, salvaging CountSketch doesn’t seem worth it; sparse sign embeddings with even \zeta = 2 nonzeros per column are already way more effective and reliable than a plain CountSketch.

Conclusion

By now, sketching is quite a big field, with dozens of different proposed constructions for sketching matrices. So which should you use? For most use cases, sparse sign embeddings are a good choice; they are fast to construct and apply and offer uniformly good distortion across a range of matrices.

Bonus: CountSketch and the Birthday Problem
The point of this bonus section is to prove the following (informal) theorem:

Let A be the “Identity” test matrix above. If S\in\real^{d\times n} is a CountSketch matrix with output dimension d\ll k^2, then the distortion of S for \operatorname{col}(A) is \varepsilon\ge 1 with high probability.

Let’s see why. By the structure of the matrix A, SA has the form

    \[SA = \begin{bmatrix} s_1 & \cdots & s_k \end{bmatrix}\]

where each vector s_i\in\real^d has a single \pm1 in a uniformly random location j_i.

Suppose that the indices j_1,\ldots,j_k are not all different from each other, say j_i = j_{i'}. Set x = e_i - e_{i'} if the nonzero entries of s_i and s_{i'} have the same sign and x = e_i + e_{i'} if they have opposite signs, where e_i is the standard basis vector with 1 in position i and zeros elsewhere. Then, (SA)x = 0 but \norm{x} = \sqrt{2}. Thus, for the distortion relation

    \[(1-\varepsilon) \norm{x} =(1-\varepsilon)\sqrt{2} \le 0 = \norm{(SA)x}\]

to hold, \varepsilon \ge 1. Thus,

    \[\prob \{ \varepsilon \ge 1 \} \ge \prob \{ j_1,\ldots,j_k \text{ are not distinct} \}.\]

For a moment, let’s put aside our analysis of the CountSketch, and turn our attention to a famous puzzle, known as the birthday problem:

How many people have to be in a room before there’s at least a 50% chance that two people share the same birthday?

The counterintuitive or “paradoxical” answer: 23. This is much smaller than many people’s intuition, as there are 365 possible birthdays and 23 is much smaller than 365.

The reason for this surprising result is that, in a room of 23 people, there are {23 \choose 2} = 23\cdot 22/2=253 pairs of people. Each pair of people has a 1/365 chance of sharing a birthday, so the expected number of shared birthdays in a room of 23 people is 253/365 \approx 0.69. Since 0.69 birthdays are shared on average in a room of 23 people, it is perhaps less surprising that 23 is the critical number at which the chance of two people sharing a birthday exceeds 50%.

Hopefully, the similarity between the birthday problem and CountSketch is becoming clear. Each pair of indices j_i and j_{i'} in CountSketch have a 1/d chance of being the same. There are {k\choose 2} \approx k^2/2 pairs of indices, so the expected number of equal indices j_i = j_{i'} is \approx k^2/2d. Thus, we should anticipate d \gtrapprox k^2 is required to ensure that j_1,\ldots,j_k are distinct with high probability.

Let’s calculate things out a bit more precisely. First, realize that

    \[\prob \{ j_1,\ldots,j_k \text{ are not distinct} \} = 1 - \prob \{ j_1,\ldots,j_k \text{ are distinct} \}.\]

To compute the probability that j_1,\ldots,j_k are distinct, imagine introducing each j_i one at a time. Assuming that j_1,\ldots,j_{i-1} are all distinct, the probability j_1,\ldots,j_i are distinct is just the probability that j_i does not take any of the i-1 values j_1,\ldots,j_{i-1}. This probability is

    \[\prob\{ j_1,\ldots,j_i \text{ are distinct} \mid j_1,\ldots,j_{i-1} \text{ are distinct}\} = 1 - \frac{i-1}{d}.\]

Thus, by the chain rule for probability,

    \[\prob \{ j_1,\ldots,j_k \text{ are distinct} \} = \prod_{i=1}^k \left(1 - \frac{i-1}{d} \right).\]

To bound this quantity, use the numeric inequality 1-x\le \exp(-x) for every x \in \real, obtaining

    \[\prob \{ j_1,\ldots,j_k \text{ are distinct} \} \le \prod_{i=0}^{k-1} \exp\left(-\frac{i}{d}\right) = \exp \left( -\frac{1}{d}\sum_{i=0}^{k-1} i \right) = \exp\left(-\frac{k(k-1)}{2d}\right).\]

Thus, we conclude that

    \[\prob \{ \varepsilon \ge 1 \} \ge 1-\prob \{ j_1,\ldots,j_k \text{ are distinct} \}\ge 1-\exp\left(-\frac{k(k-1)}{2d}\right).\]

Solving this inequality, we conclude that

    \[\prob\{\varepsilon \ge 1\} \ge \frac{1}{2} \quad \text{if} \quad d \le \frac{k(k-1)}{2\ln 2}.\]

This is a quantitative version of our informal theorem from earlier.
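
As a quick numerical sanity check of this bound, the following MATLAB snippet estimates the collision probability by simulation (k and d below are arbitrary example values):

k = 200; d = 4000; ntrials = 1e4;
collisions = 0;
for t = 1:ntrials
    j = randi(d, k, 1);                             % k row indices, uniform on 1..d
    collisions = collisions + (numel(unique(j)) < k);
end
fprintf('Empirical P(collision): %.3f, lower bound: %.3f\n', ...
    collisions/ntrials, 1 - exp(-k*(k-1)/(2*d)));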

Does Sketching Work?

I’m excited to share that my paper, Fast and forward stable randomized algorithms for linear least-squares problems has been released as a preprint on arXiv.

With the release of this paper, now seemed like a great time to discuss a topic I’ve been wanting to write about for a while: sketching. For the past two decades, sketching has become a widely used algorithmic tool in matrix computations. Despite this long history, questions still seem to be lingering about whether sketching really works.

In this post, I want to take a critical look at the question “does sketching work”? Answering this question requires answering two basic questions:

  1. What is sketching?
  2. What would it mean for sketching to work?

I think a large part of the disagreement over the efficacy of sketching boils down to different answers to these questions. By considering different possible answers to these questions, I hope to provide a balanced perspective on the utility of sketching as an algorithmic primitive for solving linear algebra problems.

Sketching

In matrix computations, sketching is really a synonym for (linear) dimensionality reduction. Suppose we are solving a problem involving one or more high-dimensional vectors b \in \real^n or perhaps a tall matrix A\in \real^{n\times k}. A sketching matrix is a d\times n matrix S \in \real^{d\times n} where d \ll n. When multiplied into a high-dimensional vector b or tall matrix A, the sketching matrix S produces compressed or “sketched” versions Sb and SA that are much smaller than the original vector b and matrix A.

Let \mathsf{E}=\{x_1,\ldots,x_p\} be a collection of vectors. For S to be a “good” sketching matrix for \mathsf{E}, we require that S preserves the lengths of every vector in \mathsf{E} up to a distortion parameter \varepsilon>0:

(1)   \[(1-\varepsilon) \norm{x}\le\norm{Sx}\le(1+\varepsilon)\norm{x} \]

for every x in \mathsf{E}.

For linear algebra problems, we often want to sketch a matrix A. In this case, the appropriate set \mathsf{E} that we want our sketch to be “good” for is the column space of the matrix A, defined to be

    \[\operatorname{col}(A) \coloneqq \{ Ax : x \in \real^k \}.\]

Remarkably, there exist many sketching matrices that achieve distortion \varepsilon for \mathsf{E}=\operatorname{col}(A) with an output dimension of roughly d \approx k / \varepsilon^2. In particular, the sketching dimension d is proportional to the number of columns k of A. This is pretty neat! We can design a single sketching matrix S which preserves the lengths of all infinitely-many vectors Ax in the column space of A.

Sketching Matrices

There are many types of sketching matrices, each with different benefits and drawbacks. Many sketching matrices are based on randomized constructions in the sense that entries of S are chosen to be random numbers. Broadly, sketching matrices can be classified into two types:

  • Data-dependent sketches. The sketching matrix S is constructed to work for a specific set of input vectors \mathsf{E}.
  • Oblivious sketches. The sketching matrix S is designed to work for an arbitrary set of input vectors \mathsf{E} of a given size (i.e., \mathsf{E} has p elements) or dimension (\mathsf{E} is a k-dimensional linear subspace).

We will only discuss oblivious sketching for this post. We will look at three types of sketching matrices: Gaussian embeddings, subsampled randomized trignometric transforms, and sparse sign embeddings.

The details of how these sketching matrices are built and their strengths and weaknesses can be a little bit technical. All three constructions are independent from the rest of this article and can be skipped on a first reading. The main point is that good sketching matrices exist and are fast to apply: Reducing b\in\real^n to Sb\in\real^{d} requires roughly \mathcal{O}(n\log n) operations, rather than the \mathcal{O}(dn) operations we would expect to multiply a d\times n matrix and a vector of length n.1Here, \mathcal{O}(\cdot) is big O notation.

Gaussian Embeddings

The simplest type of sketching matrix S\in\real^{d\times n} is obtained by (independently) setting every entry of S to be a Gaussian random number with mean zero and variance 1/d. Such a sketching matrix is called a Gaussian embedding.2Here, embedding is a synonym for sketching matrix.
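
In MATLAB, for instance, generating and applying a Gaussian embedding takes one line each (d, n, and the vector b are assumed given):

S = randn(d, n) / sqrt(d);   % iid N(0, 1/d) entries
Sb = S * b;                  % sketch a vector b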

Benefits. Gaussian embeddings are simple to code up, requiring only a standard matrix product to apply to a vector Sb or matrix SA. Gaussian embeddings admit a clean theoretical analysis, and their mathematical properties are well-understood.

Drawbacks. Computing Sb for a Gaussian embedding costs \mathcal{O}(dn) operations, significantly slower than the other sketching matrices we will consider below. Additionally, generating and storing a Gaussian embedding can be computationally expensive.

Subsampled Randomized Trigonometric Transforms

The subsampled randomized trigonometric transform (SRTT) sketching matrix takes a more complicated form. The sketching matrix is defined to be a scaled product of three matrices

    \[S = \sqrt{\frac{n}{d}} \cdot R \cdot F \cdot D.\]

These matrices have the following definitions:

  • D\in\real^{n\times n} is a diagonal matrix whose entries are each a random \pm 1 (chosen independently with equal probability).
  • F\in\real^{n\times n} is a fast trigonometric transform such as a fast discrete cosine transform.3One can also use the ordinary fast Fourier transform, but this results in a complex-valued sketch.
  • R\in\real^{d\times n} is a selection matrix. To generate R, let i_1,\ldots,i_d be a random subset of \{1,\ldots,n\}, selected without replacement. R is defined to be a matrix for which Rb = (b_{i_1},\ldots,b_{i_d}) for every vector b.

To store S on a computer, it is sufficient to store the n diagonal entries of D and the d selected coordinates i_1,\ldots,i_d defining R. Multiplication of S against a vector b should be carried out by applying each of the matrices R, F, and D in sequence, such as in the following MATLAB code:

% Generate randomness for S
signs = 2*randi(2,n,1)-3; % diagonal entries of D (random +-1 values)
idx = randsample(n,d); % indices i_1,...,i_d defining R

% Multiply S against b
c = signs .* b; % multiply by D
c = dct(c); % multiply by F
c = c(idx); % multiply by R
c = sqrt(n/d) * c; % scale

Benefits. S can be applied to a vector b in \mathcal{O}(n \log n) operations, a significant improvement over the \mathcal{O}(dn) cost of a Gaussian embedding. The SRTT has the lowest memory and random number generation requirements of any of the three sketches we discuss in this post.

Drawbacks. Applying S to a vector requires a good implementation of a fast trigonometric transform. Even with a high-quality trig transform, SRTTs can be significantly slower than sparse sign embeddings (defined below).4For an example, see Figure 2 in this paper. SRTTs are hard to parallelize.5Block SRTTs are more parallelizable, however. In theory, the sketching dimension should be chosen to be d \approx (k\log k)/\varepsilon^2, larger than for a Gaussian sketch.

Sparse Sign Embeddings

A sparse sign embedding takes the form

    \[S = \frac{1}{\sqrt{\zeta}} \begin{bmatrix} s_1 & s_2 & \cdots & s_n \end{bmatrix}.\]

Here, each column s_i is an independently generated random vector with exactly \zeta nonzero entries with random \pm 1 values in uniformly random positions. The result is a d\times n matrix with only \zeta \cdot n nonzero entries. The parameter \zeta is often set to a small constant like 8 in practice.6This recommendation comes from the following paper, and I’ve used this parameter setting successfully in my own work.
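
To make the construction concrete, here is one way to generate such a matrix in MATLAB (a minimal sketch assuming d, n, and zeta are defined; this is an illustration and not necessarily the sparsesign routine used later in this post):

% Sparse sign embedding: each column has zeta nonzeros equal to +-1/sqrt(zeta)
rows = zeros(zeta, n);
for j = 1:n
    rows(:, j) = randsample(d, zeta); % nonzero positions in column j
end
cols = repmat(1:n, zeta, 1); % column indices
vals = (2*randi(2, zeta, n) - 3) / sqrt(zeta); % random signs, scaled
S = sparse(rows(:), cols(:), vals(:), d, n);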

Benefits. By using a dedicated sparse matrix library, S can be applied to a vector b very quickly (in either \mathcal{O}(n) or \mathcal{O}(n\log k) operations, depending on parameter choices; see below). With a good sparse matrix library, sparse sign embeddings are often the fastest sketching matrix by a wide margin.

Drawbacks. To be fast, sparse sign embeddings require a good sparse matrix library. They require generating and storing roughly \zeta n random numbers, more than SRTTs (roughly n numbers) but far fewer than Gaussian embeddings (dn numbers). In theory, the sketching dimension should be chosen to be d \approx (k\log k)/\varepsilon^2 and the sparsity should be set to \zeta \approx (\log k)/\varepsilon; the theoretically sanctioned sketching dimension is larger than for a Gaussian sketch. In practice, we can often get away with using d \approx k/\varepsilon^2 and \zeta=8.

Summary

Using either SRTTs or sparse sign embeddings, sketching a vector of length n down to d dimensions requires only \mathcal{O}(n) to \mathcal{O}(n\log n) operations. Applying a sketch to an entire n\times k matrix A thus requires roughly \mathcal{O}(nk) operations. Therefore, sketching offers the promise of speeding up linear algebraic computations involving A, which typically take \mathcal{O}(nk^2) operations.

How Can You Use Sketching?

The simplest way to use sketching is to first apply the sketch to dimensionality-reduce all of your data and then apply a standard algorithm to solve the problem using the reduced data. This approach to using sketching is called sketch-and-solve.

As an example, let’s apply sketch-and-solve to the least-squares problem:

(2)   \[\operatorname*{minimize}_{x\in\real^k} \norm{Ax - b}. \]

We assume this problem is highly overdetermined, with A having many more rows n than columns k.

To solve this problem with sketch-and-solve, generate a good sketching matrix S for the set \mathsf{E} = \operatorname{col}(\onebytwo{A}{b}). Applying S to our data A and b, we get a dimensionality-reduced least-squares problem

(3)   \[\operatorname*{minimize}_{\hat{x}\in\real^k} \norm{(SA)\hat{x} - Sb}. \]

The solution \hat{x} is the sketch-and-solve solution to the least-squares problem, which we can use as an approximate solution to the original least-squares problem.

Least-squares is just one example of the sketch-and-solve paradigm. We can also use sketching to accelerate other algorithms. For instance, we could apply sketch-and-solve to clustering. To cluster data points x_1,\ldots,x_p, first apply sketching to obtain Sx_1,\ldots,Sx_p and then apply an out-of-the-box clustering algorithm like k-means to the sketched data points, as in the sketch below.
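
Here is a minimal MATLAB illustration of this idea (a hedged sketch: X is assumed to be a p\times n matrix whose rows are the data points, d and num_clusters are user-chosen parameters, and kmeans comes from the Statistics and Machine Learning Toolbox):

S = randn(d, size(X,2)) / sqrt(d); % Gaussian embedding of the feature space
Xsketch = X * S'; % each row is a sketched data point S*x_i
labels = kmeans(Xsketch, num_clusters); % cluster the sketched points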

Does Sketching Work?

Most often, when sketching critics say “sketching doesn’t work”, what they mean is “sketch-and-solve doesn’t work”.

To address this question in a more concrete setting, let’s go back to the least-squares problem (2). Let x_\star denote the optimal least-squares solution and let \hat{x} be the sketch-and-solve solution (3). Then, using the distortion condition (1), one can show that

    \[\norm{A\hat{x} - b} \le \frac{1+\varepsilon}{1-\varepsilon} \norm{Ax_\star - b}.\]

If we use a sketching matrix with a distortion of \varepsilon = 1/3, then this bound tells us that

(4)   \[\norm{A\hat{x} - b} \le 2\norm{Ax_\star - b}. \]

Is this a good result or a bad result? Ultimately, it depends. In some applications, the quality of a putative least-squares solution \hat{x} can be assessed from the residual norm \norm{A\hat{x} - b}. For such applications, the bound (4) ensures that \norm{A\hat{x} - b} is at most twice \norm{Ax_\star-b}. Often, this means \hat{x} is a pretty decent approximate solution to the least-squares problem.

For other problems, the appropriate measure of accuracy is the so-called forward error \norm{\hat{x} - x_\star}, measuring how close \hat{x} is to x_\star. For these cases, it is possible that \norm{\hat{x} - x_\star} is large even though the residual norms are comparable, as in (4).

Let’s see an example, using the MATLAB code from my paper:

[A, b, x, r] = random_ls_problem(1e4, 1e2, 1e8, 1e-4); % Random LS problem
S = sparsesign(4e2, 1e4, 8); % Sparse sign embedding
sketch_and_solve = (S*A) \ (S*b); % Sketch-and-solve
direct = A \ b; % MATLAB mldivide

Here, we generate a random least-squares problem of size 10,000 by 100 (with condition number 10^8 and residual norm 10^{-4}). Then, we generate a sparse sign embedding of dimension d = 400 (corresponding to a distortion of roughly \varepsilon \approx 1/2). Then, we compute the sketch-and-solve solution and, as reference, a “direct” solution by MATLAB’s \.

We compare the quality of the sketch-and-solve solution to the direct solution, using both the residual and forward error:

fprintf('Residuals: sketch-and-solve %.2e, direct %.2e, optimal %.2e\n',...
           norm(b-A*sketch_and_solve), norm(b-A*direct), norm(r))
fprintf('Forward errors: sketch-and-solve %.2e, direct %.2e\n',...
           norm(x-sketch_and_solve), norm(x-direct))

Here’s the output:

Residuals: sketch-and-solve 1.13e-04, direct 1.00e-04, optimal 1.00e-04
Forward errors: sketch-and-solve 1.06e+03, direct 8.08e-07

The sketch-and-solve solution has a residual norm of 1.13\times 10^{-4}, close to the direct method’s residual norm of 1.00\times 10^{-4}. However, the forward error of sketch-and-solve is 1\times 10^3, nine orders of magnitude larger than the direct method’s forward error of 8\times 10^{-7}.

Does sketch-and-solve work? Ultimately, it’s a question of what kind of accuracy you need for your application. If a small-enough residual is all that’s needed, then sketch-and-solve is perfectly adequate. If small forward error is needed, sketch-and-solve can be quite bad.

One way sketch-and-solve can be improved is by increasing the sketching dimension d and lowering the distortion \varepsilon. Unfortunately, improving the distortion of the sketch is expensive. Because of the relation d \approx k /\varepsilon^2, decreasing the distortion by a factor of ten requires increasing the sketching dimension d by a factor of one hundred! Thus, sketch-and-solve is really only appropriate when a fairly large distortion \varepsilon is acceptable.

Iterative Sketching: Combining Sketching with Iteration

Sketch-and-solve is a fast way to get a low-accuracy solution to a least-squares problem. But it’s not the only way to use sketching for least-squares. One can also use sketching to obtain high-accuracy solutions by combining sketching with an iterative method.

There are many iterative methods for least-squares problems. Iterative methods generate a sequence of approximate solutions x_1,x_2,\ldots that we hope will converge at a rapid rate to the true least-squares solution x_\star.

To use sketching to solve least-squares problems iteratively, we can use the following observation:

If S is a sketching matrix for \mathsf{E} = \operatorname{col}(A), then (SA)^\top SA \approx A^\top A.

Therefore, if we compute a QR factorization

    \[SA = QR,\]

then

    \[A^\top A \approx (SA)^\top (SA) = R^\top Q^\top Q R = R^\top R.\]

Notice that we used the fact that Q^\top Q = I since Q has orthonormal columns. The conclusion is that R^\top R \approx A^\top A.

Let’s use the approximation R^\top R \approx A^\top A to solve the least-squares problem iteratively. Start off with the normal equations7As I’ve described in a previous post, it’s generally inadvisable to solve least-squares problems using the normal equations. Here, we’re just using the normal equations as a conceptual tool to derive an algorithm for solving the least-squares problem.

(5)   \[(A^\top A)x = A^\top b. \]

We can obtain an approximate solution to the least-squares problem by replacing A^\top A by R^\top R in (5) and solving. The resulting solution is

    \[x_0 = R^{-1} (R^{-\top}(A^\top b)).\]

This solution x_0 will typically not be a good solution to the least-squares problem (2), so we need to iterate. To do so, we’ll try and solve for the error x - x_0. To derive an equation for the error, subtract A^\top A x_0 from both sides of the normal equations (5), yielding

    \[(A^\top A)(x-x_0) = A^\top (b-Ax_0).\]

Now, to solve for the error, substitute R^\top R for A^\top A again and solve for x, obtaining a new approximate solution x_1:

    \[x\approx x_1 \coloneqq x_0 + R^{-1}(R^{-\top}(A^\top(b-Ax_0))).\]

We can now go another step: Derive an equation for the error x-x_1, approximate A^\top A \approx R^\top R, and obtain a new approximate solution x_2. Continuing this process, we obtain an iteration

(6)   \[x_{i+1} = x_i + R^{-1}(R^{-\top}(A^\top(b-Ax_i))).\]

This iteration is known as the iterative sketching method.8The name iterative sketching is for historical reasons. Original versions of the procedure involved taking a fresh sketch S_iA = Q_iR_i at every iteration. Later, it was realized that a single sketch SA suffices, albeit with a slower convergence rate. Typically, only having to sketch and QR factorize once is worth the slower convergence rate. If the distortion is small enough, this method converges at an exponential rate, yielding a high-accuracy least squares solution after a few iterations.
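
As a concrete illustration, here is a minimal MATLAB sketch of the iteration (6), assuming A, b, and a sketching matrix S have already been generated and using a hypothetical iteration count num_iters:

[~, R] = qr(S*A, 0); % economy QR; R'*R approximates A'*A
x = R \ (R' \ (A'*b)); % initial iterate x_0
for i = 1:num_iters
    r = A' * (b - A*x); % right-hand side of the error equation
    x = x + R \ (R' \ r); % update x_{i+1} = x_i + R^{-1}(R^{-T} r)
end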

Let’s apply iterative sketching to the example we considered above. We show the forward error of the sketch-and-solve and direct methods as horizontal dashed purple and red lines. Iterative sketching begins at roughly the forward error of sketch-and-solve, with the error decreasing at an exponential rate until it reaches that of the direct method over the course of fourteen iterations. For this problem, at least, iterative sketching gives high-accuracy solutions to the least-squares problem!

To summarize, we’ve now seen two very different ways of using sketching:

  • Sketch-and-solve. Sketch the data A and b and solve the sketched least-squares problem (3). The resulting solution \hat{x} is cheap to obtain, but may have low accuracy.
  • Iterative sketching. Sketch the matrix A and obtain an approximation R^\top R = (SA)^\top (SA) to A^\top A. Use the approximation R^\top R to produce a sequence of better-and-better least-squares solutions x_i by the iteration (6). If we run for enough iterations q, the accuracy of the iterative sketching solution x_q can be quite high.

By combining sketching with more sophisticated iterative methods such as conjugate gradient and LSQR, we can get an even faster-converging least-squares algorithm, known as sketch-and-precondition. Here’s the same plot from above with sketch-and-precondition added; we see that sketch-and-precondition converges even faster than iterative sketching does!

“Does sketching work?” Even for a simple problem like least-squares, the answer is complicated:

A direct use of sketching (i.e., sketch-and-solve) leads to a fast, low-accuracy solution to least-squares problems. But sketching can achieve much higher accuracy for least-squares problems by combining sketching with an iterative method (iterative sketching and sketch-and-precondition).

We’ve focused on least-squares problems in this section, but these conclusions can hold more generally. If “sketching doesn’t work” in your application, maybe it would if it were combined with an iterative method.

Just How Accurate Can Sketching Be?

We left our discussion of sketching-plus-iterative-methods in the previous section on a positive note, but there is one last lingering question that remains to be answered. We stated that iterative sketching (and sketch-and-precondition) converge at an exponential rate. But our computers store numbers to only so much precision; in practice, the accuracy of an iterative method has to saturate at some point.

An (iterative) least-squares solver is said to be forward stable if, when run for a sufficient number q of iterations, the final accuracy \norm{x_q - x_\star} is comparable to the accuracy of a standard direct method for the least-squares problem like MATLAB’s \ command or Python’s scipy.linalg.lstsq. Forward stability is not about speed or rate of convergence but about the maximum achievable accuracy.

The stability of sketch-and-precondition was studied in a recent paper by Meier, Nakatsukasa, Townsend, and Webb. They demonstrated that, with the initial iterate x_0 = 0, sketch-and-precondition is not forward stable. The maximum achievable accuracy was worse than standard solvers by orders of magnitude! Maybe sketching doesn’t work after all?

Fortunately, there is good news:

  • The iterative sketching method is provably forward stable. This result is shown in my newly released paper; check it out if you’re interested!
  • If we use the sketch-and-solve method as the initial iterate x_0 = \hat{x} for sketch-and-precondition, then sketch-and-precondition appears to be forward stable in practice. No theoretical analysis supporting this finding is known at present.9For those interested, neither iterative sketching nor sketch-and-precondition are backward stable, which is a stronger stability guarantee than forward stability. Fortunately, forward stability is a perfectly adequate stability guarantee for many—but not all—applications.

These conclusions are pretty nuanced. To see what’s going on, it can be helpful to look at a graph:10For another randomly generated least-squares problem of the same size with condition number 10^{10} and residual 10^{-6}.

The performance of different methods can be summarized as follows: Sketch-and-solve can have very poor forward error. Sketch-and-precondition with the zero initialization x_0 = 0 is better, but still much worse than the direct method. Iterative sketching and sketch-and-precondition with x_0 = \hat{x} fare much better, eventually achieving an accuracy comparable to the direct method.

Put more simply, appropriately implemented, sketching works after all!

Conclusion

Sketching is a computational tool, just like the fast Fourier transform or the randomized SVD. Sketching can be used effectively to solve some problems. But, like any computational tool, sketching is not a silver bullet. Sketching allows you to dimensionality-reduce matrices and vectors, but it distorts them by an appreciable amount. Whether or not this distortion is something you can live with depends on your problem (how much accuracy do you need?) and how you use the sketch (sketch-and-solve or with an iterative method).

The Hard Way to Prove Jensen’s Inequality

In this post, I want to discuss a very beautiful piece of mathematics I stumbled upon recently. As a warning, this post will be more mathematical than most, but I will still try and sand off the roughest mathematical edges. This post is adapted from a much more comprehensive post by Paata Ivanishvili. My goal is to distill the main idea to its essence, deferring the stochastic calculus until it cannot be avoided.

Jensen’s inequality is one of the most important results in probability.

Jensen’s inequality. Let X be a (real) random variable and f:\real\to\real a convex function such that both \mathbb{E} X and \mathbb{E} f(X) are defined. Then f(\mathbb{E}X) \le \mathbb{E} [f(X)].

Here is the standard proof. A convex function has supporting lines. That is, at a point a \in \real, there exists a slope m such that m(x-a) + f(a) \le f(x) for all x \in \real. Invoke this result at a = \mathbb{E} X and x = X and take expectations to conclude that

    \[\mathbb{E}[m(X - \mathbb{E}X) + f(\mathbb{E}X)] = f(\mathbb{E}X) \le \mathbb{E} [f(X)].\]

In this post, I will outline a proof of Jensen’s inequality which is much longer and more complicated. Why do this? This more difficult proof illustrates an incredibly powerful technique for proving inequalities, interpolation. The interpolation method can be used to prove a number of difficult and useful inequalities in probability theory and beyond. As an example, at the end of this post, we will see the Gaussian Jensen inequality, a striking generalization of Jensen’s inequality with many applications.

The idea of interpolation is as follows: Suppose I wish to prove A_0 \le A_1 for two numbers A_0 and A_1. This may be hard to do directly. With the interpolation method, I first construct a family of numbers A_t, 0 \le t \le 1, such that A_{t = 0} = A_0 and A_{t=1} = A_1 and show that (A_t : 0\le t\le 1) is (weakly) increasing in t. This is typically accomplished by showing the derivative is nonnegative:

    \[\frac{d}{dt} A_t \ge 0.\]

To prove Jensen’s inequality by interpolation, we shall begin with a special case. As often in probability, the simplest case is that of a Gaussian random variable.

Jensen’s inequality for a Gaussian. Let X be a standard Gaussian random variable (i.e., mean-zero and variance 1) and let f:\real\to\real be a thrice-differentiable convex function satisfying a certain technical condition.1Specifically, we assume the regularity condition \mathbb{E} (f'''(G))^p < +\infty for some p > 1 for any Gaussian random variable G. Then

    \[f(0) \le \mathbb{E} [f(X)].\]

Note that the conclusion is exactly Jensen’s inequality, as we have assumed X is mean-zero.

The difficulty with any proof by interpolation is to come up with the “right” A_t. For us, the “right” answer will take the form

    \[A_t = \mathbb{E} [ f(X_t) ],\]

where X_0 = 0 starts with no randomness and X_1 = X is our standard Gaussian. To interpolate between these extremes, we increase the variance linearly from 0 to 1. Thus, we define

    \[A_t = \mathbb{E} [ f(X_t)] \quad \text{where $X_t\sim\mathcal{N}(0,t)$}.\]

Here, and throughout, \mathcal{N}(0,v) denotes a Gaussian random variable with zero mean and variance v.

Let’s compute the derivative of A_t. To do this, let \delta > 0 denote a small parameter which we will later send to zero. For us, the key fact will be that a \mathcal{N}(0,t+\delta) random variable can be realized as a sum of independent \mathcal{N}(0,t) and \mathcal{N}(0,\delta) random variables. Therefore, we write

    \[X_{t+\delta} = X_t + \Delta \quad \text{where $\Delta \sim \mathcal{N}(0,\delta)$ is independent of $X_t$.}\]

We now evaluate f(X_t+\Delta) by using Taylor’s formula

(1)   \[f(X_t+\Delta) = f(X_t) + f'(X_t)\Delta + \frac{1}{2} f''(X_t) \Delta^2 + \frac{1}{6} f'''(\xi) \Delta^3, \]

where \xi lies between X_t and X_t+\Delta. Now, take expectations,

    \[\mathbb{E}[ f(X_t+\Delta)]=\mathbb{E}[f(X_t)] + \mathbb{E}[f'(X_t)\Delta] + \frac{1}{2} \mathbb{E}[f''(X_t)] \mathbb{E}[\Delta^2] + \underbrace{\frac{1}{6} \mathbb{E}[f'''(\xi) \Delta^3]}_{:=\mathrm{Rem}(\delta)}.\]

The random variable \Delta is independent of X_t and has mean zero and variance \delta. Thus, the first-order term vanishes, \mathbb{E}[f'(X_t)\Delta] = \mathbb{E}[f'(X_t)]\,\mathbb{E}[\Delta] = 0, and we obtain

    \[\mathbb{E} [f(X_t+\Delta)]=\mathbb{E}[f(X_t)] + \delta \frac{1}{2} \mathbb{E}[f''(X_t)]  + \mathrm{Rem}(\delta).\]

As we show below, the remainder term \mathrm{Rem}(\delta)/\delta vanishes as \delta\to 0. Thus, we can rearrange this expression to compute the derivative:

    \[\frac{d}{dt} A_t = \lim_{\delta \downarrow 0} \frac{\mathbb{E}[ f(X_t+\Delta)]-\mathbb{E}[f(X_t)]}{\delta} = \lim_{\delta \downarrow 0} \left( \frac{1}{2} \mathbb{E}[f''(X_t)] + \frac{\mathrm{Rem}(\delta)}{\delta} \right) =  \frac{1}{2} \mathbb{E}[f''(X_t)].\]

The second derivative of a convex function is nonnegative: f''(x) \ge 0 for every x. Therefore,

    \[\frac{d}{dt} A_t \ge 0 \quad \text{for all } t\in [0,1].\]

Jensen’s inequality is proven! In fact, we’ve proven the stronger version of Jensen’s inequality:

    \[\mathbb{E} f(X) = f(0) + \frac{1}{2} \int_0^1 \mathbb{E} [f''(X_t)] \, dt.\]

This strengthened version can yield improvements. For instance, if f is \beta-smooth

    \[f''(x) \le \beta \quad \text{for every } x \in \real,\]

then we have

    \[f(0) \le \mathbb{E} f(X) \le f(0) + \frac{1}{2}\beta.\]

This inequality isn’t too hard to prove directly, but it does show that we’ve obtained something more than the simple proof of Jensen’s inequality.
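
As an aside, the strengthened identity is easy to check numerically. Here is a quick Monte Carlo sanity check in MATLAB for the (illustrative) choice f(x) = x^4; the two printed values should approximately agree:

f = @(x) x.^4; fpp = @(x) 12*x.^2; % convex test function and its second derivative
ts = linspace(0, 1, 101);
lhs = mean(f(randn(1e6,1))); % E[f(X)] for X ~ N(0,1)
Efpp = arrayfun(@(t) mean(fpp(sqrt(t)*randn(1e5,1))), ts); % E[f''(X_t)] with X_t ~ N(0,t)
rhs = f(0) + 0.5 * trapz(ts, Efpp); % right-hand side of the interpolation formula
fprintf('E[f(X)] = %.2f, interpolation formula = %.2f\n', lhs, rhs)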

Analyzing the Remainder Term

Let us quickly check that the remainder term \mathrm{Rem}(\delta)/\delta vanishes as \delta \to 0. As an exercise, you can verify that our technical regularity condition implies \mathbb{E} |f'''(\xi)|^p < +\infty. Thus, by Hölder’s inequality with q chosen to be p’s Hölder conjugate (1/p + 1/q = 1), we obtain

    \[\frac{|\mathrm{Rem}(\delta)|}{\delta} = \frac{|\mathbb{E}[f'''(\xi) \Delta^3]|}{6\delta} \le  \frac{(\mathbb{E} |f'''(\xi)|^p)^{1/p} \, (\mathbb{E} |\Delta|^{3q})^{1/q}}{6\delta}.\]


One can show that (\mathbb{E} |\Delta|^{3q})^{1/q} \le C(q) \delta^{3/2} where C(q) is a function of q alone. Therefore, |\mathrm{Rem}(\delta)|/\delta \le \mathrm{constant} \cdot \delta^{1/2} \to 0 as \delta \downarrow 0.

What’s Really Going On Here?

In our proof, we use a family of random variables X_t \sim \mathcal{N}(0,t), defined for each 0\le t \le 1. Rather than treating these quantities as independent, we can think of them as a collective, comprising a random function t \mapsto X_t known as a Brownian motion.

The Brownian motion is a very natural way of interpolating between a constant \mu and a Gaussian with mean \mu.2The Ornstein–Uhlenbeck process is another natural way of interpolating between a random variable and a Gaussian.

There is an entire subject known as stochastic calculus which allows us to perform computations with Brownian motion and other random processes. The rules of stochastic calculus can seem bizarre at first. For a function f of a real number x, we often write

    \[df = f'(x) \, dx\]

For a function f(X_t) of a Brownian motion, the analog is Itô’s formula

    \[df = f'(X_t) \, dX_t + \frac{1}{2} f''(X_t) \, dt.\]

While this might seem odd at first, this formula may seem more sensible if we compare with (1) above. The idea, very roughly, is that for an increment of the Brownian motion dX_t over a time interval dt, (dX_t)^2 is a random variable with mean dt, so we cannot drop the second term in the Taylor series, even up to first order in dt. Fully diving into the subtleties of stochastic calculus is far beyond the scope of this short post. Hopefully, the rest of this post, which outlines some extensions of our proof of Jensen’s inequality that require more stochastic calculus, will serve as an enticement to learn more about this beautiful subject.

Proving Jensen by Interpolation

For the rest of this post, we will be less careful with mathematical technicalities. We can use the same idea that we used to prove Jensen’s inequality for a Gaussian random variable to prove Jensen’s inequality for any random variable Y:

    \[f(\mathbb{E}Y) \le \mathbb{E}[f(Y)].\]

Here is the idea of the proof.

First, realize that we can write any random variable Y as a function of a standard Gaussian random variable X. Indeed, letting F_X and F_Y denote the cumulative distribution functions of X and Y, one can show that

    \[g(X) := \inf \{ \alpha \in \real : F_Y(\alpha) \ge F_X(X) \}\]

has the same distribution as Y.
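
As a concrete example of this coupling, here is a short MATLAB illustration (hedged: it uses normcdf and expinv from the Statistics and Machine Learning Toolbox) that generates an exponential random variable as a function of a standard Gaussian:

X = randn(1e6, 1); % standard Gaussian samples
Y = expinv(normcdf(X), 1); % g(X) = F_Y^{-1}(F_X(X)); Y has the Exp(1) distribution
fprintf('mean %.3f (should be ~1), variance %.3f (should be ~1)\n', mean(Y), var(Y))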

Now, as before, we can interpolate between \mathbb{E} Y and Y using a Brownian motion. As a first idea, we might try

    \[A_t \stackrel{?}{=} \mathbb{E} [f(g(X_t))].\]

Unfortunately, this choice of A_t does not work! Indeed, A_0 = \mathbb{E}[f(g(0))] = f(g(0)) does not even equal f(\mathbb{E} Y)! Instead, we must define

    \[A_t = \mathbb{E} [f(\mathbb{E}[g(X_1) \mid X_t])].\]

We define A_t using the conditional expectation of the final value g(X_1) conditional on the Brownian motion X_t at an earlier time t. Using a bit of elbow grease and stochastic calculus, one can show that

    \[\frac{d}{dt} A_t \ge 0 \quad \text{for all }t\in [0,1].\]

This provides a proof of Jensen’s inequality in general by the method of interpolation.

Gaussian Jensen Inequality

Now, we’ve come to the real treat, the Gaussian Jensen inequality. In the last section, we saw the sketch of a proof of Jensen’s inequality using interpolation. While it is cool that this proof is possible, we haven’t learned anything new, since we can prove Jensen’s inequality in other ways. The Gaussian Jensen inequality provides an application of this technique which is hard to prove by other means. This section, in particular, is cribbing quite heavily from Paata Ivanishvili’s excellent post on the topic.

Here’s the big question:

If Y_1,\ldots,Y_n are “somewhat dependent”, for which functions does the multivariate Jensen’s inequality

(\star)   \[f(\mathbb{E} Y_1,\ldots,\mathbb{E}Y_n) \le \mathbb{E} [f(Y_1,\ldots,Y_n)] \]

hold?

Considering extreme cases, if Y_1,\ldots,Y_n are entirely dependent, then we would only expect (\star) to hold when f is convex. But if Y_1,\ldots,Y_n are independent, then we can apply Jensen’s inequality to each coordinate one at a time to deduce

    \[\text{($\star$) holds if $f$ is convex in each coordinate, separately.}\]

We would like a result which interpolates between extremes {fully dependent, fully convex} and {independent, separately convex}. The Gaussian Jensen inequality provides exactly this tool.

As in the previous section, we can generate arbitrary random variables Y_1,\ldots,Y_n as functions g(X_1),\ldots,g(X_n) of Gaussian random variables X_1,\ldots,X_n. We will use the covariance matrix \Sigma of the Gaussian random variables X_1,\ldots,X_n as our measure of the dependence of the random variables Y_1,\ldots,Y_n. With this preparation in place, we have the following result:

Gaussian Jensen inequality. The conclusion of Jensen’s inequality

(2)   \[f(\mathbb{E}g_1(X_1),\ldots,\mathbb{E}g_n(X_n)) \le \mathbb{E} [f(g_1(X_1),\ldots,g_n(X_n))]\]

holds for all test functions g_1,\ldots,g_n if and only if

    \[\Sigma \circ \nabla^2 f(x) \text{ is positive semidefinite} \quad \text{for all $x \in \real^n$}.\]

Here, \nabla^2 f(x) is the Hessian matrix at x and \circ denotes the entrywise product of matrices.

This is a beautiful result with striking consequences (see Ivanishvili’s post). The proof is essentially the same as the proof of Jensen’s inequality by interpolation, with a little additional bookkeeping.

Let us confirm this result respects our extreme cases. In the case where X_1=\cdots=X_n are equal (and variance one), \Sigma is a matrix of all ones and \Sigma \circ \nabla^2 f(x) = \nabla^2 f(x) for all x. Thus, the Gaussian Jensen inequality states that (2) holds if and only if \nabla^2 f(x) is positive semidefinite for every x, which occurs precisely when f is convex.

Next, suppose that X_1,\ldots,X_n are independent with variance one. Then \Sigma is the identity matrix and

    \[\Sigma \circ \nabla^2 f(x) = \mathrm{diag} \left( \frac{\partial^2 f}{\partial x_i^2} : i=1,\ldots,n \right).\]

A diagonal matrix is positive semidefinite if and only if its entries are nonnegative. Thus, (2) holds if and only if each of f‘s diagonal second derivatives is nonnegative, \partial_{x_i}^2 f \ge 0: this is precisely the condition for f to be separately convex in each argument.

There’s much more to be said about the Gaussian Jensen inequality, and I encourage you to read Ivanishvili’s post to see the proof and applications. What I find so compelling about this result (so compelling that I felt the need to write this post) is how interpolation and stochastic calculus can be used to prove inequalities which don’t feel like stochastic calculus problems. The Gaussian Jensen inequality is a statement about functions of dependent Gaussian random variables; there’s nothing dynamic happening. Yet, to prove this result, we inject dynamics into the problem, viewing the two sides of our inequality as endpoints of a random process connecting them. This is such a beautiful idea that I couldn’t help but share it.

Stochastic Trace Estimation

I am delighted to share that the paper XTrace: Making the Most of Every Sample in Stochastic Trace Estimation, by me, Joel A. Tropp, and Robert J. Webber, has recently been released as a preprint on arXiv. In it, we consider the implicit trace estimation problem:

Implicit trace estimation problem: Given access to a square matrix A via the matrix–vector product operation \omega \mapsto A\omega, estimate its trace \tr A = \sum_{i=1}^n A_{ii}.

Algorithms for this task have many uses such as log-determinant computations in machine learning, partition function calculations in statistical physics, and generalized cross validation for smoothing splines. I described another application to counting triangles in a large network in a previous blog post.

Our paper presents new trace estimators XTrace and XNysTrace which are highly efficient, producing accurate trace approximations using a small budget of matrix–vector products. In addition, these algorithms are fast to run and are supported by theoretical results which explain their excellent performance. I really hope that you will check out the paper to learn more about these estimators!

For the rest of this post, I’m going to talk about the most basic stochastic trace estimation algorithm, the Girard–Hutchinson estimator. This seemingly simple algorithm exhibits a number of nuances and forms the backbone for more sophisticated trace estimates such as Hutch++, Nyström++, XTrace, and XNysTrace. Toward the end, this blog post will be fairly mathematical, but I hope that the beginning will be accessible to all.

Girard–Hutchinson Estimator: The Basics

The Girard–Hutchinson estimator for the trace of a square matrix A is

(1)   \[\hat{\tr} = \frac{1}{m} \sum_{i=1}^m \omega_i^* A \omega_i. \]

Here, \omega_1,\ldots,\omega_m are random vectors, usually chosen to be statistically independent, and {}^* denotes the conjugate transpose of a vector or matrix. The Girard–Hutchinson estimator only depends on the matrix A through the matrix–vector products A\omega_1,\ldots,A\omega_m.
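
For concreteness, here is a minimal MATLAB sketch of the estimator (1) with real Gaussian test vectors (a hedged illustration: the choice m = 100 is arbitrary, and other isotropic distributions work just as well):

n = size(A, 1); m = 100; % matrix size and number of matrix-vector products
Omega = randn(n, m); % test vectors omega_1,...,omega_m as columns
trest = sum(sum(Omega .* (A*Omega))) / m; % (1/m) * sum_i omega_i' * A * omega_i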

Unbiasedness

Provided the random vectors are isotropic

(2)   \[\mathbb{E} [\omega_i\omega_i^*] = I, \]

the Girard–Hutchinson estimator is unbiased:

(3)   \[\mathbb{E} [\hat{\tr}] = \tr A.\]

Let us confirm this claim in some detail. First, we use linearity of expectation to evaluate

(4)   \[\mathbb{E} [\hat{\tr}] = \mathbb{E} \left[ \frac{1}{m} \sum_{i=1}^m \omega_i^*A\omega_i \right] = \frac{1}{m} \sum_{i=1}^m \mathbb{E} \left[ \omega_i^* A \omega_i\right]. \]

Therefore, to prove that \mathbb{E} [\hat{\tr}] = \tr A, it is sufficient to prove that \mathbb{E} \left[\omega_i^*A\omega_i\right] = \tr A for each i.

When working with traces, there are two tricks that solve 90% of derivations. The first trick is that, if we view a number as a 1\times 1 matrix, then a number equals its trace, x = \tr x. The second trick is the cyclic property: For a k\times p matrix B and a p\times k matrix C, we have \tr (BC) = \tr (CB). The cyclic property should be handled with care when one works with a product of three or more matrices. For instance, we have

    \[\tr[BCD] = \tr[(BC)D] = \tr[D(BC)] = \tr[DBC].\]

However,

    \[\tr [BCD] \ne \tr[CBD] \quad \text{in general}.\]

One should think of the matrix product BCD as beads on a closed loop of string. One can move the last bead D to the front of the other two, \tr [BCD] = \tr[DBC], but not interchange two beads, \tr[BCD] \ne \tr[CBD].
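
Since this cyclic-versus-swapping distinction is easy to get wrong, here is a two-line MATLAB check (purely illustrative, with arbitrary random matrices):

B = randn(4); C = randn(4); D = randn(4); % arbitrary square matrices
disp(trace(B*C*D) - trace(D*B*C)) % essentially zero: cyclic shifts are allowed
disp(trace(B*C*D) - trace(C*B*D)) % generally nonzero: swapping two factors is not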

With this trick in hand, let’s return to proving that \mathbb{E} \left[\omega_i^*A\omega_i\right] = \tr A for every i. Apply our two tricks:

    \[\mathbb{E} \left[\omega_i^*A\omega_i\right] = \mathbb{E} \tr \left[\omega_i^*A\omega_i\right] = \mathbb{E} \tr \left[A\omega_i\omega_i^*\right].\]

The expectation is a linear operation and the matrix A is non-random, so we can bring the expectation into the trace as

    \[\mathbb{E} \left[\omega_i^*A\omega_i\right] = \mathbb{E} \tr \left[A\omega_i\omega_i^*\right] = \tr(A \mathbb{E}[\omega_i\omega_i^*] ).\]

Invoke the isotropy condition (2) and conclude:

    \[\mathbb{E} \left[\omega_i^*A\omega_i\right] = \tr(A \mathbb{E}[\omega_i\omega_i^*] ) = \tr(A\cdot I) = \tr A.\]

Plugging this into (4) confirms the unbiasedness claim (3).

Variance

Continue to assume that the \omega_i’s are isotropic (2) and now assume that \omega_1,\ldots,\omega_m are independent. By independence, the variance can be written as

    \[\Var(\hat{\tr}) = \frac{1}{m^2} \sum_{i=1}^m \Var(\omega_i^*A\omega_i).\]

Assuming that \omega_1,\ldots,\omega_m are identically distributed \omega_1,\ldots,\omega_m \sim \omega, we then get

    \[\Var(\hat{\tr}) = \frac{1}{m} \Var(\omega^*A\omega).\]

The variance decreases like 1/m, which is characteristic of Monte Carlo-type algorithms. Since \hat{\tr} is unbiased (i.e., (3)), this means that the mean-square error decays like 1/m, so the average error (more precisely, the root-mean-square error) decays like

    \[\left| \hat{\tr} - \tr A \right| \lessapprox \frac{\mathrm{const}}{\sqrt{m}}.\]

This type of convergence is very slow. If I want to decrease the error by a factor of 10, I must do 100\times the work!

Variance-reduced trace estimators like Hutch++ and our new trace estimator XTrace improve the rate of convergence substantially. Even in the worst case, Hutch++ and XTrace reduce the variance at a rate 1/m^2 and the (root-mean-square) error at a rate 1/m:

    \[\Var(\hat{\tr}_{\text{H++ or X}}) \le \frac{\mathrm{const}}{m^2},\quad \left| \hat{\tr}_{\text{H++ or X}} - \tr A \right| \lessapprox \frac{\mathrm{const}}{m}.\]

For matrices with rapidly decreasing singular values, the variance and error can decrease much faster than this.

Variance Formulas

As the rate of convergence for the Girard–Hutchinson estimator is so slow, it is imperative to pick a distribution on test vectors \omega that makes the variance of the single–sample estimate \omega^*A\omega as low as possible. In this section, we will provide several explicit formulas for the variance of the Girard–Hutchinson estimator. Derivations of these formulas will appear at the end of this post. These variance formulas help illuminate the benefits and drawbacks of different test vector distributions.

To express the formulas, we will need some notation. For a complex number z = a + bi we use \Re(z) = a and \Im(z) = b to denote the real and imaginary parts. The variance of a random complex number z is

    \[\Var(z) := \mathbb{E} |z - \mathbb{E} z|^2 = \Var(\Re z) + \Var(\Im z).\]

The Frobenius norm of a matrix A is

    \[\left\|A\right\|_{\rm F}^2 = \sum_{i,j} |A_{ij}|^2.\]

If A is real symmetric or complex Hermitian with (real) eigenvalues \lambda_1,\ldots,\lambda_n, we have

(5)   \[\left\|A\right\|_{\rm F}^2 = \sum_{i=1}^n \lambda_i^2. \]

A^\top denotes the ordinary transpose of A and A^* denotes the conjugate transpose of A.

Real-Valued Test Vectors

We first focus on real-valued test vectors \omega. Since \omega is real, we can use the ordinary transpose {}^\top rather than the conjugate transpose {}^*. Since \omega^\top A\omega is a number, it is equal to its own transpose:

    \[\omega^\top A \omega = (\omega^\top A \omega)^\top = \omega^\top A^\top \omega.\]

Therefore,

    \[\omega^\top A\omega = \frac{\omega^\top A \omega + \omega^\top A^\top \omega}{2} = \omega^\top \left( \frac{A + A^\top}{2} \right)\omega.\]

The Girard–Hutchinson trace estimator applied to A is the same as the Girard–Hutchinson estimator applied to the symmetric part of A, (A+A^\top)/2.

For the following results, assume A is symmetric, A = A^\top.

  1. Real Gaussian: \omega_1,\ldots,\omega_m are independent standard normal random vectors.

        \[\Var(\omega^\top A\omega) = 2 \left\|A\right\|_{\rm F}^2.\]

  2. Uniform signs (Rademachers): \omega_1,\ldots,\omega_m are independent random vectors with uniform \pm 1 coordinates.

        \[\Var(\omega^\top A \omega) = 2\sum_{i\ne j} |A_{ij}|^2.\]

  3. Real sphere: Assume \omega_1,\ldots,\omega_m are uniformly distributed on the real sphere of radius \sqrt{n}: \omega \sim \text{Uniform} \{x\in \mathbb{R}^n : x^\top x = n\}.

        \[\Var(\omega^\top A\omega) = \frac{2n}{n+2} \left( \left\|A\right\|_{\rm F}^2 - \frac{1}{n} |\tr A|^2 \right).\]

These formulas continue to hold for nonsymmetric A by replacing A by its symmetric part (A+A^\top)/2 on the right-hand sides of these variance formulas.
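
These formulas are easy to verify by Monte Carlo. Here is a hedged MATLAB check of the real Gaussian and real sphere formulas on a small random symmetric matrix (the empirical and exact values should agree to within a few percent):

n = 50; A = randn(n); A = (A + A')/2; lambda = eig(A); % random symmetric test matrix
ntrials = 1e5;
G = randn(n, ntrials); % real Gaussian test vectors (columns)
W = sqrt(n) * G ./ vecnorm(G); % uniform on the sphere of radius sqrt(n)
fprintf('Gaussian: empirical %.0f, formula %.0f\n', ...
    var(sum(G .* (A*G))), 2*sum(lambda.^2))
fprintf('Sphere:   empirical %.0f, formula %.0f\n', ...
    var(sum(W .* (A*W))), 2*n/(n+2)*(sum(lambda.^2) - sum(lambda)^2/n))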

Complex-Valued Test Vectors

We now move our focus to complex-valued test vectors \omega. As a rule of thumb, one should typically expect that the variance for complex-valued test vectors applied to a real symmetric matrix A is about half that of the natural real counterpart; for example, complex Gaussians give about half the variance of real Gaussians.

A square complex matrix has a Cartesian decomposition

    \[A = A^{\rm H} + i A^{\rm SH}\]

where

    \[A^{\rm H} = \frac{A+A^*}{2} ,\quad A^{\rm SH} = \frac{A - A^*}{2i}\]

denote the Hermitian and skew-Hermitian parts of A. Similar to how the imaginary part of a complex number is real, the skew-Hermitian part of a complex matrix is Hermitian (and i A^{\rm SH} is skew-Hermitian). Since A^{\rm H} and A^{\rm SH} are both Hermitian, we have

    \[\Re(\omega^* A\omega) = \omega^* A^{\rm H} \omega, \quad \Im (\omega^* A \omega) = \omega^* A^{\rm SH} \omega.\]

Consequently, the variance of \omega^*A \omega can be broken into Hermitian and skew-Hermitian parts:

    \[\Var(\omega^* A\omega) = \Var(\omega^* A^{\rm H}\omega) + \Var(\omega^* A^{\rm SH}\omega).\]

For this reason, we will state the variance formulas only for Hermitian A, with the formula for general A following from the Cartesian decomposition.

For the following results, assume A is Hermitian, A = A^*.

  1. Complex Gaussian: \omega_1,\ldots,\omega_m are independent standard complex random vectors, i.e., each \omega_i has iid entries distributed as (g_1+ig_2)/\sqrt{2} for g_1,g_2 standard normal random variables.

        \[\Var(\omega^* A\omega) = \left\|A\right\|_{\rm F}^2.\]

  2. Uniform phases (Steinhauses): \omega_1,\ldots,\omega_m are independent random vectors whose entries are uniform on the complex unit circle \{ z \in \complex : |z| = 1 \}.

        \[\Var(\omega^* A \omega) = \sum_{i\ne j} |A_{ij}|^2.\]

  3. Complex sphere: Assume \omega_1,\ldots,\omega_m are uniformly distributed on the complex sphere of radius \sqrt{n}: \omega \sim \text{Uniform} \{x\in \complex^n : x^* x = n\}.

        \[\Var(\omega^* A\omega) = \frac{n}{n+1} \left( \left\|A\right\|_{\rm F}^2 - \frac{1}{n} |\tr A|^2 \right).\]

Optimality Properties

Let us finally address the question of what the best choice of test vectors is for the Girard–Hutchinson estimator. We will state two results with different restrictions on \omega_1,\ldots,\omega_m.

Our first result, due to Hutchinson, is valid for real symmetric matrices with real test vectors.

Optimality (independent test vectors with independent coordinates). If the test vectors \omega_1,\ldots,\omega_m \in \mathbb{R}^n are isotropic (2), independent from each other, and have independent entries, then for any fixed real symmetric matrix A, the minimum variance for \hat{\tr} is obtained when \omega_1,\ldots,\omega_m are populated with random signs (\omega_i)_j \sim \textnormal{Uniform} \{\pm 1\}.

The next optimality results will have real and complex versions. To present the results for \mathbb{R}-valued and \complex-valued test vectors on a unified footing, let \field denote either \mathbb{R} or \complex. We let a \field-Hermitian matrix be either a real symmetric matrix (if \field = \mathbb{R}) or a complex Hermitian matrix (if \field = \complex). Let a \field-unitary matrix be either a real orthogonal matrix (if \field = \mathbb{R}) or a complex unitary matrix (if \field = \complex).

The condition that the vectors \omega_1,\ldots,\omega_m have independent entries is often too restrictive in practice. It rules out, for instance, the case of uniform vectors on the sphere. If we relax this condition, we get a different optimal distribution:

Optimality (independent test vectors). Consider any set \mathscr{A} of \field-Hermitian matrices which is invariant under \field-unitary similarity transformations:

    \[\text{If $A \in \mathscr{A}$ and $U$ is $\field$-unitary, then $U^*AU \in \mathscr{A}$.}\]

Assume that the test vectors \omega_1,\ldots,\omega_m are independent and isotropic (2). The worst-case variance \sup_{A \in \mathscr{A}} \Var(\hat{\tr}) is minimized by choosing \omega_1,\ldots,\omega_m uniformly on the \field-sphere: \omega_1,\ldots,\omega_m \sim \text{Uniform} \{ x \in \field^n : x^*x =n \}.

More simply, if you want your stochastic trace estimator to be effective for a class of inputs \mathscr{A} (closed under \field-unitary similarity transformations) rather than a single input matrix A, then the best distribution is test vectors drawn uniformly from the sphere. Examples of classes of matrices \mathscr{A} include:

  • Fixed eigenvalues. For fixed real eigenvalues \lambda_1,\ldots,\lambda_n \in \mathbb{R}, the set of all \field-Hermitian matrices with these eigenvalues.
  • Density matrices. The class of all trace-one psd matrices.
  • Frobenius norm ball. The class of all \field-Hermitian matrices of Frobenius norm at most 1.

Derivation of Formulas

In this section, we provide derivations of the variance formulas. I have chosen to focus on derivations which are shorter but use more advanced techniques rather than derivations which are longer but use fewer tricks.

Real Gaussians

First assume A is real. Since A is real symmetric, A has an eigenvalue decomposition A = Q\Lambda Q^\top, where Q is orthogonal and \Lambda is a diagonal matrix reporting A‘s eigenvalues. Since the real Gaussian distribution is invariant under orthogonal transformations, \omega^\top A\omega = (Q^\top \omega)^\top \Lambda (Q^\top\omega) has the same distribution as \omega^\top \Lambda \omega. Therefore,

    \[\Var(\omega^\top A \omega) = \Var(\omega^\top \Lambda \omega) = \Var \left( \sum_{i=1}^n \lambda_i \omega_i^2 \right) = \sum_{i=1}^n \lambda_i^2 \Var(\omega_i^2) = 2\sum_{i=1}^n \lambda_i^2 = 2\left\|A\right\|_{\rm F}^2.\]

Here, we used that the variance of a squared standard normal random variable is two.

For a non-real matrix A, we can break the matrix A into its entrywise real and imaginary parts A = \mathfrak{R}(A) + i \, \mathfrak{I}(A). Thus,

    \[\Var(\omega^\top A \omega) = \Var(\omega^\top \mathfrak{R}(A) \omega) + \Var(\omega^\top \mathfrak{I}(A) \omega) = 2\left\|\mathfrak{R}(A)\right\|_{\rm F}^2 + 2\left\|\mathfrak{I}(A)\right\|_{\rm F}^2 = 2\left\|A\right\|_{\rm F}^2.\]

Uniform Signs

First, compute

    \[\omega^\top A \omega - \mathbb{E}[\omega^\top A \omega] = \sum_{i,j=1}^n A_{ij} \omega_i\omega_j - \sum_{i=1}^n A_{ii} = \sum_{i\ne j} A_{ij} \omega_i\omega_j + \sum_{i=1}^n A_{ii}(\omega_i^2-1).\]

For a vector \omega of uniform random signs, we have \omega_i^2 = 1 for every i, so the second sum vanishes. Note that we have assumed A symmetric, so the sum over i\ne j can be replaced by two times the sum over i < j:

    \[\omega^\top A \omega - \mathbb{E}[\omega^\top A \omega] = 2\sum_{i< j} A_{ij} \omega_i\omega_j.\]

Note that \{ \omega_i \omega_j : i < j\} are pairwise independent. As a simple exercise, one can verify that the identity

    \[\Var(a_1 X_1+\cdots+a_kX_k) = |a_1|^2 \Var(X_1) + \cdots + |a_k|^2 \Var(X_k)\]

holds for any pairwise independent family of random variables X_1,\ldots,X_k and numbers a_1,\ldots,a_k. Ergo,

    \begin{align*}\Var(\omega^\top A\omega) &= \Var(\omega^\top A \omega - \mathbb{E}[\omega^\top A \omega]) \\&= \Var\left(\sum_{i< j} 2A_{ij} \omega_i\omega_j\right) \\&= \sum_{i<j} 4 |A_{ij}|^2 \Var(\omega_i\omega_j) \\&= \sum_{i<j} 4 |A_{ij}|^2 \\&= 2 \sum_{i\ne j} |A_{ij}|^2.\end{align*}

In the second-to-last line, we use the fact that \omega_i\omega_j is a uniform random sign, which has variance 1. The final line is a consequence of the symmetry of A.

Uniform on the Real Sphere

The simplest proof I know is by the “camel principle”. Here’s the story (a lightly edited quotation from MathOverflow):

A father left 17 camels to his three sons and, according to the will, the eldest son was to be given a half of the camels, the middle son one-third, and the youngest son one-ninth. The sons did not know what to do since 17 is not evenly divisible into either two, three, or nine parts, but a wise man helped the sons: he added his own camel, the oldest son took 18/2=9 camels, the second son took 18/3=6 camels, the third son took 18/9=2 camels, and the wise man took his own camel and went away.

We are interested in a vector \omega which is uniform on the sphere of radius \sqrt{n}. Performing averages on the sphere is hard, so we add a camel to the problem by “upgrading” \omega to a spherically symmetric vector g which has a random length. We want to pick a distribution for which the computation \Var(g^\top A g) is easy. Fortunately, we already know such a distribution, the Gaussian distribution, for which we already calculated \Var(g^\top A g) = 2\left\|A\right\|_{\rm F}^2.

The Gaussian vector g and the uniform vector \omega on the sphere are related by

    \[g = \sqrt{\frac{a}{n}} \omega,\]

where a is the squared length of the Gaussian vector g. In particular, a has the distribution of the sum of n squared Gaussian random variables, which is known as a \chi^2 random variable with n degrees of freedom.

Now, we take the camel back. Compute the variance of g^\top A g using the chain rule for variance:

    \[\Var(g^\top A g) = \mathbb{E}[\Var(g^\top A g \mid a)] + \Var(\mathbb{E}[g^\top A g \mid a]).\]

Here, \Var(\cdot \mid a) and \mathbb{E}[ \cdot \mid a] denote the conditional variance and conditional expectation with respect to the random variable a. The quick and dirty way of working with these is to treat the random variable a “like a constant” with respect to the conditional variance and expectation.

Plugging in the formula g = \sqrt{a/n} \cdot \omega and treating a “like a constant”, we obtain

    \begin{align*}\Var(g^\top A g) &= \mathbb{E}[\Var(a/n \cdot \omega^\top A \omega \mid a)] + \Var(\mathbb{E}[a/n \cdot \omega^\top A \omega \mid a]) \\&=\mathbb{E}[(a/n)^2\Var(\omega^\top A \omega)] + \Var(a/n \cdot \mathbb{E}[\omega^\top A \omega]) \\&= \frac{1}{n^2} \mathbb{E}[a^2] \cdot \Var(\omega^\top A \omega) + \frac{1}{n^2} \Var(a) |\mathbb{E} [\omega^\top A \omega]|^2.\end{align*}

As we mentioned, a is a \chi^2 random variable with n degrees of freedom and \mathbb{E}[a^2] and \Var(a) are known quantities that can be looked up:

    \[\mathbb{E}[a^2] = n(n+2), \quad \Var(a) = 2n.\]

We know \Var(g^\top A g) = 2\left\|A\right\|_{\rm F}^2 and \mathbb{E} [\omega^\top A \omega] = \tr A. Plugging these all in, we get

    \[2\left\|A\right\|_{\rm F}^2 = \frac{n+2}{n} \Var(\omega^\top A\omega) + \frac{2}{n} |\tr A|^2.\]

Rearranging, we obtain

    \[\Var(\omega^\top A\omega) = \frac{2n}{n+2} \left( \left\|A\right\|_{\rm F}^2 - \frac{1}{n}|\tr A|^2\right).\]

Complex Gaussians

The trick is the same as for real Gaussians. By invariance of complex Gaussian random vectors under unitary transformations, we can reduce to the case where A is a diagonal matrix populated with eigenvalues \lambda_1,\ldots,\lambda_n. Then

    \[\Var(\omega^*A \omega) = \Var \left( \sum_{i=1}^n \lambda_i |\omega_i|^2 \right) = \sum_{i=1}^n \Var(|\omega_i|^2) \lambda_i^2 = \sum_{i=1}^n \lambda_i^2 = \left\|A\right\|_{\rm F}^2.\]

Here, we use the fact that 2|\omega_i|^2 is a \chi^2 random variable with two degrees of freedom, which has variance four; hence \Var(|\omega_i|^2) = 1.

Random Phases

The trick is the same as for uniform signs. A short calculation (remembering that A is Hermitian and thus \overline{A_{ij}} = A_{ji}) reveals that

    \[\Var\left( \omega^* A \omega \right) = \Var \left( \sum_{i<j} 2 \Re(A_{ij} \overline{\omega_i} \omega_j) \right).\]

The random variables \{\overline{\omega_i} \omega_j : i < j\} are pairwise independent so we have

    \[\Var\left( \omega^* A \omega \right) = \Var \left( \sum_{i<j} 2 \Re(A_{ij} \overline{\omega_i} \omega_j) \right) = 4\sum_{i<j} \Var \left( \Re(A_{ij} \overline{\omega_i} \omega_j) \right).\]

Since \overline{\omega}_i \omega_j is uniformly distributed on the complex unit circle, we can assume without loss of generality that A_{ij} = |A_{ij}|. Thus, letting \phi be uniform on the complex unit circle,

    \[\Var\left( \omega^* A \omega \right) = 4\sum_{i<j} \Var \left( |A_{ij}|\Re(\phi) \right) = 4\Var\left( \Re(\phi) \right)\sum_{i<j}|A_{ij}|^2.\]

The real and imaginary parts of \phi have the same distribution so

    \[1 = \Var(\phi) = \Var(\Re \phi) + \Var(\Im \phi) = 2 \Var(\Re \phi)\]

so \Var(\Re \phi) = 1/2. Thus

    \[\Var\left( \omega^* A \omega \right) = 2 \sum_{i<j}|A_{ij}|^2 = \sum_{i\ne j} |A_{ij}|^2.\]

Uniform on the Complex Sphere: Derivation 1 by Reduction to Real Case

There are at least three simple ways of deriving this result: the camel trick, reduction to the real case, and Haar integration. Each of these techniques illustrates a trick that is useful in its own right beyond the context of trace estimation. Since we have already seen an example of the camel trick for the real sphere, I will present the other two derivations.

Let us begin with the reduction to the real case. Let \mathfrak{R}(\cdot) and \mathfrak{I}(\cdot) denote the real and imaginary parts of a vector or matrix, taken entrywise. The key insight is that if \omega is a uniform random vector on the complex sphere of radius \sqrt{n}, then

    \[\mathscr{R}(\omega) := \twobyone{\mathfrak{R}(\omega)}{\mathfrak{I}(\omega)}\in\real^{2n} \quad \text{is a uniform random vector on the real sphere of radius $\sqrt{n}$}.\]

We’ve converted the complex vector \omega into a real vector \mathscr{R}(\omega).

Now, we need to convert the complex matrix A into a real matrix \mathscr{R}(A). To do this, recall that one way of representing complex numbers is by 2\times 2 matrices:

    \[a + bi \iff \twobytwo{a}{-b}{b}{a}.\]

Using this correspondence, addition and multiplication of complex numbers can be carried out by addition and multiplication of the corresponding matrices.

To convert complex matrices to real matrices, we use a matrix-version of the same representation:

    \[\mathscr{R}(A) = \twobytwo{\mathfrak{R}(A)}{-\mathfrak{I}(A)}{\mathfrak{I}(A)}{\mathfrak{R}(A)}.\]

One can check that addition and multiplication of complex matrices can be carried out by addition and multiplication of the corresponding “realified” matrices, i.e.,

    \[\mathscr{R}(A + B) = \mathscr{R}(A) + \mathscr{R}(B), \quad \mathscr{R}(A\cdot B) = \mathscr{R}(A) \cdot \mathscr{R}(B)\]

holds for all complex matrices A and B.

We’ve now converted complex matrix A and vector \omega into real matrix \mathscr{R}(A) and vector \mathscr{R}(\omega). Let’s compare \omega^*A\omega to \mathscr{R}(\omega)^\top\mathscr{R}(A)\mathscr{R}(\omega). A short calculation reveals

    \[\omega^*A\omega = \mathscr{R}(\omega)^\top \mathscr{R}(A)\mathscr{R}(\omega) .\]
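
This identity is easy to check numerically. Here is a small, purely illustrative MATLAB verification on a random Hermitian matrix and a random vector on the complex sphere:

n = 4;
A = randn(n) + 1i*randn(n); A = (A + A')/2; % random Hermitian matrix
w = randn(n,1) + 1i*randn(n,1); w = sqrt(n)*w/norm(w); % uniform on the complex sphere
RA = [real(A), -imag(A); imag(A), real(A)]; % realified matrix
Rw = [real(w); imag(w)]; % realified vector
disp(abs(w'*A*w - Rw'*RA*Rw)) % should be around 1e-15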

Since \mathscr{R}(\omega) is a uniform random vector on the sphere of radius \sqrt{n}, \sqrt{2}\cdot \mathscr{R}(\omega) is a uniform random vector on the sphere of radius \sqrt{2n}. Thus, by the variance formula for the real sphere, we get

    \[\Var(\omega^*A\omega) = \Var[(\sqrt{2}\mathscr{R}(\omega))^\top (\mathscr{R}(A)/2)(\sqrt{2}\mathscr{R}(\omega) )] = \frac{4n}{2n+2} \left[ \|\mathscr{R}(A)/2\|_{\rm F}^2 - \frac{1}{8n}(\tr\mathscr{R}(A))^2 \right].\]

A short calculation verifies that \tr \mathscr{R}(A) = 2\tr A and \norm{\mathscr{R}(A)}_{\rm F}^2 = 2\|A\|_{\rm F}^2. Plugging this in, we obtain

    \[\Var(\omega^*A\omega)= \frac{n}{n+1} \left[ \|A\|_{\rm F}^2 - \frac{1}{n}(\tr A)^2  \right].\]

Uniform on the Complex Sphere: Derivation 2 by Haar Integration

The proof by reduction to the real case requires some cumbersome calculations and requires that we have already computed the variance in the real case by some other means. The method of Haar integration is more slick, but it requires some pretty high-power machinery. Haar integration may be a little bit overkill for this problem, but this technique is worth learning as it can handle some truly nasty expected value computations that appear, for example, in quantum information.

We seek to compute

    \[\mathbb{E} [(\omega^*A \omega)^2].\]

The first trick will be to write this expression using a single matrix trace using the tensor (Kronecker) product \otimes. For those unfamiliar with the tensor product, the main properties we will be using are

(6)   \[(A\otimes B) (C\otimes D) = (AB) \otimes (CD), \quad \tr(A\otimes B) = \tr A \cdot \tr B. \]

We saw in the proof of unbiasedness that

    \[\omega^* A \omega = \tr (\omega^*A\omega) = \tr (A \omega\omega^*).\]

Therefore, by (6),

    \[(\omega^*A\omega)^2 = (\tr [A \omega\omega^*])^2 = \tr [A\omega\omega^* \otimes A\omega\omega^*] = \tr [(A\otimes A) (\omega\omega^* \otimes \omega\omega^*)].\]

Thus, to evaluate \mathbb{E}[(\omega^*A\omega)^2], it will be sufficient to evaluate \mathbb{E}[\omega\omega^* \otimes \omega\omega^*]. Fortunately, there is a useful formula for this expectation provided by a field of mathematics known as representation theory (see Lemma 1 in this paper):

    \[\mathbb{E}[ \omega\omega^* \otimes \omega\omega^*] = \frac{2n}{n+1} \operatorname{Proj}_{\operatorname{Sym}^2(\complex^n)}.\]

Here, \operatorname{Proj}_{\operatorname{Sym}^2(\complex^n)} is the orthogonal projection onto the space of symmetric two-tensors \operatorname{Sym}^2(\complex^n) = \operatorname{span} \{ v \otimes v : v \in \complex^n \}. Therefore, we have that

    \[\mathbb{E}[(\omega^*A\omega)^2] = \tr [(A\otimes A) \mathbb{E}(\omega\omega^* \otimes \omega\omega^*)] = \frac{2n}{n+1} \tr [(A\otimes A) \operatorname{Proj}_{\operatorname{Sym}^2(\complex^n)}].\]

To evaluate the trace on the right-hand side of this equation, there is another formula (see Lemma 6 in this paper):

    \[\tr \left[(A\otimes B) \operatorname{Proj}_{\operatorname{Sym}^2(\complex^n)}\right] = \frac{1}{2} \left( \tr(AB) + \tr A \cdot \tr B \right).\]

Therefore, we conclude

    \begin{align*}\Var(\omega^* A \omega) &= \mathbb{E}[(\omega^*A\omega)^2] - (\mathbb{E}[\omega^*A\omega])^2 \\&= \frac{2n}{n+1}\tr [(A\otimes A) \operatorname{Proj}_{\operatorname{Sym}^2(\complex^n)}] - (\tr A)^2 \\&= \frac{n}{n+1}\left[ \tr A^2 + (\tr A)^2 \right] - (\tr A)^2 \\&= \frac{n}{n+1}\left[ \left\|A\right\|_{\rm F}^2 - \frac{1}{n} (\tr A)^2 \right].\end{align*}
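
As a sanity check, the trace formula used above can be verified numerically. Here is a hedged MATLAB check, using the fact that the projection onto \operatorname{Sym}^2(\complex^n) can be written as (I + \mathrm{SWAP})/2, where \mathrm{SWAP} exchanges the two tensor factors:

n = 4;
A = randn(n) + 1i*randn(n); A = (A + A')/2; % random Hermitian matrices
B = randn(n) + 1i*randn(n); B = (B + B')/2;
SWAP = zeros(n^2); % SWAP * kron(x,y) = kron(y,x)
for i = 1:n
    for j = 1:n
        SWAP((i-1)*n+j, (j-1)*n+i) = 1;
    end
end
P = (eye(n^2) + SWAP)/2; % projection onto the symmetric two-tensors
disp(abs(trace(kron(A,B)*P) - (trace(A*B) + trace(A)*trace(B))/2)) % ~1e-14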

Proof of Optimality Properties

In this section, we provide proofs of the two optimality properties.

Optimality: Independent Vectors with Independent Coordinates

Assume A is real and symmetric and suppose that \omega is isotropic (2) with independent coordinates. The isotropy condition

    \[\mathbb{E}[\omega\omega^\top] = I\]

implies that \mathbb{E}[\omega_i\omega_j] = \delta_{ij}, where \delta is the Kronecker symbol. Using this fact, we compute the second moment:

    \begin{align*}\mathbb{E}[ (\omega^*A \omega)^2] &= \mathbb{E}\left[ \left( \sum_{i=1}^n A_{ii} \omega_i^2 +2 \sum_{i<j} A_{ij}\omega_i\omega_j \right)^2\right] \\&= \sum_{i=1}^n A_{ii}^2 \mathbb{E}[\omega_i^4] + \sum_{i<j} (2A_{ii}A_{jj}+4A_{ij}^2) \mathbb{E}[\omega_i^2]\mathbb{E}[\omega_j^2] \\&= \sum_{i=1}^n A_{ii}^2 \mathbb{E}[\omega_i^4] + \sum_{i<j} (2A_{ii}A_{jj}+4A_{ij}^2) .\end{align*}

Thus

    \[\Var(\omega^*A\omega) = \mathbb{E}[ (\omega^*A \omega)^2] - (\mathbb{E}[\omega^* A \omega])^2 = \sum_{i=1}^n A_{ii}^2 (\mathbb{E}[|\omega_i|^4]-1) + 4\sum_{i<j} A_{ij}^2.\]

The variance is minimized by choosing \omega with \mathbb{E}[\omega_i^4] as small as possible. Since \mathbb{E}[\omega_i^2] = 1, Jensen's inequality gives \mathbb{E}[\omega_i^4] \ge (\mathbb{E}[\omega_i^2])^2 = 1, so the smallest possible value is \mathbb{E}[\omega_i^4] = 1, which is obtained by populating \omega with random signs.
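Here is a small Monte Carlo sketch in Python illustrating this variance formula and the advantage of random signs over, say, Gaussian coordinates; the test matrix and the number of trials are arbitrary choices for illustration.

import numpy as np

rng = np.random.default_rng(0)
n = 8
B = rng.standard_normal((n, n))
A = (B + B.T) / 2  # random real symmetric test matrix

def mc_variance(draw, trials=200_000):
    vals = np.empty(trials)
    for t in range(trials):
        w = draw()
        vals[t] = w @ A @ w
    return vals.var()

off_diag = 4 * sum(A[i, j] ** 2 for i in range(n) for j in range(i + 1, n))
diag = np.sum(np.diag(A) ** 2)

# Random signs: E[w_i^4] = 1, so the predicted variance is just the off-diagonal term
print(mc_variance(lambda: rng.choice([-1.0, 1.0], n)), off_diag)
# Gaussian coordinates: E[w_i^4] = 3, so an extra 2 * sum_i A_ii^2 appears
print(mc_variance(lambda: rng.standard_normal(n)), 2 * diag + off_diag)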

Optimality: Independent Vectors

This result appears to have first been proven by Richard Kueng in unpublished work. We use an argument suggested to me by Robert J. Webber.

Assume \mathscr{A} is a class of \field-Hermitian matrices closed under \field-unitary similarity transformations and that \omega is an isotropic random vector (2). Decompose the test vector as

    \[\omega = a \cdot s \quad \text{for} \quad a \in [0,+\infty), \: s \in\{x\in \field^n : x^*x = n \}.\]

First, we shall show that the variance is reduced by replacing s with a vector t drawn uniformly from the sphere

(7)   \[\sup_{A\in\mathscr{A}} \Var(\tilde{\omega}^*A\tilde{\omega}) \le \sup_{A\in\mathscr{A}} \Var(\omega^*A\omega), \]

where

(8)   \[\tilde{\omega} = a\cdot t \quad \text{and}\quad t\sim \text{Uniform} \{ x \in \field^n :x^*x = n \} \quad \text{is independent of $a$}. \]

Note that such a t can be generated as t = Qs for a uniformly random \field-unitary matrix Q drawn independently of a and s. Therefore, we have

    \begin{align*}\sup_{A\in\mathscr{A}} \Var(\tilde{\omega}^*A\tilde{\omega})&= \sup_{A\in\mathscr{A}} \left[\mathbb{E}[(\tilde{\omega}^*A\tilde{\omega})^2] - (\tr A)^2\right]\\&= \sup_{A\in\mathscr{A}} \left[\mathbb{E}[(a^2 \cdot s^*(Q^*AQ)s)^2] - (\tr (Q^*AQ))^2\right].\end{align*}

Now apply Jensen’s inequality only over the randomness in Q to obtain

    \begin{align*}\sup_{A\in\mathscr{A}} \Var(\tilde{\omega}^*A\tilde{\omega})&= \sup_{A\in\mathscr{A}} \left[\mathbb{E}[(a^2 \cdot s^*(Q^*AQ)s)^2] - (\tr (Q^*AQ))^2\right] \\&\le \mathbb{E}_Q \sup_{A\in\mathscr{A}} \left[\mathbb{E}_{a,s}[(a^2 \cdot s^*(Q^*AQ)s)^2] - (\tr (Q^*AQ))^2\right].\end{align*}

Finally, note that since \mathscr{A} is closed under \field-unitary similarity transformations, the supremum over Q^*AQ for A \in \mathscr{A} is the same as the supremum over A \in \mathscr{A}, so we obtain

    \begin{align*}\sup_{A\in\mathscr{A}} \Var(\tilde{\omega}^*A\tilde{\omega})&\le \mathbb{E}_Q \sup_{A\in\mathscr{A}} \left[\mathbb{E}_{a,s}[(a^2 \cdot s^*(Q^*AQ)s)^2] - (\tr (Q^*AQ))^2\right] \\&= \mathbb{E}_Q \sup_{A\in\mathscr{A}} \left[\mathbb{E}_{a,s}[(a^2 \cdot s^*As)^2] - (\tr A)^2\right] \\&= \sup_{A\in\mathscr{A}} \Var(\omega^*A\omega).\end{align*}

We have successfully proven (7). This argument is a specialized version of a far more general result which appears as Proposition 4.1 in this paper.

Next, we shall prove

(9)   \[\sup_{A\in\mathscr{A}} \Var(t^*At) \le \sup_{A\in\mathscr{A}} \Var(\tilde{\omega}^*A\tilde{\omega}), \]

where t is still defined as in (8). Indeed, using the law of total variance, we obtain

    \begin{align*}\Var(\tilde{\omega}^*A\tilde{\omega})&= \Var(a^2\cdot t^*At) \\&= \mathbb{E}[\Var(a^2\cdot t^* A t \mid a)] + \Var(\mathbb{E}[a^2\cdot t^* A t \mid a]) \\&= \mathbb{E}[a^4]\Var(t^* A t )+ (\tr A)^2\Var(a^2) \\&\ge \mathbb{E}[a^4]\Var(t^* A t ).\end{align*}

Here, we have used that t is uniform on the sphere and thus \mathbb{E}[t^*At] = \tr A. By definition, a is the length of \omega divided by \sqrt{n}. Therefore,

    \[\mathbb{E}[a^2] = \frac{1}{n}\mathbb{E}[\omega^*\omega] = \frac{1}{n} \mathbb{E}[\tr (\omega\omega^*)] = \frac{1}{n} \tr (\mathbb{E}[\omega\omega^*]) = \frac{\tr I}{n} = 1.\]

Therefore, by Jensen’s inequality,

    \[\mathbb{E}[a^4] = \mathbb{E}[(a^2)^2] \ge (\mathbb{E}[a^2])^2 = 1.\]

Thus

    \[\Var(\tilde{\omega}^*A\tilde{\omega}) \ge \mathbb{E}[a^4]\Var(t^* A t ) \ge \Var(t^*At) \quad \text{for every }A,\]

which proves (9).

Chebyshev Polynomials

This post is co-written by my brother, Aidan Epperly, for the second Summer of Math Exposition (SoME2).


Let’s start with a classical problem: connect-the-dots. As we know from geometry, any two points in the plane are connected by one and only one straight line:

But what if we have more than two points? How should we connect them? One natural way is with a parabola. Any three points (with distinct x coordinates) are connected by one and only one parabola ax^2+bx+c:

And we can keep extending this. Any n+1 points1The degree of the polynomial is one less than the number of points because a degree-n polynomial is described by n+1 coefficients. For instance, a degree-two parabola ax^2+bx+c has three coefficients a, b, and c. (with distinct x coordinates) are connected by a unique degree-n polynomial a_n x^n + a_{n-1}x^{n-1} + \cdots + a_1 x+a_0:

This game of connect-the-dots with polynomials is known more formally as polynomial interpolation. We can use polynomial interpolation to approximate functions. For instance, we can approximate the function \sin(x) on the interval [-\pi,\pi] to visually near-perfect accuracy by connecting the dots between seven points (-\pi,\sin(-\pi)), (-\tfrac{2}{3}\pi,\sin(-\tfrac{2}{3}\pi)),\ldots ,(\pi,\sin(\pi)):

But something very peculiar happens when we try and apply this trick to the specially chosen function R(x) = 1/(1+25x^2) on the interval [-1,1]:

Unlike \sin(x), the polynomial interpolant for R(x) is terrible! What’s going on? Why doesn’t polynomial interpolation work here? Can we fix it? The answer to the last question is yes and the solution is Chebyshev polynomials.

Reverse-Engineering Chebyshev

The failure of polynomial interpolation for R(x) is known as Runge’s phenomenon after Carl Runge, who discovered this curious behavior in 1901. The function R(x) is called the Runge function. Our goal is to find a fix for polynomial interpolation which crushes the Runge phenomenon, allowing us to reliably approximate every sensible2A famous theorem of Faber states that there does not exist any set of points through which the polynomial interpolants converge for every continuous function. This is not as much of a problem as it may seem. As the famous Weierstrass function shows, arbitrary continuous functions can be very weird. If we restrict ourselves to nicer functions, such as Lipschitz continuous functions, there does exist a set of points through which the polynomial interpolant always converges to the underlying function. Thus, in this sense, it is possible to crush the Runge phenomenon. function with polynomial interpolation.

Carl Runge

Let’s put on our thinking caps and see if we can discover the fix for ourselves. In order to discover a fix, we must first identify the problem. Observe that the polynomial interpolant is fine near the center of the interval; it only fails near the boundary.

This leads us to a guess for what the problem might be; maybe we need more interpolation points near the boundaries of the interval. Indeed, tipping our hand a little bit, this turns out to be the case. For instance, connecting the dots for the following set of “mystery points” clustered at the endpoints works just fine:

Let’s experiment a little and see if we can discover a nice set of interpolation points, which we will call x_0,x_1,\ldots,x_n, like this for ourselves. We’ll assume the interpolation points are given by a function x_j = g(j/n) so we can form the polynomial interpolant for any desired polynomial degree n.3Technically, we should insist on the function g(\cdot) being \textit{injective} so that the points g(0),g(1/n),\ldots,g(1) are guaranteed to be distinct. For instance, if we pick g(t) = 2t^2-1, the points look like this:

Equally spaced points j/n (shown on vertical axis) give rise to
non-equally spaced points g(j/n) (shown on horizontal axis)

How should we pick the function g(\cdot)? First observe that, even for the Runge function, equally spaced interpolation points are fine near the center of the interval. We thus have at least two conditions for our desired n+1 interpolation points:

  1. The interior points should maintain their spacing of roughly 2/n.
  2. The points must cluster near both boundaries.

As a first attempt let’s divide the interval into thirds and halve the spacing of points except in the middle third. This leads to the function

    \begin{equation*}g(x) = \begin{cases} -1+x, & 0\le x < \tfrac{1}{3}, \\-\tfrac{2}{3} + 4\left( x-\tfrac{1}{3}\right), & \tfrac{1}{3} \le x < \tfrac{2}{3}, \\\tfrac{2}{3} + \left( x-\tfrac{2}{3}\right), & \tfrac{2}{3} \le x \le 1.\end{cases}\end{equation*}

These interpolation points initially seem promising, even successfully approximating the Runge function itself.

Unfortunately, this set of points fails when we consider other functions. For instance, if we use the Runge-like function S(x) = 1/(1+900x^2), we see that these interpolation points now lead to a failure to approximate the function at the middle of the interval, even if we use a lot of interpolation points!

Maybe the reason this set of interpolation points didn’t work is that the points are too close at the endpoints. Or maybe we should have divided the interval as quarter–half–quarter rather than into thirds. There are lots of variations of this strategy for choosing points to explore, and all of them eventually lead to failure on some Runge-flavored example. We need a fundamentally different strategy than making the points a times closer within distance b of the endpoints.

Let’s try a different approach. The closeness of the points at the endpoints is determined by the slope of the function g at 0 and 1. The smaller that |g'(0)| and |g'(1)| are, the more clustered the points will be. For instance,

    \begin{equation*}g'(0) = g'(1) = 2 \quad \text{for equally spaced points}.\end{equation*}

When we halved the distance between points, we instead had

    \begin{equation*}g'(0) = g'(1) = 1 \quad \text{when points at ends were twice as close together}.\end{equation*}

So if we want the points to be much more clustered together, it is natural to require

    \begin{equation*}g'(0) = g'(1) = 0. \quad \text{(new requirement)}\end{equation*}

It also makes sense for the function g(\cdot) to cluster points equally near both endpoints, since we see no reason to preference one end over the other. Collecting together all the properties we want the function g(\cdot) to have, we get the following list:

  1. g spans the whole range [-1,1],
  2. g'(0) = g'(1) = 0, and
  3. g is symmetric about 1/2, g(1/2+x) = -g(1/2-x).

Mentally scrolling through our Rolodex of friendly functions, a natural one that might come to mind meeting these three criteria is the cosine function, specifically g(t) = \cos(\pi t). This function yields points which are more clustered at the endpoints:

The points

    \begin{equation*}x_j = \cos\left(\frac{j\pi}{n}\right)\end{equation*}

we guessed our way into are known as the Chebyshev points.4Some authors refer to these as the “Chebyshev points of the second kind” or use other names. We follow the convention in Approximation Theory and Approximation Practice (Chapter 1) and simply refer to these points as the Chebyshev points. The Chebyshev points prove themselves perfectly fine for the Runge function:

As we saw earlier, success on the Runge function alone is not enough to declare victory for the polynomial interpolation problem. However, in this case, there are no other bad examples left to find. For any nice function with no jumps, polynomial interpolation through the Chebyshev points works excellently.5Specifically, for a function f(\cdot) which is not too rough (i.e., Lipschitz continuous), the degree-n polynomial interpolant of f(\cdot) through the Chebyshev points converges uniformly to f(\cdot) as n\to\infty.
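If you would like to reproduce this experiment, here is a minimal Python sketch (using SciPy's barycentric interpolation; the degree and the fine evaluation grid are arbitrary choices) comparing equally spaced and Chebyshev interpolation points for the Runge function.

import numpy as np
from scipy.interpolate import BarycentricInterpolator

runge = lambda x: 1.0 / (1.0 + 25.0 * x**2)
n = 50                         # polynomial degree
xx = np.linspace(-1, 1, 2001)  # fine grid for measuring the error

# Equally spaced points vs. Chebyshev points x_j = cos(j*pi/n)
x_equi = np.linspace(-1, 1, n + 1)
x_cheb = np.cos(np.pi * np.arange(n + 1) / n)

for name, pts in [("equispaced", x_equi), ("Chebyshev", x_cheb)]:
    p = BarycentricInterpolator(pts, runge(pts))
    print(name, np.max(np.abs(p(xx) - runge(xx))))

# The equispaced error is enormous near the endpoints (Runge's phenomenon),
# while the Chebyshev error is small and keeps shrinking as n grows.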

Why the Chebyshev Points?

We’ve guessed our way into a solution to the polynomial interpolation problem, but we still really don’t know what’s going on here. Why are the Chebyshev points much better at polynomial interpolation than equally spaced ones?

Now that we know that the Chebyshev points are a right answer to the interpolation problem,6Indeed, there are other sets of interpolation points through which polynomial interpolation also works well, such as the Legendre points. let’s try and reverse engineer a principled reason for why we would expect them to be effective for this problem. To do this, we ask:

What is special about the cosine function?

From high school trigonometry, we know that \cos \theta gives the x coordinate of a point \theta radians along the unit circle. This means that the Chebyshev points are the x coordinates of equally spaced points on the unit circle (specifically the top half of the unit circle 0\le \theta\le \pi).

Chebyshev points are the x coordinates of equally spaced points on the unit circle.

This raises the question:

What does the interpolating polynomial p(x) look like as a function of the angle \theta?

To convert between x and \theta we simply plug in x = \cos \theta to p(x):

    \begin{equation*}p^\circ(\theta) = p(\cos \theta) = a_n \cos^n \theta + a_{n-1} \cos^{n-1} \theta + \cdots + a_0.\end{equation*}

This new function depending on \theta, which we can call p^\circ(\theta), is a polynomial in the variable \cos \theta. Powers of cosines are not something we encounter every day, so it makes sense to try and simplify things using some trig identities. Here are the first couple powers of cosines:

    \begin{gather*}\cos^2 \theta = \frac{1}{2} + \frac{1}{2} \cos (2\theta), \\\cos^3 \theta = \frac{3}{4}\cos \theta + \frac{1}{4} \cos (3\theta), \\\cos^4 \theta = \frac{3}{8}+ \frac{1}{2} \cos (2\theta) + \frac{1}{8} \cos (4\theta),\\\vdots\end{gather*}

A pattern has appeared! The powers \cos^k \theta always take the form7As a fun exercise, you might want to try and prove this using mathematical induction.

    \begin{equation*}\cos^k \theta = \textnormal{(some number)} \cdot \cos(k\theta) + \textnormal{(some number)} \cdot \cos((k-2)\theta) + \cdots .\end{equation*}

The significance of this finding is that, by plugging in each of these formulas for \cos^k \theta, we see that our polynomial p(x) in the variable x has morphed into a Fourier cosine series in the variable \theta:

    \begin{equation*}p^\circ(\theta) = b_n \cos(n\theta) + b_{n-1} \cos((n-1)\theta) + \cdots + b_1 \cos \theta + b_0.\end{equation*}

For anyone unfamiliar with Fourier series, we highly encourage the 3Blue1Brown video on the subject, which explains why Fourier series are both mathematically beautiful and practically useful. The basic idea is that almost any function can be expressed as a combination of waves (that is, sines and cosines) with different frequencies.8More precisely, we might call these angular frequencies. In our case, this formula tells us that p^\circ(\theta) is equal to b_0 units of frequency 0, plus b_1 units of frequency 1, all the way up to b_n units of frequency n. Different types of Fourier series are appropriate in different contexts. Since our Fourier series only possesses cosines, we call it a Fourier cosine series.

We’ve discovered something incredibly cool:

Polynomial interpolation through the Chebyshev points is equivalent to finding a Fourier cosine series for equally spaced angles \theta.

We’ve arrived at an answer to why the Chebyshev points work well for polynomial interpolation.

Polynomial interpolation through the Chebyshev points is effective because Fourier cosine series through equally spaced angles \theta is effective.

Of course, this explanation just raises the further question: Why do Fourier cosine series give effective interpolants through equally spaced angles \theta? This question has a natural answer as well, involving the convergence theory and aliasing formula (see Section 3 of this paper) for Fourier series. We’ll leave the details to the interested reader for investigation. The success of Fourier cosine series in interpolating equally spaced data is a fundamental observation that underlies the field of digital signal processing. Interpolation through the Chebyshev points effectively hijacks this useful fact and applies it to the seemingly unrelated problem of polynomial interpolation.

Another question this explanation raises is the precise meaning of “effective”. Just how good are polynomial interpolants through the Chebyshev points at approximating functions? As is discussed at length in another post on this blog, the degree to which a function can be effectively approximated is tied to how smooth or rough it is. Chebyshev interpolants approximate nice analytic functions like \sin(x) or 1/(1+25x^2) with exponentially small errors in the number of interpolation points used. By contrast, functions with kinks like |x| are approximated with errors which decay much more slowly. See theorems 2 and 3 on this webpage for more details.

Chebyshev Polynomials

We’ve now discovered a set of points, the Chebyshev points, through which polynomial interpolation works well. But how should we actually compute the interpolating polynomial

    \begin{equation*}p(x) = a_n x^n + a_{n-1}x^{n-1} + \cdots + a_0?\end{equation*}

Again, it will be helpful to draw on the connection to Fourier series. Computations with Fourier series are highly accurate and can be made lightning fast using the fast Fourier transform algorithm. By comparison, directly computing with a polynomial p(x) through its coefficients a_n,a_{n-1},\ldots,a_0 is a computational nightmare.

In the variable \theta, the interpolant takes the form

    \begin{equation*}p^\circ(\theta) = b_n \cos(n\theta) + b_{n-1} \cos((n-1)\theta) + \cdots + b_1 \cos \theta + b_0.\end{equation*}

To convert back to x = \cos \theta, we use the inverse function9One always has to be careful when going from x = \cos \theta to \theta = \arccos x since multiple \theta values get mapped to a single x value by the cosine function. Fortunately, we’re working with variables 0\le \theta\le \pi and -1\le x\le 1, between which the cosine function is one-to-one with the inverse function being given by the arccosine. \theta = \arccos x to obtain:

    \begin{equation*}p(x) = b_n \cos(n\arccos(x)) + \cdots + b_1 \cos(\arccos x) + b_0\end{equation*}

This is a striking formula. Given all of the trigonometric functions, it’s not even obvious that p(x) is a polynomial (it is)!

Despite its seeming peculiarity, this is a very powerful way of representing the polynomial p(x). Rather than expressing p(x) using monomials 1,x,x^2,\ldots, we’ve instead written p(x) as a combination of more exotic polynomials

    \begin{equation*}T_k(x) = \cos(k \arccos x) \quad \text{for $k=0,1,2,\ldots n$}.\end{equation*}

The polynomials T_0(x),T_1(x),T_2(x),\ldots are known as the Chebyshev polynomials,10More precisely, the polynomials T_k(x) are known as the Chebyshev polynomials of the first kind. named after Pafnuty Chebyshev who studied the polynomials intensely.11The letter “T” is used for Chebyshev polynomials since the Russian name “Chebyshev” is often alternately transliterated to English as “Tchebychev”.

Pafnuty Chebyshev

Writing out the first few Chebyshev polynomials shows they are indeed polynomials:

    \begin{gather*}T_0(x) = 1, \\T_1(x) = x, \\T_2(x) = 2x^2 - 1, \\ T_3(x) = 4x^3 - 3x, \\\vdots \end{gather*}

The first four Chebyshev polynomials

To confirm that this pattern does continue, we can use trig identities to derive12Specifically, the recurrence is a consequence of applying the sum-to-product identity to \cos((k+1)\theta) + \cos((k-1)\theta) for \theta = \arccos x. the following recurrence relation for the Chebyshev polynomials:

    \begin{equation*}T_{k+1}(x) = 2x T_k(x) - T_{k-1}(x).\end{equation*}

Since T_0(x) = 1 and T_1(x) = x are both polynomials, every Chebyshev polynomial is as well.
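A few lines of Python confirm that the recurrence really does reproduce \cos(k \arccos x); this is only a sanity-check sketch, with an arbitrary grid of test points.

import numpy as np

def chebyshev_T(k, x):
    # Evaluate T_k(x) using the three-term recurrence T_{k+1} = 2x T_k - T_{k-1}
    T_prev, T_curr = np.ones_like(x), x
    if k == 0:
        return T_prev
    for _ in range(k - 1):
        T_prev, T_curr = T_curr, 2 * x * T_curr - T_prev
    return T_curr

x = np.linspace(-1, 1, 7)
for k in range(6):
    assert np.allclose(chebyshev_T(k, x), np.cos(k * np.arccos(x)))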

We’ve arrived at the following amazing conclusion:

Under the change of variables x = \cos \theta, the Fourier cosine series

    \[p^\circ(\theta) = b_n\cos(n\theta) + \cdots + b_1\cos\theta + b_0\]

becomes the combination of Chebyshev polynomials

    \[p(x) = b_nT_n(x) + \cdots + b_1 T_1(x) + b_0.\]

This simple and powerful observation allows us to apply the incredible speed and accuracy of Fourier series to polynomial interpolation.

Beyond being a neat idea with some nice mathematics, this connection between Fourier series and Chebyshev polynomials is a powerful tool for solving computational problems. Once we’ve accurately approximated a function by a polynomial interpolant, many quantities of interest (derivatives, integrals, zeros) become easy to compute—after all, we just have to compute them for a polynomial! We can also use Chebyshev polynomials to solve differential equations with much faster rates of convergence than other methods. Because of the connection to Fourier series, all of these computations can be done to high accuracy and blazingly fast via the fast Fourier transform, as is done in the software package Chebfun.
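To make the workflow concrete, here is a short Python sketch: sample a function at the Chebyshev points, recover the coefficients b_k of its interpolant with a (type-I) discrete cosine transform, and evaluate the resulting combination of Chebyshev polynomials. The scaling conventions below are one common choice; treat this as an illustrative sketch rather than a reference implementation.

import numpy as np
from scipy.fft import dct
from numpy.polynomial.chebyshev import chebval

def cheb_interp_coeffs(f, n):
    # Sample f at the Chebyshev points x_j = cos(j*pi/n), j = 0, ..., n
    x = np.cos(np.pi * np.arange(n + 1) / n)
    y = f(x)
    # A type-I DCT turns samples at the Chebyshev points into the
    # coefficients b_0, ..., b_n of the interpolant sum_k b_k T_k(x)
    b = dct(y, type=1) / n
    b[0] /= 2
    b[-1] /= 2
    return b

f = lambda x: np.exp(x) * np.sin(5 * x)
b = cheb_interp_coeffs(f, 40)

xx = np.linspace(-1, 1, 1000)
print(np.max(np.abs(chebval(xx, b) - f(xx))))  # tiny: the interpolant matches f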

The Chebyshev polynomials have an array of amazing properties and they appear all over mathematics and its applications in other fields. Indeed, we have only scratched the surface of the surface. Many questions remain:

  • What is the connection between the Chebyshev points and the Chebyshev polynomials?
  • The cosine functions 1,\cos \theta,\cos(2\theta),\ldots are orthogonal to each other; are the Chebyshev polynomials orthogonal as well?
  • Are the Chebyshev points the best points for polynomial interpolation? What does “best” even mean in this context?
  • Every “nice” even periodic function has an infinite Fourier cosine series which converges to it. Is there a Chebyshev analog? Is there a relation between the infinite Chebyshev series and the (finite) interpolating polynomial through the Chebyshev points?

All of these questions have beautiful and fairly simple answers. The book Approximation Theory and Approximation Practice is a wonderfully written book that answers all of these questions in its first six chapters, which are freely available on the author’s website. We recommend the book highly to the curious reader.

TL;DR: To get an accurate polynomial approximation, interpolate through the Chebyshev points.
To compute the resulting polynomial, change variables to \theta = \arccos x, compute the Fourier cosine series interpolant, and obtain your polynomial interpolant as a combination of Chebyshev polynomials.

Don’t Solve the Normal Equations

The (ordinary) linear least squares problem is as follows: given an m\times n matrix A and a vector b of length m, find the vector x such that Ax is as close to b as possible, when measured using the two-norm \| \cdot \|. That is, we seek to

(1)   \begin{equation*} \mbox{find } x \in\mathbb{R}^n \mbox{ such that }\| b - Ax \|^2 = \sum_{i=1}^m \left(b_i - \sum_{j=1}^n A_{ij} x_j \right)^2 \mbox{ is minimized}. \end{equation*}

From this equation, the name “least squares” is self-explanatory: we seek x which minimizes the sum of the squared discrepancies between the entries of b and Ax.

The least squares problem is ubiquitous in science, engineering, mathematics, and statistics. If we think of each row a_i of A as an input and its corresponding entry b_i of b as an output, then the solution x to the least squares problem gives the coefficients of a linear model for the input–output relationship. Given a new, previously unseen input a_{\rm new}, our model predicts the output b_{\rm new} is approximately b_{\rm new} \approx a_{\rm new}^\top x = \sum_{i=1}^n x_i (a_{\rm new})_i. The least squares solution has the property that the average squared difference between the outputs b_i and the predictions a_i^\top x is as small as it can possibly be among all choices of the coefficient vector x.

How do we solve the least squares problem? A classical solution approach, ubiquitous in textbooks, is to solve a system of linear equations known as the normal equations. The normal equations associated with the least squares problem (1) are given by

(2)   \begin{equation*} A^\top A \,x = A^\top b. \end{equation*}

This system of equations always has a solution. If A has full column-rank, then A^\top A is invertible and the unique least squares solution to (1) is given by (A^\top A)^{-1} A^\top b. We assume that A has full column-rank for the rest of this discussion. To solve the normal equations in software, we compute A^\top A and A^\top b and solve (2) using a linear solver like MATLAB’s “\”.1Even better, we could use a Cholesky decomposition since the matrix A^\top A is positive definite. (As is generally true in matrix computations, it is almost never a good idea to explicitly form the inverse of the matrix A^\top A, or indeed any matrix.) We can also solve the normal equations using an iterative method like (preconditioned) conjugate gradient.

The purpose of this article is to advocate against the use of the normal equations for solving least squares problems, at least in most cases. So what’s wrong with the normal equations? The problem is not that the normal equations aren’t mathematically correct. Instead, the problem is that the normal equations often lead to poor accuracy for the least squares solution when using computer arithmetic.

Most of the time when using computers, we store real numbers as floating point numbers.2One can represent rational numbers on a computer as fractions of integers and operations can be done exactly. However, this is prone to gross inefficiencies as the number of digits in the rational numbers can grow to be very large, making the storage and time to solve linear algebra problems with rationals dramatically more expensive. For these reasons, the vast majority of numerical computations use floating point numbers which store only a finite number of digits for any given real number. In this model, except for extremely rare circumstances, rounding errors during arithmetic operations are a fact of life. At a coarse level, the right model to have in your head is that real numbers on a computer are stored in scientific notation with only 16 decimal digits after the decimal point.3This is a simplification in multiple ways. First, computers store numbers in binary and thus, rather than storing 16 decimal digits after the decimal point, they store 52 binary digits. This amounts to roughly 16 decimal digits. Secondly, there are different formats for storing real numbers as floating point on a computer with different amounts of stored digits. The widely used IEEE double precision format has about 16 decimal digits of accuracy; the IEEE single precision format has roughly 8. When two numbers are added, subtracted, multiplied, and divided, the answer is computed and then rounded to 16 decimal digits; any extra digits of information are thrown away. Thus, the result of our arithmetic on a computer is the true answer to the arithmetic problem plus a small rounding error. These rounding errors are small individually, but solving an even modestly sized linear algebra problem requires thousands of such operations. Making sure many small errors don’t pile up into a big error is part of the subtle art of numerical computation.

To make a gross simplification, if one solves a system of linear equations Mx = c on a computer using a well-designed piece of software, one obtains an approximate solution \hat{x} which is, after accounting for the accumulation of rounding errors, close to x. But just how close the computed solution \hat{x} and the true solution x are depends on how “nice” the matrix M is. The “niceness” of a matrix M is quantified by a quantity known as the condition number of M, which we denote \kappa(M).4In fact, there are multiple definitions of the condition number depending on the norm which one uses to measure the sizes of vectors. Since we use the 2-norm, the appropriate 2-norm condition number \kappa(M) is the ratio \kappa(M) = \sigma_{\rm max}(M)/\sigma_{\rm min}(M) of the largest and smallest singular values of M. As a rough rule of thumb, the relative error between x and \hat{x} is roughly bounded as

(3)   \begin{equation*} \frac{\| \hat{x} - x \|}{\|x\|} \lessapprox \kappa(M)\times 10^{-16}. \end{equation*}

The “10^{-16}” corresponds to the fact that we have roughly 16 decimal digits of accuracy in double precision floating point arithmetic. Thus, if the condition number of M is roughly 10^{10}, then we should expect around 6 digits of accuracy in our computed solution.

The accuracy of the least squares problem is governed by its own condition number \kappa(A). We would hope that we can solve the least squares problem with an accuracy like the rule-of-thumb error bound (3) we had for linear systems of equations, namely a bound like \|\hat{x} - x\|/\|x\| \lessapprox \kappa(A)\times 10^{-16}. But this is not the kind of accuracy we get for the least squares problem when we solve it using the normal equations. Instead, we get accuracy like

(4)   \begin{equation*} \frac{\| \hat{x} - x \|}{\|x\|} \lessapprox \left(\kappa(A)\right)^2\times 10^{-16}. \end{equation*}

By solving the normal equations we effectively square the condition number! Perhaps this is not surprising as the normal equations also more-or-less square the matrix A by computing A^\top A. This squared condition number drastically affects the accuracy of the computed solution. If the condition number of A is 10^{8}, then the normal equations give us absolute nonsense for \hat{x}; we expect to get no digits of the answer x correct. Contrast this to above, where we were able to get 6 correct digits in the solution to Mx = c despite the condition number of M being 100 times larger than A!
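A small numerical experiment makes the danger vivid. The Python sketch below builds an artificially ill-conditioned A (condition number 10^7) with a consistent right-hand side, then solves the same least squares problem via the normal equations and via a Householder QR factorization; the sizes and conditioning are arbitrary choices for illustration.

import numpy as np

rng = np.random.default_rng(0)
m, n = 100, 20

# Build A with condition number 10^7 via a synthetic SVD
U, _ = np.linalg.qr(rng.standard_normal((m, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(np.logspace(0, -7, n)) @ V.T

x_true = rng.standard_normal(n)
b = A @ x_true  # zero-residual problem, so x_true is the exact solution

# Normal equations: squares the condition number
x_ne = np.linalg.solve(A.T @ A, A.T @ b)

# Householder QR: A = QR, then solve the triangular system R x = Q^T b
Q, R = np.linalg.qr(A)
x_qr = np.linalg.solve(R, Q.T @ b)

err = lambda x: np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print("normal equations:", err(x_ne))  # expect roughly kappa(A)^2 * 1e-16 ~ 1e-2
print("QR factorization:", err(x_qr))  # expect roughly kappa(A)   * 1e-16 ~ 1e-9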

All of this would be just a sad fact of life if the normal equations and their poor accuracy properties were the best we could do for the least squares problem. But we can do better! One can solve linear least squares problems by computing a so-called QR factorization of the matrix A.5In MATLAB, the least squares problem can be solved with QR factorization by calling “A\b”. Without going into details, the upshot is that solving the least squares problem by a well-designed6One way of computing the QR factorization is by Gram–Schmidt orthogonalization, but the accuracy properties of this are poor too. A gold-standard way of computing the QR factorization is by means of Householder reflectors, which has excellent accuracy properties. QR factorization requires a similar amount of time to solving the normal equations and has dramatically improved accuracy properties, achieving the desirable rule-of-thumb behavior7More precisely, the rule of thumb is like \|\hat{x} - x\|/\|x\| \lessapprox \kappa(A)\times 10^{-16} \times(1+ \kappa(A)\| b - Ax \|/(\|A\|\|b\|)). So even if we solve the least squares problem with QR factorization, we still get a squared condition number in our error bound, but this condition number squared is multiplied by the residual \|b-Ax\|, which is small if the least squares fit is good. The least squares solution is usually only interesting when the residual is small, thus justifying dropping it in the rule of thumb.

(5)   \begin{equation*} \frac{\| \hat{x} - x \|}{\|x\|} \lessapprox \kappa(A)\times 10^{-16}. \end{equation*}

I have not described how the QR factorization is accurately computed nor how to use the QR factorization to solve least squares problems nor even what the QR factorization is. All of these topics are explained excellently by the standard textbooks in this area, as well as by publicly available resources like Wikipedia. There’s much more that can be said about the many benefits of solving the least squares problem with the QR factorization,8E.g., it can work for sparse matrices while the normal equations often do not, it has superior accuracy to Gaussian elimination with partial pivoting even for solving linear systems, the “Q” matrix in the QR factorization can be represented implicitly as a product of easy-to-compute-with Householder reflectors which is much more efficient when m\gg n, etc. but in the interest of brevity let me just say this: TL;DR when presented in the wild with a least squares problem, the solution method one should default to is one based on a well-implemented QR factorization, not solving the normal equations.

Suppose for whatever reason we don’t have a high quality QR factorization algorithm at our disposal. Must we then resort to the normal equations? Even in this case, there is a way we can reduce the problem of solving a least squares problem to a linear system of equations without squaring the condition number! (For those interested, to do this, we recognize the normal equations as a Schur complement of a somewhat larger system of linear equations and then solve that. See Eq. (7) in this post for more discussion of this approach.)

The title of this post Don’t Solve the Normal Equations is deliberately overstated. There are times when solving the normal equations is appropriate. If A is well-conditioned with a small condition number, squaring the condition number might not be that bad. If the matrix A is too large to store in memory, one might want to solve the least squares problem using the normal equations and the conjugate gradient method.

However, the dramatically reduced accuracy of solving the normal equations should disqualify the approach from being the de-facto way of solving least squares problems. Unless you have good reason to think otherwise, when you see A^\top A, solve a different way.

Sherman–Morrison for Integral Equations

In this post, I want to discuss two of my favorite topics in applied mathematics: the Sherman–Morrison formula and integral equations. As a bridge between these ideas, we’ll use an integral equation analog of the Sherman–Morrison formula to derive the solution for the Laplace equation with Dirichlet boundary conditions in the 2D disk.

Laplace’s Equation

Suppose we have a thin, flat (two-dimensional) plate of homogeneous material and we measure the temperature at the border. What is the temperature inside the material? The solution to this problem is described by Laplace’s equation, one of the most ubiquitous partial differential equations in physics. Let u(x,y) denote the temperature of the material at point (x,y). Laplace’s equation states that, at any point (x,y) on the interior of the material,

(1)   \begin{equation*} \frac{\partial^2}{\partial x^2} u(x,y) + \frac{\partial^2}{\partial y^2} u(x,y) = 0. \end{equation*}

Laplace’s equation (1) and the specification of the temperature on the boundary form a well-posed mathematical problem in the sense that the temperature is uniquely determined at each point (x,y).1A well-posed problem is also required to depend continuously on the input data which, in this case, are the boundary temperatures. Indeed, the Laplace problem with boundary data is well-posed in this sense. We call this problem the Laplace Dirichlet problem since the boundary conditions

    \begin{equation*} u(x,y) \quad \text{is specified for $(x,y)$ on the boundary} \end{equation*}

are known as Dirichlet boundary conditions.

The Double Layer Potential

Another area of physics where the Laplace equation (1) appears is the study of electrostatics. In this case, u(x,y) represents the electric potential at the point (x,y). The Laplace Dirichlet problem is to find the electric potential in the interior of the region with knowledge of the potential on the boundary.

The electrostatic application motivates a different way of thinking about the Laplace equation. Consider the following question:

How would I place electric charges on the boundary to produce the electric potential u(x,y) at each point (x,y) on the boundary?

This is a deliciously clever question. If I were able to find an arrangement of charges answering the question, then I could calculate the potential u(x,y) at each point (x,y) in the interior by adding up the contribution to the electric potential of each element of charge on the boundary. Thus, I can reduce the problem of finding the electric potential at each point (x,y) in the 2D region to finding a charge distribution on the 1D boundary to that region.

We shall actually use a slight variant of this charge distribution idea which differs in two ways:

  • Rather than placing simple charges on the boundary of the region, we place charge dipoles.2The reason for why this modification works better is an interesting question, but answering it properly would take us too far afield from the goals of this article.
  • Since we are considering a two-dimensional problem, we use a different formula for the electric potential than given by Coulomb’s law for charges in 3D. Also, since we are interested in solving the Laplace Dirichlet problem in general, we can choose a convenient dimensionless system of units. We say that the potential at a point (x,y) induced by a unit “charge” at the origin is given by 1/2\pi \cdot \ln \sqrt{x^2+y^2}.

With these modifications, our new question is as follows:

How would I place a density of “charge” dipoles \phi(x,y) on the boundary to produce the electric potential u(x,y) at each point (x,y) on the boundary?

We call this function \phi(x,y) the double layer potential for the Laplace Dirichlet problem. One can show the double layer potential satisfies a certain integral equation. To write down this integral equation, let’s introduce some more notation. Let R be the region of interest and \partial R its boundary. Denote points (x,y) concisely as vectors \mathbf{r} = (x,y), with the length of \mathbf{r} denoted \|\mathbf{r}\| = \sqrt{x^2+y^2}. The double layer potential satisfies

(2)   \begin{equation*} \frac{1}{2} \phi(\mathbf{r}) + \frac{1}{2\pi} \int_{\partial R} \phi(\mathbf{r}') \frac{\partial}{\partial \nu_{\mathbf{r}'}} \ln \|\mathbf{r}-\mathbf{r}'\|  \, dS(\mathbf{r}') = u(\mathbf{r}), \end{equation*}

where the integral is taken over the surface \partial R of the region R; \partial / \partial \nu_{\mathbf{r}'} denotes the directional derivative taken in the direction normal (perpendicular) to the surface at the point \mathbf{r}'. Note we choose a unit system for \phi which hides physical constants particular to the electrostatic context, since we are interested in applying this methodology to the Laplace Dirichlet problem in general (possibly non-electrostatic) applications.

There’s one last ingredient: How do we compute the electric potential u(\mathbf{r}) at points \mathbf{r} in the interior of the region? This is answered by the following formula:

(3)   \begin{equation*} u(\mathbf{r}) = \frac{1}{2\pi} \int_{\partial R} \phi(\mathbf{r}') \frac{\partial}{\partial \nu_{\mathbf{r}'}} \ln \|\mathbf{r}-\mathbf{r}'\| \, dS(\mathbf{r}'). \end{equation*}

The integral equation (2) is certainly nothing to sneeze at. Rather than trying to comprehend it in its full glory, we shall focus on a special case for the rest of our discussion. Suppose the region R is a circular disk with radius r centered at 0. The partial derivative in the integrand in (2) is then readily computed for points \mathbf{r} and \mathbf{r}' both on the boundary \partial R of the circle:

    \begin{equation*} \frac{\partial}{\partial \nu_{\mathbf{r}'}} \ln \|\mathbf{r}-\mathbf{r}'\| = \frac{1}{2r}. \end{equation*}

Substituting in (2) then gives

(4)   \begin{equation*} \frac{1}{2} \phi(\mathbf{r}) + \frac{1}{4\pi r} \int_{\partial R} \phi(\mathbf{r}')  \, dS(\mathbf{r}') = u(\mathbf{r}). \end{equation*}

The Sherman–Morrison Formula

We are interested in solving the integral equation (4) to obtain an expression for the double-layer potential \phi, as this will give us a solution formula u(\mathbf{r}) for the Laplace Dirichlet problem. Ultimately, we accomplish this by using a clever trick. In an effort to make this trick seem more self-evident and less of a “rabbit out of a hat”, I want to draw an analogy to a seemingly unrelated problem: rank-one updates to linear systems of equations and the Sherman–Morrison formula.3In accordance with Stigler’s law of eponymy, the Sherman–Morrison formula was actually discovered by William J. Duncan five years before Sherman, Morrison, and Woodbury. For a more general perspective on the Sherman–Morrison formula and its generalization to the Sherman–Morrison–Woodbury formula, you may be interested in the following post of mine on Schur complements and block Gaussian elimination.

Suppose we want to solve the system of linear equations

(5)   \begin{equation*} (A + uv^\top) x = b, \end{equation*}

where A is an n\times n square matrix and u, v, and b are length-n vectors. We are ultimately interested in finding x from b. To gain insight into this problem, it will be helpful to first carefully consider the problem in reverse: computing b from x. We could, of course, perform this computation by forming the matrix A + uv^\top in memory and multiplying it with x, but there is a more economical way:

  1. Form \alpha = v^\top x.
  2. Compute b = \alpha u + Ax.

Standing back, observe that we now have a system of n+1 equations for unknowns x and \alpha. Specifically, our first equation can be rewritten as

    \begin{equation*} -\alpha + v^\top x = 0 \end{equation*}

which combined with the second equation

    \begin{equation*} \alpha u + Ax = b \end{equation*}

gives the n+1 by n+1 system4This “state space approach” of systematically writing out a matrix–vector multiply algorithm and then realizing this yields a larger system of linear equations was initially taught to me by my mentor Shiv Chandrasekaran; this approach has much more powerful uses, such as in the theory of rank-structured matrices.

(6)   \begin{equation*} \begin{bmatrix} - 1 & v^\top \\ u & A \end{bmatrix} \begin{bmatrix} \alpha \\ x \end{bmatrix} = \begin{bmatrix} 0 \\ b \end{bmatrix}. \end{equation*}

The original equation for x (5) can be derived from the “lifted” equation (6) by applying Gaussian elimination and eliminating the first row of the linear system (6). But now that we have the lifted equation (6), one can naturally wonder what would happen if we instead used Gaussian elimination to eliminate the last n rows of (6); this will give us an equation for \alpha = v^\top x which we can solve without first computing x. Doing this so-called block Gaussian elimination yields

    \begin{equation*} (-1-v^\top A^{-1}u)\alpha = -v^\top A^{-1} b. \end{equation*}

Solving this, we deduce that

    \begin{equation*} \alpha = \frac{v^\top A^{-1} b}{1 + v^\top A^{-1}u}. \end{equation*}

From the equation \alpha u + Ax = b, we have that

    \begin{equation*} x = (A+uv^\top)^{-1}b = A^{-1}b - \alpha A^{-1} u = A^{-1}b - \frac{A^{-1}uv^\top A^{-1} b}{1 + v^\top A^{-1}u}. \end{equation*}

Since this formula holds for every vector b, we deduce the famous Sherman–Morrison formula

    \begin{equation*} (A+uv^\top)^{-1}= A^{-1} - \frac{A^{-1}uv^\top A^{-1}}{1 + v^\top A^{-1}u}. \end{equation*}
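Before moving on, here is a quick numerical sanity check of this formula in Python; it also shows the practical point that only solves with A itself are needed. The matrix and vectors below are arbitrary choices for illustration.

import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n)) + n * np.eye(n)  # comfortably invertible
u, v, b = rng.standard_normal((3, n))

# Solve (A + u v^T) x = b directly...
x_direct = np.linalg.solve(A + np.outer(u, v), b)

# ...and via Sherman--Morrison, using only solves with A itself
Ainv_b = np.linalg.solve(A, b)
Ainv_u = np.linalg.solve(A, u)
x_sm = Ainv_b - Ainv_u * (v @ Ainv_b) / (1 + v @ Ainv_u)

print(np.linalg.norm(x_direct - x_sm))  # should be around machine precision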

This example shows how it can be conceptually useful to lift a linear system of equations by adding additional variables and equations and then “do Gaussian elimination in a different order”. The same insight shall be useful in solving integral equations like (4).

Solving for the Double Layer Potential

Let’s try repeating the playbook we executed for the rank-one-updated linear system (5) and apply it to the integral equation (4). We are ultimately interested in computing \phi(\cdot) from u(\cdot) but, as we did last section, let’s first consider the reverse. To compute u(\cdot) from \phi(\cdot), we first evaluate the integral

    \begin{equation*} \alpha =  \int_{\partial R} \phi(\mathbf{r}') \, dS(\mathbf{r}'). \end{equation*}

Substituting this into (4) gives the system of equations

(7)   \begin{equation*} \frac{1}{4\pi r} \alpha + \frac{1}{2} \phi(\mathbf{r})  = u(\mathbf{r}), \end{equation*}

(8)   \begin{equation*} -\alpha + \int_{\partial R} \phi(\mathbf{r}') \, dS(\mathbf{r}') = 0. \end{equation*}

In order to obtain (4) from (7) and (8), we add 1/4\pi r times equation (8) to equation (7). Following last section, we now instead eliminate \phi(\mathbf{r}) from equation (8) using equation (7). To do this, we need to integrate equation (7) in order to cancel the integral in equation (8):

    \begin{equation*} \frac{1}{4\pi r} \alpha \underbrace{\int_{\partial R} \, dS(\mathbf{r}')}_{=2\pi r} + \frac{1}{2} \int_{\partial R} \phi(\mathbf{r}') \, dS(\mathbf{r}')  = \frac{1}{2} \alpha + \frac{1}{2} \int_{\partial R} \phi(\mathbf{r}') \, dS(\mathbf{r}') = \int_{\partial R} u(\mathbf{r}') \, dS(\mathbf{r}'). \end{equation*}

Subtracting equation (8) from 2 times this integrated equation yields

    \begin{equation*} 2\alpha = 2\int_{\partial R} u(\mathbf{r}') \, dS(\mathbf{r}') \implies \alpha = \int_{\partial R} u(\mathbf{r}') \, dS(\mathbf{r}'). \end{equation*}

Thus plugging this expression for \alpha into equation (7) yields

    \begin{equation*} \phi(\mathbf{r}) = 2u(\mathbf{r}) - \frac{1}{2\pi r} \alpha = 2u(\mathbf{r}) - \frac{1}{2\pi r} \int_{\partial R} u(\mathbf{r}') \, dS(\mathbf{r}'). \end{equation*}

We’ve solved for our double layer potential!
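It is reassuring (and easy) to check numerically that this \phi really does satisfy the integral equation (4). Here is a small Python sketch on a circle of radius r, using equally spaced angles and an arbitrarily chosen boundary function u.

import numpy as np

r = 1.5
N = 400
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
dS = r * (2 * np.pi / N)  # arc-length element for equally spaced angles

u = np.cos(theta) + 0.3 * np.sin(2 * theta) + 0.7  # arbitrary boundary data

# Double layer potential from the formula above
phi = 2 * u - (1 / (2 * np.pi * r)) * np.sum(u) * dS

# Left-hand side of equation (4): (1/2) phi + 1/(4 pi r) * integral of phi
lhs = 0.5 * phi + (1 / (4 * np.pi * r)) * np.sum(phi) * dS

print(np.max(np.abs(lhs - u)))  # around machine precision: (4) is satisfied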

As promised, the double layer potential can be used to give a solution formula (known as the Poisson integral formula) for the Laplace Dirichlet problem. The details are a mechanical, but also somewhat technical, exercise in vector calculus identities. We work through the details in the following extra section.

Poisson Integral Formula
Let’s finish this up by using the double layer potential to derive a solution formula for the electric potential u(\mathbf{r}) at a point \mathbf{r} in the interior of the region. To do this, we use equation (3):

(9)   \begin{align*} u(\mathbf{r}) &= \frac{1}{2\pi} \int_{\partial R} \left( 2u(\mathbf{r}') - \frac{1}{2\pi r} \int_{\partial R} u(\mathbf{r}'') \, dS(\mathbf{r}'') \right) \frac{\partial}{\partial \nu_{\mathbf{r}'}} \ln \|\mathbf{r}-\mathbf{r}'\| \, dS(\mathbf{r}') \\ &= \frac{1}{\pi} \int_{\partial R} u(\mathbf{r}')\frac{\partial}{\partial \nu_{\mathbf{r}'}} \ln \|\mathbf{r}-\mathbf{r}'\| \, dS(\mathbf{r}') - \frac{1}{4\pi^2 r} \int_{\partial R} \frac{\partial}{\partial \nu_{\mathbf{r}'}} \ln\|\mathbf{r}-\mathbf{r}'\|  \, dS(\mathbf{r}')\int_{\partial R} u(\mathbf{r}') \, dS(\mathbf{r}'). \end{align*}

We now need to do a quick calculation which is somewhat technical and not particularly enlightening. We evaluate \frac{1}{2\pi}\int_{\partial R} \frac{\partial}{\partial \nu_{\mathbf{r}'}} \ln \|\mathbf{r}-\mathbf{r}'\| \, dS(\mathbf{r}') using the divergence theorem:

    \begin{align*} \frac{1}{2\pi}\int_{\partial R} \frac{\partial}{\partial \nu_{\mathbf{r}'}} \ln \|\mathbf{r}-\mathbf{r}'\|  \, dS(\mathbf{r}') &= \frac{1}{2\pi}\int_{\partial R} \nu_{\mathbf{r}'}\cdot \nabla_{\mathbf{r}'} \ln \|\mathbf{r}-\mathbf{r}'\|  \, dS(\mathbf{r}') = \frac{1}{2\pi}\int_R \nabla_{\mathbf{r}'}^2 \ln\|\mathbf{r}-\mathbf{r}'\|  \, d\mathbf{r}' \\ &= \int_R \delta(\mathbf{r}'-\mathbf{r}) \, d\mathbf{r}' = 1. \end{align*}

Here, \nabla denotes the gradient, \nu_{\mathbf{r}'} the normal vector to \partial R at the point \mathbf{r}', \cdot the dot product, \nabla^2 the Laplace operator, and \delta the Dirac delta “function”. The last equality holds because the function v(\mathbf{r}') = \tfrac{1}{2\pi} \ln \| \mathbf{r} - \mathbf{r}'\| is a so-called fundamental solution for the Laplace equation in the sense that \nabla_{\mathbf{r}'}^2 v(\mathbf{r}') = \delta(\mathbf{r}'-\mathbf{r}). Therefore, (9) simplifies to

    \begin{equation*} u(\mathbf{r}) = \frac{1}{\pi} \int_{\partial R} u(\mathbf{r}')\frac{\partial}{\partial \nu_{\mathbf{r}'}} \ln \|\mathbf{r}-\mathbf{r}'\| \, dS(\mathbf{r}') - \frac{1}{2\pi r} \int_{\partial R} u(\mathbf{r}') \, dS(\mathbf{r}'). \end{equation*}

Computing the boundary derivative for the circular region centered at the origin with radius r, we obtain the formula

    \begin{equation*} u(\mathbf{r}) = \frac{1}{2\pi r} \int_{\partial R} \left( 2\frac{r^2 - \mathbf{r} \cdot \mathbf{r}'}{\|\mathbf{r} - \mathbf{r}'\|^2} - 1 \right) u(\mathbf{r}') \, dS(\mathbf{r}') = \frac{r^2 - \| \mathbf{r} \|^2}{2\pi r} \int_{\partial R} \frac{u(\mathbf{r}')}{\|\mathbf{r} - \mathbf{r}'\|^2} \, dS(\mathbf{r}'). \end{equation*}

We’ve succeeded at deriving a solution formula for u(\mathbf{r}) for points \mathbf{r} in the interior of the disk in terms of u(\mathbf{r}') for points \mathbf{r}' on the boundary of the disk. This is known as the Poisson integral formula for the disk in two dimensions. This formula can be generalized to balls in higher dimensions, though this proof technique using “Sherman–Morrison” fails to work in more than two dimensions.

Sherman–Morrison for Integral Equations

Having achieved our main goal of deriving a solution formula for the 2D Laplace Dirichlet problem for a circular domain, I want to take a step back to present the approach from two sections ago in more generality. Consider a more general integral equation of the form

(10)   \begin{equation*} a \, \phi(\mathbf{x}) + \int_\Omega K(\mathbf{x},\mathbf{y}) \phi(\mathbf{y}) \, d\mathbf{y} = f(\mathbf{x}), \quad \mathbf{x} \in \Omega, \end{equation*}

where \Omega is some region in space, K(\cdot,\cdot), f(\cdot), and \phi(\cdot) are functions of one or two arguments on \Omega, and a is a nonzero constant. Such an integral equation is said to be of the second kind. The integral equation for the Laplace Dirichlet problem (2) is of this form with \Omega = \partial R, a = 1/2, K(\mathbf{x},\mathbf{y}) = \tfrac{1}{2\pi}\tfrac{\partial}{\partial \nu_{\mathbf{y}}} \ln \|\mathbf{x} - \mathbf{y}\|, and f(\mathbf{x}) = u(\mathbf{x}). We say the kernel K(\cdot,\cdot) is separable with rank k if K(\cdot,\cdot) can be expressed in the form

    \begin{equation*} K(\mathbf{x},\mathbf{y}) = g_1(\mathbf{x})h_1(\mathbf{y}) + g_2(\mathbf{x})h_2(\mathbf{y}) + \cdots + g_k(\mathbf{x})h_k(\mathbf{y}). \end{equation*}

With the circular domain, the Laplace Dirichlet integral equation (2) is separable with rank k = 1.5E.g., set g_1(\mathbf{x}) = 1 and h_1(\mathbf{y}) = 1/4\pi r. We shall focus on the second kind integral equation (10) assuming the kernel is separable with rank 1 (for simplicity, we set a = 1):

(11)   \begin{equation*} \phi(\mathbf{x}) + g(\mathbf{x}) \int_\Omega h(\mathbf{y}) \phi(\mathbf{y}) \, d\mathbf{y} = f(\mathbf{x}), \quad \mathbf{x} \in \Omega. \end{equation*}

Let’s try and write this equation in a way that’s more similar to the linear system of equations (5). To do this, we make use of linear operators defined on functions:

  • Let \operatorname{Id} denote the identity operator on functions: It takes as input a function \phi(\cdot) and outputs the function \phi(\cdot) unchanged.
  • Let I_h denote the “integration against h operator”: It takes as input a function \phi(\cdot) and outputs the number \int_\Omega h(\mathbf{y})\phi(\mathbf{y}) \, d\mathbf{y}.

With these notations, equation (11) can be written as

    \begin{equation*} (\operatorname{Id} + g I_h) \phi = f. \end{equation*}

Using the same derivation which led to the Sherman–Morrison formula for linear systems of equations, we can apply the Sherman–Morrison formula to this integral equation in operator form, yielding

    \begin{equation*} \phi = \left( \operatorname{Id}^{-1} - \frac{\operatorname{Id}^{-1}g I_h \operatorname{Id}^{-1}}{1 + I_h \operatorname{Id}^{-1} g} \right)f = f - \frac{g (I_h f)}{1+I_h g}. \end{equation*}

Therefore, the solution to the integral equation (11) is

    \begin{equation*} \phi(\mathbf{x}) = f(\mathbf{x}) - \frac{\int_\Omega h(\mathbf{y}) f(\mathbf{y}) \, d\mathbf{y}}{1 + \int_\Omega h(\mathbf{y})g(\mathbf{y}) \, d\mathbf{y} } g(\mathbf{x}). \end{equation*}

This can be interpreted as a kind of Sherman–Morrison formula for the integral equation (11).
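As a concrete illustration, the Python sketch below solves a rank-one second-kind integral equation on [0,1] using this formula and checks the answer by substituting it back into (11). The particular choices g(x) = x, h(y) = y, and f(x) = \cos(x) are arbitrary.

import numpy as np
from scipy.integrate import quad

g = lambda x: x
h = lambda y: y
f = lambda x: np.cos(x)

# Ingredients of the "Sherman--Morrison" solution formula
I_hf = quad(lambda y: h(y) * f(y), 0, 1)[0]
I_hg = quad(lambda y: h(y) * g(y), 0, 1)[0]
phi = lambda x: f(x) - (I_hf / (1 + I_hg)) * g(x)

# Check: phi(x) + g(x) * integral of h(y) phi(y) dy should equal f(x)
I_hphi = quad(lambda y: h(y) * phi(y), 0, 1)[0]
for x in [0.0, 0.3, 0.8]:
    print(phi(x) + g(x) * I_hphi - f(x))  # near machine precision at each test point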

One can also generalize to provide a solution formula for the second-kind integral equation (10) for a separable kernel with rank k; in this case, the natural matrix analog is now the Sherman–Morrison–Woodbury identity rather than the Sherman–Morrison formula. Note that this solution formula requires the solution of a k\times k system of linear equations. One can use this as a numerical method to solve second-kind integral equations: First, we approximate K by a separable kernel of a modest rank k and then compute the exact solution of the resulting integral equation with the approximate kernel.6A natural question is why one might want to solve an integral equation formulation of a partial differential equation like the Laplace or Helmholtz equation. An answer is that formulations based on second-kind integral equations tend to lead to systems of linear equations which are much better conditioned than those produced by other methods like the finite element method. They have a number of computational difficulties as well, as the resulting linear systems of equations are dense and may require elaborate quadrature rules to accurately compute.

My goal in writing this post was to discuss two topics which are both near and dear to my heart, integral equations and the Sherman–Morrison formula. I find the interplay of these two ideas to be highly suggestive. It illustrates the power of the analogy between infinite-dimensional linear equations, like differential and integral equations, and finite-dimensional ones, which are described by matrices. Infinite dimensions certainly do have their peculiarities and technical challenges, but it can be very useful to first pretend infinite-dimensional linear operators (like integral operators) are matrices, do calculations to derive some result, and then justify these computations rigorously post hoc.7The utility of this technique is somewhat of an open secret among some subset of mathematicians and scientists, but such heuristics are usually not communicated to students explicitly, at least in rigorous mathematics classes.