Low-Rank Approximation Toolbox: Analysis of the Randomized SVD

In the previous post, we looked at the randomized SVD. In this post, we continue this discussion by looking at the analysis of the randomized SVD. Our approach is adapted from a new analysis of the randomized SVD by Joel A. Tropp and Robert J. Webber.

There are many types of analysis one can do for the randomized SVD. For instance, letting B be a matrix and X the rank-k output of the randomized SVD, natural questions include:

  • What is the expected error \mathbb{E} \norm{X-B} measured in some norm \norm{\cdot}? What about expected squared error \mathbb{E} \left\|X-B\right\|^2? How do these answers change with different norms?
  • How large can the randomized SVD error \norm{X - B} get except for some small failure probability \delta?
  • How close is the randomized SVD truncated to rank \rho compared to the best rank-\rho approximation to B?
  • How close are the singular values and vectors of the randomized SVD approximation X compared to those of B?

Implicitly and explicitly, the analysis of Tropp and Webber provides satisfactory answers to a number of these questions. In the interest of simplicity, we shall focus on just one of these questions, proving the following result:

    \[\mathbb{E} \left\|B-X\right\|_{\rm F}^2 \le \left( 1 + \frac{r}{k-(r+1)}\right) \left\|B - \lowrank{B}_r \right\|_{\rm F}^2 \quad \text{for every $r \le k-2$}.\]

Here, \left\|\cdot\right\|_{\rm F} is the Frobenius norm. We encourage the interested reader to check out Tropp and Webber’s paper to see the methodology we summarize here used to prove numerous facts about the randomized SVD and its extensions using subspace iteration and block Krylov iteration.

Projection Formula

Let us recall the randomized SVD as we presented it in the last post:

  1. Collect information. Form Y = B\Omega where \Omega is an appropriately chosen n\times k random matrix.
  2. Orthogonalize. Compute an orthonormal basis Q for the column space of Y by, e.g., thin QR factorization.
  3. Project. Form C \coloneqq Q^*B, where {}^* denotes the conjugate transpose.

The randomized SVD is the approximation

    \[B\approx X \coloneqq QC.\]

It is easy to upgrade X = QC into a compact SVD form X = U\Sigma V^*, as we did as steps 4 and 5 in the previous post.

For the analysis of the randomized SVD, it is helpful to notice that the approximation X takes the form

    \[X = QC = QQ^*B = \Pi_{B\Omega} B.\]

Here, \Pi_{B\Omega} denotes the orthogonal projection onto the column space of the matrix B\Omega. We call X = \Pi_{B\Omega} B the projection formula for the randomized SVD.1This formula bears a good deal of similarity to the projection formula for the Nyström approximation. This is not a coincidence.
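To see the projection formula in action, here is a short numpy sketch of my own (not from Tropp and Webber’s paper) that runs steps 1–3 and confirms numerically that QC agrees with \Pi_{B\Omega}B. The test matrix B, its dimensions, and the rank k are arbitrary choices for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    m, n, k = 100, 80, 10
    # An arbitrary test matrix with decaying singular values
    B = rng.standard_normal((m, n)) * (0.8 ** np.arange(n))

    # Steps 1-3 of the randomized SVD
    Omega = rng.standard_normal((n, k))      # random test matrix
    Y = B @ Omega                            # 1. collect information
    Q, _ = np.linalg.qr(Y)                   # 2. orthonormal basis for col(Y)
    C = Q.conj().T @ B                       # 3. project
    X = Q @ C                                # the rank-k approximation

    # Projection formula: X equals the orthogonal projection of B onto col(B @ Omega)
    Pi = Q @ Q.conj().T
    print(np.allclose(X, Pi @ B))            # True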

Aside: Quadratic Unitarily Invariant Norms

To state our error bounds in the greatest generality, we adopt the language of quadratic unitarily invariant norms. Recall that a square matrix U is unitary if

    \[U^*U = UU^* = I.\]

You may recall from my post on Nyström approximation that a norm \left\|\cdot\right\|_{\rm UI} on matrices is unitarily invariant if

    \[\left\|UBV\right\|_{\rm UI} = \left\|B\right\|_{\rm UI} \quad \text{for all unitary matrices $U$, $V$ and any matrix $B$}.\]

A unitarily invariant norm \left\|\cdot\right\|_{\rm Q} is said to be quadratic if there exists another unitarily invariant norm \left\|\cdot\right\|_{\rm UI} such that

(1)   \[\left\|B\right\|_{\rm Q}^2 = \left\|B^*B\right\|_{\rm UI} = \left\|BB^*\right\|_{\rm UI} \quad \text{for every matrix $B$.} \]

Many examples of quadratic unitarily invariant norms are found among the Schatten p-norms, defined as

    \[\left\|B\right\|_{S_p}^p \coloneqq \sum_i \sigma_i(B)^p.\]

Here, \sigma_1(B) \ge \sigma_2(B) \ge \cdots denote the decreasingly ordered singular values of B. The Schatten 2-norm \left\|\cdot\right\|_{S_2} is the Frobenius norm of a matrix. The spectral norm (i.e., operator 2-norm) is the Schatten \infty-norm, defined to be

    \[\norm{B} = \left\|B\right\|_{S_\infty} \coloneqq \max_i \sigma_i(B) = \sigma_1(B).\]

The Schatten p-norms are unitarily invariant norms for every 1\le p \le \infty. However, the Schatten p-norms are quadratic unitarily invariant norms only for 2\le p \le \infty since

    \[\left\|B\right\|_{S_p}^2 = \left\|B^*B\right\|_{S_{p/2}}.\]
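As a quick numerical sanity check of these definitions (my own illustration; the helper schatten below is not standard library code), one can verify that the Schatten 2- and \infty-norms reduce to the Frobenius and spectral norms and that the quadratic property (1) holds for p = 4:

    import numpy as np

    def schatten(B, p):
        """Schatten p-norm of B: the l^p norm of its singular values."""
        s = np.linalg.svd(B, compute_uv=False)
        return np.linalg.norm(s, ord=p)

    rng = np.random.default_rng(0)
    B = rng.standard_normal((30, 20))

    # The Schatten 2-norm is the Frobenius norm; the Schatten-infinity norm is the spectral norm
    print(np.isclose(schatten(B, 2), np.linalg.norm(B, "fro")))
    print(np.isclose(schatten(B, np.inf), np.linalg.norm(B, 2)))

    # Quadratic property (1) for p = 4: ||B||_{S_4}^2 = ||B^* B||_{S_2}
    print(np.isclose(schatten(B, 4) ** 2, schatten(B.conj().T @ B, 2)))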

For the remainder of this post, we let \left\|\cdot\right\|_{\rm Q} and \left\|\cdot\right\|_{\rm UI} be a quadratic unitarily invariant norm pair satisfying (1).

Error Decomposition

The starting point of our analysis is the following decomposition of the error of the randomized SVD:

(2)   \[\left\|B - X\right\|_{\rm Q}^2 \le \left\|B - \lowrank{B}_r\right\|_{\rm Q}^2 + \left\|\lowrank{B}_r - \Pi_{B\Omega} \lowrank{B}_r\right\|_{\rm Q}^2. \]

Recall that we have defined \lowrank{B}_r to be an optimal rank-r approximation to B:

    \[\left\|B - \lowrank{B}_r\right\|_{\rm Q} \le \left\|B - C\right\|_{\rm Q} \quad \text{for any rank-$r$ matrix $C$}.\]

Thus, the error decomposition says that the excess error is bounded by

    \[\left\|B - X\right\|_{\rm Q}^2 - \left\|B - \lowrank{B}_r\right\|_{\rm Q}^2 \le \left\|\lowrank{B}_r - \Pi_{B\Omega} \lowrank{B}_r\right\|_{\rm Q}^2.\]

Using the projection formula, we can prove the error decomposition (2) directly:

    \begin{align*}\left\|B - X\right\|_{\rm Q}^2 &= \left\|B - \Pi_{B\Omega} B \right\|_{\rm Q}^2\\&= \norm{(I-\Pi_{B\Omega})BB^*(I-\Pi_{B\Omega})}_{\rm UI}  \\&\le \norm{(I-\Pi_{B\Omega})(BB^* - \lowrank{B}_r\lowrank{B}_r^*)(I-\Pi_{B\Omega})}_{\rm UI} \\&\qquad + \norm{(I-\Pi_{B\Omega})\lowrank{B}_r\lowrank{B}_r^*(I-\Pi_{B\Omega})}_{\rm UI}\\&\le \left\|BB^* - \lowrank{B}_r\lowrank{B}_r^*\right\|_{\rm UI} + \left\|\lowrank{B}_r - \Pi_{B\Omega} \lowrank{B}_r\right\|_{\rm Q}^2 \\&\le \left\|B - \lowrank{B}_r \right\|_{\rm Q}^2 + \left\|\lowrank{B}_r- \Pi_{B\Omega} \lowrank{B}_r\right\|_{\rm Q}^2.\end{align*}

The first line is the projection formula, the second is the relation (1) between \norm{\cdot}_{\rm Q} and \norm{\cdot}_{\rm UI}, and the third is the triangle inequality. For the fifth line, we use the fact that I - \Pi_{B\Omega} is an orthoprojector and thus has spectral norm \norm{I - \Pi_{B\Omega}} \le 1 together with the fact that

    \[\left\|BCD\right\|_{\rm UI} \le \norm{B} \cdot \left\|C\right\|_{\rm UI}\cdot \norm{D} \quad \text{for all matrices $B,C,D$.}\]

For the final inequality, we used the commutation rule

    \[(B - \lowrank{B}_r)(B - \lowrank{B}_r)^* = BB^* - \lowrank{B}_r\lowrank{B}_r^*,\]
which holds because the cross terms B\lowrank{B}_r^* and \lowrank{B}_rB^* both equal \lowrank{B}_r\lowrank{B}_r^*.
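For readers who like numerical evidence, here is a small numpy sketch of mine checking the error decomposition (2) in the Frobenius norm; the helper best_rank_r and the test matrix are arbitrary choices for illustration.

    import numpy as np

    def best_rank_r(B, r):
        """Optimal rank-r approximation of B via a truncated SVD."""
        U, s, Vh = np.linalg.svd(B, full_matrices=False)
        return U[:, :r] @ np.diag(s[:r]) @ Vh[:r, :]

    rng = np.random.default_rng(0)
    m, n, k, r = 60, 50, 12, 5
    B = rng.standard_normal((m, n)) * (0.8 ** np.arange(n))   # decaying columns

    Omega = rng.standard_normal((n, k))
    Q, _ = np.linalg.qr(B @ Omega)
    X = Q @ (Q.T @ B)                        # randomized SVD approximation
    Pi = Q @ Q.T                             # projector onto col(B @ Omega)
    Br = best_rank_r(B, r)

    lhs = np.linalg.norm(B - X, "fro") ** 2
    rhs = np.linalg.norm(B - Br, "fro") ** 2 + np.linalg.norm(Br - Pi @ Br, "fro") ** 2
    print(lhs <= rhs + 1e-10)                # error decomposition (2) holds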

Bounding the Error

In this section, we shall continue to refine the error decomposition (2). Consider a thin SVD of B, partitioned as

    \[B = \onebytwo{U_1}{U_2} \twobytwo{\Sigma_1}{0}{0}{\Sigma_2}\onebytwo{V_1}{V_2}^*\]

where

  • \onebytwo{U_1}{U_2} and \onebytwo{V_1}{V_2} both have orthonormal columns,
  • \Sigma_1 and \Sigma_2 are square diagonal matrices with nonnegative entries,
  • U_1 and V_1 have r columns, and
  • \Sigma_1 is an r\times r matrix.

Under this notation, the best rank-r approximation is \lowrank{B}_r = U_1\Sigma_1V_1^*. Define

    \[\twobyone{\Omega_1}{\Omega_2} \coloneqq \twobyone{V_1^*}{V_2^*} \Omega.\]

We assume throughout that \Omega_1 is full-rank (i.e., \rank \Omega_1 = r); for a standard Gaussian test matrix \Omega, this holds with probability one.

Applying the error decomposition (2), we get

(3)   \begin{align*}\left\|B - X\right\|_{\rm Q}^2&\le \norm{ \onebytwo{U_1}{U_2} \twobytwo{0}{0}{0}{\Sigma_2}\onebytwo{V_1}{V_2}^*}_{\rm Q}^2 + \norm{(I-\Pi_{B\Omega})U_1\Sigma_1V_1^*}_{\rm Q}^2 \\&= \left\| \Sigma_2 \right\|_{\rm Q}^2 + \norm{(I-\Pi_{B\Omega})U_1\Sigma_1}_{\rm Q}^2. \end{align*}

For the second line, we used the unitary invariance of \left\|\cdot\right\|_{\rm Q}. Observe that the column space of B\Omega\Omega_1^\dagger is a subset of the column space of B\Omega, so

(4)   \[\norm{(I-\Pi_{B\Omega})U_1\Sigma_1}_{\rm Q}^2 \le \norm{(I-\Pi_{B\Omega\Omega_1^\dagger})U_1\Sigma_1}_{\rm Q}^2 = \norm{\Sigma_1U_1^*(I-\Pi_{B\Omega\Omega_1^\dagger})U_1\Sigma_1}_{\rm UI}. \]

Let’s work on simplifying the expression

    \[\Sigma_1U_1^*(I-\Pi_{B\Omega\Omega_1^\dagger})U_1\Sigma_1.\]

First, observe that

    \[B\Omega\Omega_1^\dagger = B\onebytwo{V_1}{V_2}\twobyone{\Omega_1}{\Omega_2}\Omega_1^\dagger = \onebytwo{U_1}{U_2} \twobytwo{\Sigma_1}{0}{0}{\Sigma_2} \twobyone{I}{\Omega_2\Omega_1^\dagger} = U_1\Sigma_1 + U_2\Sigma_2\Omega_2\Omega_1^\dagger.\]

Thus, the projector \Pi_{B\Omega\Omega_1^\dagger} takes the form

    \begin{align*}\Pi_{B\Omega\Omega_1^\dagger} &= (B\Omega\Omega_1^\dagger) \left[(B\Omega\Omega_1^\dagger)^*(B\Omega\Omega_1^\dagger)\right]^\dagger (B\Omega\Omega_1^\dagger)^* \\&= (U_1\Sigma_1 + U_2\Sigma_2\Omega_2\Omega_1^\dagger) \left[\Sigma_1^2 + (\Sigma_2\Omega_2\Omega_1^\dagger)^*(\Sigma_2\Omega_2\Omega_1^\dagger)\right]^\dagger (U_1\Sigma_1 + U_2\Sigma_2\Omega_2\Omega_1^\dagger)^*.\end{align*}

Here, {}^\dagger denotes the Moore–Penrose pseudoinverse, which reduces to the ordinary matrix inverse for a square nonsingular matrix. Thus,

(5)   \[\Sigma_1U_1^*(I-\Pi_{B\Omega\Omega_1^\dagger})U_1\Sigma_1 = \Sigma_1^2 - \Sigma_1^2 \left[\Sigma_1^2 + (\Sigma_2\Omega_2\Omega_1^\dagger)^*(\Sigma_2\Omega_2\Omega_1^\dagger)\right]^\dagger\Sigma_1^2. \]

Remarkably, this seemingly convoluted combination of matrices is actually a well-studied operation in matrix theory, the parallel sum. The parallel sum of positive semidefinite matrices A and H is defined as

(6)   \[A : H \coloneqq A - A(A+H)^\dagger A. \]

We will have much more to say about the parallel sum soon. For now, we use this notation to rewrite (5) as

    \[\Sigma_1U_1^*(I-\Pi_{B\Omega\Omega_1^\dagger})U_1\Sigma_1 = \Sigma_1^2 : [(\Sigma_2\Omega_2\Omega_1^\dagger)^*(\Sigma_2\Omega_2\Omega_1^\dagger)].\]

Plugging this back into (4) and then (3), we obtain the error bound

(7)   \[\left\|B - X\right\|_{\rm Q}^2\le \left\| \Sigma_2 \right\|_{\rm Q}^2 + \left\|\Sigma_1^2 : [(\Sigma_2\Omega_2\Omega_1^\dagger)^*(\Sigma_2\Omega_2\Omega_1^\dagger)]\right\|_{\rm UI}. \]

Simplifying this expression further will require more knowledge about parallel sums, which we shall discuss now.
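Before the aside, here is a short numpy verification of identity (5) (my own sketch, with arbitrary dimensions): it builds the partitioned SVD, forms \Omega_1 and \Omega_2, and checks that \Sigma_1U_1^*(I-\Pi_{B\Omega\Omega_1^\dagger})U_1\Sigma_1 agrees with the parallel sum defined in (6).

    import numpy as np

    def parallel_sum(A, H):
        """Parallel sum (6): A : H = A - A (A + H)^+ A."""
        return A - A @ np.linalg.pinv(A + H) @ A

    rng = np.random.default_rng(0)
    m, n, k, r = 40, 30, 10, 4
    B = rng.standard_normal((m, n)) * (0.7 ** np.arange(n))

    U, s, Vh = np.linalg.svd(B, full_matrices=False)
    U1, S1, V1 = U[:, :r], np.diag(s[:r]), Vh[:r, :].T
    U2, S2, V2 = U[:, r:], np.diag(s[r:]), Vh[r:, :].T

    Omega = rng.standard_normal((n, k))
    O1, O2 = V1.T @ Omega, V2.T @ Omega
    W = S2 @ O2 @ np.linalg.pinv(O1)              # the matrix Sigma_2 Omega_2 Omega_1^+

    # Orthoprojector onto the column space of B Omega Omega_1^+
    Q, _ = np.linalg.qr(B @ Omega @ np.linalg.pinv(O1))
    Pi = Q @ Q.T

    lhs = S1 @ U1.T @ (np.eye(m) - Pi) @ U1 @ S1
    rhs = parallel_sum(S1 @ S1, W.T @ W)          # identity (5) as a parallel sum
    print(np.allclose(lhs, rhs))                   # True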

Aside: Parallel Sums

Let us put aside the randomized SVD for a moment and digress to the seemingly unrelated topic of electrical circuits. Suppose we have a battery of voltage v and we connect the ends using a wire of resistance a. The current \mathrm{curr}_a is given by Ohm’s law

    \[v = a \cdot \mathrm{curr}_a.\]

Similarly, if the wire is replaced by a different wire with resistance h, the current \mathrm{curr}_h is then

    \[v = h \cdot \mathrm{curr}_h.\]

Now comes the interesting question. What if we connect the battery by both wires (resistances a and h) simultaneously in parallel, so that current can flow through either wire? Since wires of resistances a and h still have voltage v, Ohm’s law still applies and thus the total current is

    \[\mathrm{curr}_{\rm total} = \mathrm{curr}_a + \mathrm{curr}_h = \frac{v}{a} + \frac{v}{h} = v \left(a^{-1}+h^{-1}\right).\]

The net effect of connecting resisting wires of resistances a and h is the same as a single resisting wire of resistance

(8)   \[a:h \coloneqq \frac{v}{\mathrm{curr}_{\rm total}} = \left( a^{-1}+h^{-1}\right)^{-1}. \]

We call the operation a \mathbin{:} h the parallel sum of a and h because it describes how resistances combine when connected in parallel.

The parallel sum operation : can be extended to all nonnegative numbers a and h by continuity:

    \[a:h \coloneqq \lim_{b\downarrow a, \: k\downarrow h} b:k = \begin{cases}\left( a^{-1}+h^{-1}\right)^{-1}, & a,h > 0 ,\\0, & \textrm{otherwise}.\end{cases}\]

Electrically, this definition states that one obtains a short circuit (resistance 0) if either of the wires carries zero resistance.

Parallel sums of matrices were introduced by William N. Anderson, Jr. and Richard J. Duffin as a generalization of the parallel sum operation from electrical circuits. There are several equivalent definitions. The natural definition (8) applies for positive definite matrices

    \[A:H = (A^{-1}+H^{-1})^{-1} \quad \text{for $A,H$ positive definite.}\]

This definition can be extended to positive semidefinite matrices by continuity. It can be useful to have an explicit definition which applies even to positive semidefinite matrices. To do so, observe that the (scalar) parallel sum satisfies the identity

    \[a:h = \frac{1}{a^{-1}+h^{-1}} = \frac{ah}{a+h} = \frac{a(a+h)-a^2}{a+h} = a - a(a+h)^{-1}a.\]

To extend to matrices, we capitalize the letters and use the Moore–Penrose pseudoinverse in place of the inverse:

    \[A : H = A - A(A+H)^\dagger A.\]

This is exactly the definition (6) of the parallel sum we gave above, which arose naturally in our randomized SVD error bound.
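As a quick check of this definition (my own numerical illustration, not part of Anderson and Duffin’s development), the pseudoinverse formula agrees with (A^{-1}+H^{-1})^{-1} for positive definite inputs and is symmetric in its arguments:

    import numpy as np

    def parallel_sum(A, H):
        """Parallel sum A : H = A - A (A + H)^+ A, valid for all positive semidefinite A, H."""
        return A - A @ np.linalg.pinv(A + H) @ A

    rng = np.random.default_rng(0)
    M1, M2 = rng.standard_normal((5, 5)), rng.standard_normal((5, 5))
    A, H = M1 @ M1.T + np.eye(5), M2 @ M2.T + np.eye(5)   # positive definite test matrices

    # Agrees with the "resistor" formula (A^{-1} + H^{-1})^{-1} for positive definite inputs
    print(np.allclose(parallel_sum(A, H),
                      np.linalg.inv(np.linalg.inv(A) + np.linalg.inv(H))))
    # Symmetry: A : H = H : A
    print(np.allclose(parallel_sum(A, H), parallel_sum(H, A)))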

The parallel sum enjoys a number of beautiful properties, some of which we list here:

  1. Symmetry. A:H = H:A,
  2. Simplified formula. A:H = (A^{-1}+H^{-1})^{-1} for positive definite matrices,
  3. Bounds. A:H \preceq A and A:H \preceq H.
  4. Monotone. A\mapsto (A:H) is monotone in the Loewner order.
  5. Concavity. The map (A,H) \mapsto A:H is (jointly) concave (with respect to the Loewner order).
  6. Conjugation. For any square C, C^*(A:H)C = (C^*AC):(C^*HC).
  7. Trace. \tr(A:H) \le (\tr A):(\tr H).2In fact, this property holds with the trace replaced by any positive linear map. That is, if \Phi is a linear map from square matrices to square matrices satisfying \Phi(A) \succeq 0 if A\succeq 0, then \Phi(A:H) \preceq \Phi(A):\Phi(H).
  8. Unitarily invariant norms. \left\|A:H\right\|_{\rm UI} \le \left\|A\right\|_{\rm UI} : \left\|H\right\|_{\rm UI}.

For completionists, we include a proof of the last property for reference.

Proof of Property 8
Let \left\|\cdot\right\|_{\rm UI}' be the unitarily invariant norm dual to \left\|\cdot\right\|_{\rm UI}.3Dual with respect to the Frobenius inner product, that is. By duality, we have

    \begin{align*}\left\|A:H\right\|_{\rm UI} &= \max_{M\succeq 0,\: \left\|M\right\|_{\rm UI}'\le 1} \tr((A:H)M) \\&= \max_{M\succeq 0,\: \left\|M\right\|_{\rm UI}'\le 1} \tr(M^{1/2}(A:H)M^{1/2})\\&= \max_{M\succeq 0,\: \left\|M\right\|_{\rm UI}'\le 1} \tr((M^{1/2}AM^{1/2}):(M^{1/2}HM^{1/2}))  \\&\le \max_{M\succeq 0,\: \left\|M\right\|_{\rm UI}'\le 1} \left[\tr(M^{1/2}AM^{1/2}):\tr(M^{1/2}HM^{1/2}) \right] \\&\le \left[\max_{M\succeq 0,\: \left\|M\right\|_{\rm UI}'\le 1} \tr(M^{1/2}AM^{1/2})\right]:\left[\max_{M\succeq 0,\: \left\|M\right\|_{\rm UI}'\le 1} \tr(M^{1/2}HM^{1/2})\right]\\&= \left\|A\right\|_{\rm UI} : \left\|H\right\|_{\rm UI}.\end{align*}


The first line is duality, the second holds because M\succeq 0, the third line is property 6, the fourth line is property 7, the fifth line is property 4, and the sixth line is duality.

Nearing the Finish: Enter the Randomness

Equipped with knowledge about parallel sums, we are now prepared to return to bounding the error of the randomized SVD. Apply property 8 of the parallel sum to the randomized SVD bound (7) to obtain

    \begin{align*}\left\|B - X\right\|_{\rm Q}^2&\le \left\| \Sigma_2 \right\|_{\rm Q}^2 + \left\|\Sigma_1^2 : [(\Sigma_2\Omega_2\Omega_1^\dagger)^*(\Sigma_2\Omega_2\Omega_1^\dagger)]\right\|_{\rm UI} \\&\le \left\| \Sigma_2 \right\|_{\rm Q}^2 + \left\|\Sigma_1^2\right\|_{\rm UI} : \left\|[(\Sigma_2\Omega_2\Omega_1^\dagger)^*(\Sigma_2\Omega_2\Omega_1^\dagger)]\right\|_{\rm UI} \\&= \left\| \Sigma_2 \right\|_{\rm Q}^2 + \left\|\Sigma_1\right\|_{\rm Q}^2 : \left\|\Sigma_2\Omega_2\Omega_1^\dagger\right\|^2_{\rm Q}.\end{align*}

This bound is totally deterministic: It holds for any choice of test matrix \Omega, random or otherwise. We can use this bound to analyze the randomized SVD in many different ways for different kinds of random (or non-random) matrices \Omega. Tropp and Webber provide a number of examples instantiating this general bound in different ways.

We shall present only the simplest error bound for randomized SVD. We make the following assumptions:

  • The test matrix \Omega is standard Gaussian.4By which, we mean the entries of \Omega are independent and standard normal.
  • The norm \left\|\cdot\right\|_{\rm Q} is the Frobenius norm \left\|\cdot\right\|_{\rm F}.
  • The randomized SVD rank k satisfies k \ge r+2 (equivalently, r \le k-2).

Under these assumptions and using the property a : h \le h (applied to the scalars a = \left\|\Sigma_1\right\|_{\rm F}^2 and h = \left\|\Sigma_2\Omega_2\Omega_1^\dagger\right\|_{\rm F}^2) together with the identity \left\|\Sigma_2\right\|_{\rm F} = \left\|B - \lowrank{B}_r\right\|_{\rm F}, we obtain the following bound on the expected error of the randomized SVD

(9)   \[\mathbb{E} \left\|B - X\right\|_{\rm F}^2\le \left\| B - \lowrank{B}_r \right\|_{\rm F}^2 + \mathbb{E}\left\|\Sigma_2\Omega_2\Omega_1^\dagger\right\|^2_{\rm F}. \]

All that is left is to compute the expectation of \left\|\Sigma_2\Omega_2\Omega_1^\dagger\right\|_{\rm F}^2. Remarkably, this can be done in closed form.

Aside: Expectation of Gaussian–Inverse Gaussian Matrix Product

The final ingredient in our randomized SVD analysis is the following computation involving Gaussian random matrices. Let G and \Gamma be independent standard Gaussian random matrices with \Gamma of size r\times k, k\ge r+2, and let S and R be fixed matrices of compatible dimensions. Then

    \[\mathbb{E} \left\|SG\Gamma^\dagger R\right\|_{\rm F}^2 = \frac{1}{k-(r+1)}\left\|S\right\|_{\rm F}^2\left\|R\right\|_{\rm F}^2.\]

For the probability lovers, we will now go on to prove this. For those who are less probabilistically inclined, this result can be treated as a black box.
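For those who would rather see empirical evidence, a quick Monte Carlo experiment (my own sketch; the sizes of S, G, \Gamma, and R are arbitrary) matches the formula closely:

    import numpy as np

    rng = np.random.default_rng(0)
    r, k = 4, 10                        # requires k >= r + 2
    S = rng.standard_normal((6, 7))     # fixed matrices S and R
    R = rng.standard_normal((r, 5))

    trials, vals = 20_000, []
    for _ in range(trials):
        G = rng.standard_normal((7, k))          # Gaussian G, conformal with S and Gamma^+
        Gamma = rng.standard_normal((r, k))      # Gaussian Gamma of size r x k
        vals.append(np.linalg.norm(S @ G @ np.linalg.pinv(Gamma) @ R, "fro") ** 2)

    exact = np.linalg.norm(S, "fro") ** 2 * np.linalg.norm(R, "fro") ** 2 / (k - (r + 1))
    print(np.mean(vals), exact)         # the Monte Carlo average should be close to exact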

Proof of Random Matrix Formula
To prove this, first consider a simpler case where \Gamma^\dagger does not appear:

    \[\mathbb{E} \left\|SGW\right\|_{\rm F}^2 = \mathbb{E} \sum_{ij} \left(\sum_{k\ell} s_{ik}g_{k\ell}w_{\ell j} \right)^2 = \mathbb{E} \sum_{ijk\ell} s_{ik}^2 w_{\ell j}^2 = \left\|S\right\|_{\rm F}^2\left\|W\right\|_{\rm F}^2.\]



Here, we used the fact that the entries g_{k\ell} are independent, mean-zero, and variance-one. Thus, applying this result conditionally on \Gamma with W = \Gamma^\dagger R, we get

(10)   \[\mathbb{E} \left\|SG\Gamma^\dagger R\right\|_{\rm F}^2 =\mathbb{E}\left[ \mathbb{E}\left[ \left\|SG\Gamma^\dagger R\right\|_{\rm F}^2 \,\middle|\, \Gamma\right]\right] =\left\|S\right\|_{\rm F}^2\cdot \mathbb{E} \left\|\Gamma^\dagger R\right\|_{\rm F}^2. \]



To compute \mathbb{E} \left\|\Gamma^\dagger R\right\|_{\rm F}^2, we rewrite using the trace

    \[\mathbb{E} \left\|\Gamma^\dagger R\right\|_{\rm F}^2 = \mathbb{E} \tr \left(\Gamma^\dagger RR^* \Gamma^{*\dagger} \right)= \tr \left(RR^* \mathbb{E}[\Gamma^{*\dagger}\Gamma^\dagger] \right) = \tr \left(RR^* \mathbb{E}[(\Gamma\Gamma^*)^{-1}] \right).\]



The matrix (\Gamma\Gamma^*)^{-1} is known as an inverse-Wishart matrix and is a well-studied random matrix model. In particular, its expectation is known to be (k-(r+1))^{-1}I. Thus, we obtain

    \[\mathbb{E} \left\|\Gamma^\dagger R\right\|_{\rm F}^2 = \tr \left(RR^* \mathbb{E}[(\Gamma\Gamma^*)^{-1}] \right) = \frac{\tr \left(RR^* \right)}{k-(r+1)} = \frac{\left\|R\right\|_{\rm F}^2}{k-(r+1)}.\]



Plugging into (10), we obtain the desired conclusion

    \[\mathbb{E} \left\|SG\Gamma^\dagger R\right\|_{\rm F}^2 =\left\|S\right\|_{\rm F}^2\cdot \mathbb{E} \left\|\Gamma^\dagger R\right\|_{\rm F}^2 = \frac{1}{k-(r+1)}\left\|S\right\|_{\rm F}^2\left\|R\right\|_{\rm F}^2.\]



This completes the proof.

Wrapping it All Up

We’re now within striking distance. All that is left is to assemble the pieces we have been working with to obtain a final bound for the randomized SVD.

Recall we have chosen the random matrix \Omega to be standard Gaussian. By the rotational invariance of the Gaussian distribution, \Omega_1 and \Omega_2 are independent and standard Gaussian as well. Plugging the matrix expectation formula into (9) (applied with S = \Sigma_2, G = \Omega_2, \Gamma = \Omega_1, and R = I_{r\times r}, for which \left\|I_{r\times r}\right\|_{\rm F}^2 = r) then completes the analysis

    \begin{align*}\mathbb{E} \left\|B - X\right\|_{\rm F}^2&\le \left\| B - \lowrank{B}_r \right\|_{\rm F}^2 + \mathbb{E}\norm{\Sigma_2\Omega_2\Omega_1^\dagger I_{r\times r}}^2_{\rm F} \\&= \left\| B - \lowrank{B}_r \right\|_{\rm F}^2 + \frac{\left\|\Sigma_2\right\|_{\rm F}^2\norm{I_{r\times r}}_{\rm F}^2}{k-(r+1)} \\&= \left(1 + \frac{r}{k-(r+1)}\right) \left\| B - \lowrank{B}_r \right\|_{\rm F}^2.\end{align*}
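As a final sanity check, here is a small Monte Carlo experiment of mine (arbitrary test matrix and parameters) comparing the average squared Frobenius error of the randomized SVD against the bound we just proved:

    import numpy as np

    rng = np.random.default_rng(0)
    m, n, k, r = 50, 40, 12, 6
    B = rng.standard_normal((m, n)) * (0.7 ** np.arange(n))   # decaying singular values

    s = np.linalg.svd(B, compute_uv=False)
    best_err = np.sum(s[r:] ** 2)                # || B - [[B]]_r ||_F^2

    trials, errs = 2_000, []
    for _ in range(trials):
        Omega = rng.standard_normal((n, k))
        Q, _ = np.linalg.qr(B @ Omega)
        X = Q @ (Q.T @ B)
        errs.append(np.linalg.norm(B - X, "fro") ** 2)

    bound = (1 + r / (k - (r + 1))) * best_err
    print(np.mean(errs), bound)                  # the average error should fall below the bound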

Low-Rank Approximation Toolbox: Randomized SVD

The randomized SVD and its relatives are workhorse algorithms for low-rank approximation. In this post, I hope to provide a fairly brief introduction to this useful method. In the following post, we’ll look into its analysis.

The randomized SVD is dead-simple to state: To approximate an m\times n matrix B by a rank-k matrix, perform the following steps:

  1. Collect information. Form Y = B\Omega where \Omega is an appropriately chosen n\times k random matrix.
  2. Orthogonalize. Compute an orthonormal basis Q for the column space of Y by, e.g., thin QR factorization.
  3. Project. Form C \coloneqq Q^*B. (Here, {}^* denotes the conjugate transpose of a complex matrix, which reduces to the ordinary transpose if the matrix is real.)

If all one cares about is a low-rank approximation to the matrix B, one can stop the randomized SVD here, having obtained the approximation

    \[B \approx X \coloneqq Q C.\]

As the name “randomized SVD” suggests, one can easily “upgrade” the approximation X = QC to a compact SVD format:

  1. SVD. Compute a compact SVD of the matrix C: C = \hat{U}\Sigma V^* where \hat{U} and \Sigma are k\times k and V is n\times k.
  2. Clean up. Set U \coloneqq Q\hat{U}.

We now have approximated B by a rank-k matrix X in compact SVD form:

    \[B \approx X = QC = U\Sigma V^*.\]

One can use the factors U, \Sigma, and V of the approximation X \approx B as estimates of the singular vectors and values of the matrix B.
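Here is a minimal numpy sketch of steps 1–5 (an illustration of mine, not a carefully engineered implementation):

    import numpy as np

    def randomized_svd(B, k, rng=None):
        """Rank-k randomized SVD: returns U, sigma, V with B approximately U @ diag(sigma) @ V*."""
        rng = np.random.default_rng() if rng is None else rng
        m, n = B.shape
        Omega = rng.standard_normal((n, k))                      # 1. collect information
        Q, _ = np.linalg.qr(B @ Omega)                           # 2. orthogonalize
        C = Q.conj().T @ B                                       # 3. project
        Uhat, sigma, Vh = np.linalg.svd(C, full_matrices=False)  # 4. SVD of the small matrix C
        return Q @ Uhat, sigma, Vh.conj().T                      # 5. clean up: U = Q @ Uhat

    # Example usage on an arbitrary test matrix
    rng = np.random.default_rng(0)
    B = rng.standard_normal((500, 200)) * (0.9 ** np.arange(200))
    U, sigma, V = randomized_svd(B, k=20, rng=rng)
    X = U @ np.diag(sigma) @ V.conj().T
    print(np.linalg.norm(B - X, "fro") / np.linalg.norm(B, "fro"))  # relative error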

What Can You Do with the Randomized SVD?

The randomized SVD has many uses. Pretty much anywhere you would ordinarily use an SVD is a candidate for the randomized SVD. Applications include model reduction, fast direct solvers, computational biology, uncertainty quantification, among numerous others.

To speak in broad generalities, there are two ways to use the randomized SVD:

  • As an approximation. First, we could use X as a compressed approximation to B. Using the format X = QC, X requires just (m+n)k numbers of storage, whereas B requires a much larger mn numbers of storage. As I detailed at length in my blog post on low-rank matrices, many operations are cheap for the low-rank matrix X. For instance, we can compute a matrix–vector product Xz in roughly (m+n)k operations rather than the mn operations to compute Bz. For these use cases, we don’t need the “SVD” part of the randomized SVD.
  • As an approximate SVD. Second, we can actually use the U, \Sigma, and V produced by the randomized SVD as approximations to the singular vectors and values of B. In these applications, we should be careful. Just because X is a good approximation to B, it is not necessarily the case that U, \Sigma, and V are close to the singular vectors and values of B. To use the randomized SVD in this context, it is safest to use posterior diagnostics (such as the ones developed in this paper of mine) to ensure that the singular values/vectors of interest are computed to a high enough accuracy. A useful rule of thumb is the smallest two to five singular values and vectors computed by the randomized SVD are suspect and should be used in applications only with extreme caution. When the appropriate care is taken and for certain problems, the randomized SVD can provide accurate singular vectors far faster than direct methods.

Accuracy of the Randomized SVD

How accurate is the approximation X produced by the randomized SVD? No rank-k approximation can be more accurate than the best rank-k approximation \lowrank{B}_k to the matrix B, furnished by an exact SVD of B truncated to rank k. Thus, we’re interested in how much larger B - X is than B - \lowrank{B}_k.

Frobenius Norm Error

A representative result describing the error of the randomized SVD is the following:

(1)   \[\mathbb{E} \left\|B - X\right\|_{\rm F}^2 \le \min_{r \le k-2} \left( 1 + \frac{r}{k-(r+1)} \right) \left\|B - \lowrank{B}_r\right\|^2_{\rm F}. \]

This result states that the average squared Frobenius-norm error of the randomized SVD is comparable with the best rank-r approximation error for any r\le k-2. In particular, choosing k = r+2, we see that the randomized SVD error is at most (1+r) times the best rank-r approximation

(2)   \[\mathbb{E} \left\|B - X\right\|_{\rm F}^2 \le (1+r) \left\|B - \lowrank{B}_r \right\|^2_{\rm F} \quad \text{for $k=r+2$}. \]

Choosing k = 2r+1, we see that the randomized SVD has at most twice the error of the best rank-r approximation

(3)   \[\mathbb{E} \left\|B - X\right\|_{\rm F}^2 \le 2 \left\|B - \lowrank{B}_r\right\|^2_{\rm F} \quad \text{for $k=2r+1$}. \]

In effect, these results tell us that if we want an approximation that is nearly as good as the best rank-r approximation, using an approximation rank of, say, k = r+2 or k = 2r for the randomized SVD suffices. These results hold even for worst-case matrices. For nice matrices with steadily decaying singular values, the randomized SVD can perform even better than equations (2)–(3) would suggest.

Spectral Norm Error

The spectral norm is often a better error metric for matrix computations than the Frobenius norm. Is the randomized SVD also comparable with the best rank-k approximation when we use the spectral norm? In this setting, a representative error bound is

(4)   \[\mathbb{E} \left\|B - X\right\|^2 \le \min_{r \le k-2} \left( 1 + \frac{2r}{k-(r+1)} \right) \left(\left\|B - \lowrank{B}_r \right\|^2 + \frac{\mathrm{e}^2}{k-r} \left\|B - \lowrank{B}_r\right\|^2_{\rm F} \right). \]

The spectral norm error of the randomized SVD depends on the Frobenius norm error of the best rank-r approximation for r \le k-2.

Recall that the spectral and Frobenius norm can be defined in terms of the singular values of the matrix B:

    \[\norm{B} = \sigma_1(B),\quad \left\|B\right\|_{\rm F}^2 = \sum_i \sigma_i(B)^2.\]

Rewriting (4) using these formulas, we get

    \[\mathbb{E} \left\|B - X\right\|^2 \le \min_{r \le k-2} \left( 1 + \frac{2r}{k-(r+1)} \right) \left(\sigma_{r+1}^2 + \frac{\mathrm{e}^2}{k-r} \sum_{i>r} \sigma_i^2 \right).\]

Despite being interested in the largest singular value of the error matrix B-X, this bound demonstrates that the randomized SVD incurs errors based on the entire tail of B‘s singular values \sum_{i>r}\sigma_i^2. The randomized SVD is much worse than the best rank-k approximation for a matrix B with a long tail of slowly decaying singular values.

Improving the Approximation: Subspace Iteration and Block Krylov Iteration

We saw that the randomized SVD produces nearly optimal low-rank approximations when we measure using the Frobenius norm. When we measure using the spectral norm, we have a split decision: If the singular values decay rapidly, the randomized SVD is near-optimal; if the singular values decay more slowly, the randomized SVD can suffer from large errors.

Fortunately, there are two fixes to improve the randomized SVD in the presence of slowly decaying singular values:

  • Subspace iteration: To improve the quality of the randomized SVD, we can combine it with the famous power method for eigenvalue computations. Instead of Y = B\Omega, we use the powered expression Y = B(B^*B)^q\Omega. Powering has the effect of amplifying the large singular values of B and diminishing the influence of the tail.
  • Block Krylov iteration: Subspace iteration is effective, but fairly wasteful. To compute B(B^*B)^q\Omega we compute the block Krylov sequence

        \[B\Omega, BB^*B\Omega, BB^*BB^*B\Omega,\ldots,B(B^*B)^q\Omega\]

    and throw away all but the last term. To make more economical use of resources, we can use the entire sequence as our information matrix Y:

        \[Y = \begin{bmatrix} B\Omega & BB^*B\Omega & BB^*BB^*B\Omega& \cdots & B(B^*B)^q\Omega\end{bmatrix}.\]

    Storing the entire block Krylov sequence is more memory-intensive but makes more efficient use of matrix products than subspace iteration.

Both subspace iteration and block Krylov iteration should be carefully implemented to produce accurate results.

Both subspace iteration and block Krylov iteration diminish the effect of a slowly decaying tail of singular values on the accuracy of the randomized SVD. The more slowly the tail decays, the larger one must take the number of iterations q to obtain accurate results.
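To make subspace iteration concrete, here is one possible implementation sketch of mine (with re-orthonormalization between matrix products, one standard way of addressing the careful-implementation caveat above); the test matrix is an arbitrary example with a slowly decaying tail of singular values:

    import numpy as np

    def randomized_subspace_iteration(B, k, q, rng=None):
        """Rank-k approximation from the subspace-iteration sampling Y = B (B* B)^q Omega.

        Re-orthonormalizing after every product with B or B* keeps the iteration
        numerically stable, rather than forming the power B (B* B)^q explicitly."""
        rng = np.random.default_rng() if rng is None else rng
        m, n = B.shape
        Q, _ = np.linalg.qr(B @ rng.standard_normal((n, k)))
        for _ in range(q):
            W, _ = np.linalg.qr(B.conj().T @ Q)
            Q, _ = np.linalg.qr(B @ W)
        return Q @ (Q.conj().T @ B)              # X = Q Q* B

    rng = np.random.default_rng(0)
    B = rng.standard_normal((300, 200)) * (0.99 ** np.arange(200))   # slowly decaying tail
    for q in (0, 1, 3):
        X = randomized_subspace_iteration(B, k=20, q=q, rng=np.random.default_rng(1))
        print(q, np.linalg.norm(B - X, 2))       # spectral-norm error typically shrinks with q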

The Hard Way to Prove Jensen’s Inequality

In this post, I want to discuss a very beautiful piece of mathematics I stumbled upon recently. As a warning, this post will be more mathematical than most, but I will still try and sand off the roughest mathematical edges. This post is adapted from a much more comprehensive post by Paata Ivanishvili. My goal is to distill the main idea to its essence, deferring the stochastic calculus until it cannot be avoided.

Jensen’s inequality is one of the most important results in probability.

Jensen’s inequality. Let X be a (real) random variable and f:\real\to\real a convex function such that both \mathbb{E} X and \mathbb{E} f(X) are defined. Then f(\mathbb{E}X) \le \mathbb{E} [f(X)].

Here is the standard proof. A convex function has supporting lines. That is, at a point a \in \real, there exists a slope m such that m(x-a) + f(a) \le f(x) for all x \in \real. Invoke this result at a = \mathbb{E} X and x = X and take expectations to conclude that

    \[\mathbb{E}[m(X - \mathbb{E}X) + f(\mathbb{E}X)] = f(\mathbb{E}X) \le \mathbb{E} [f(X)].\]

In this post, I will outline a proof of Jensen’s inequality which is much longer and more complicated. Why do this? This more difficult proof illustrates an incredibly powerful technique for proving inequalities, interpolation. The interpolation method can be used to prove a number of difficult and useful inequalities in probability theory and beyond. As an example, at the end of this post, we will see the Gaussian Jensen inequality, a striking generalization of Jensen’s inequality with many applications.

The idea of interpolation is as follows: Suppose I wish to prove A_0 \le A_1 for two numbers A_0 and A_1. This may be hard to do directly. With the interpolation method, I first construct a family of numbers A_t, 0 \le t \le 1, such that A_{t = 0} = A_0 and A_{t=1} = A_1 and show that (A_t : 0\le t\le 1) is (weakly) increasing in t. This is typically accomplished by showing the derivative is nonnegative:

    \[\frac{d}{dt} A_t \ge 0.\]

To prove Jensen’s inequality by interpolation, we shall begin with a special case. As often in probability, the simplest case is that of a Gaussian random variable.

Jensen’s inequality for a Gaussian. Let X be a standard Gaussian random variable (i.e., mean-zero and variance 1) and let f:\real\to\real be a thrice-differentiable convex function satisfying a certain technical condition.1Specifically, we assume the regularity condition \mathbb{E} (f'''(G))^p < +\infty for some p > 1 for any Gaussian random variable G. Then

    \[f(0) \le \mathbb{E} [f(X)].\]

Note that the conclusion is exactly Jensen’s inequality, as we have assumed X is mean-zero.

The difficulty with any proof by interpolation is to come up with the “right” A_t. For us, the “right” answer will take the form

    \[A_t = \mathbb{E} [ f(X_t) ],\]

where X_0 = 0 starts with no randomness and X_1 = X is our standard Gaussian. To interpolate between these extremes, we increase the variance linearly from 0 to 1. Thus, we define

    \[A_t = \mathbb{E} [ f(X_t)] \quad \text{where $X_t\sim\mathcal{N}(0,t)$}.\]

Here, and throughout, \mathcal{N}(0,v) denotes a Gaussian random variable with zero mean and variance v.
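Before computing the derivative, here is a quick Monte Carlo illustration of mine (using f(x) = e^x as an arbitrary convex test function) showing the interpolating family A_t increasing in t:

    import numpy as np

    rng = np.random.default_rng(0)
    f = np.exp                               # a convex test function (any convex f works)

    samples = rng.standard_normal(200_000)
    ts = np.linspace(0.0, 1.0, 6)
    A = [np.mean(f(np.sqrt(t) * samples)) for t in ts]   # X_t ~ N(0, t) realized as sqrt(t) * N(0, 1)
    print(np.round(A, 4))                    # approximately increasing in t
    print(f(0.0) <= A[-1])                   # Jensen's inequality: f(0) <= E f(X_1)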

Let’s compute the derivative of A_t. To do this, let \delta > 0 denote a small parameter which we will later send to zero. For us, the key fact will be that a \mathcal{N}(0,t+\delta) random variable can be realized as a sum of independent \mathcal{N}(0,t) and \mathcal{N}(0,\delta) random variables. Therefore, we write

    \[X_{t+\delta} = X_t + \Delta \quad \text{where $\Delta \sim \mathcal{N}(0,\delta)$ is independent of $X_t$.}\]

We now evaluate f(X_t+\Delta) by using Taylor’s formula

(1)   \[f(X_t+\Delta) = f(X_t) + f'(X_t)\Delta + \frac{1}{2} f''(X_t) \Delta^2 + \frac{1}{6} f'''(\xi) \Delta^3, \]

where \xi lies between X_t and X_t+\Delta. Now, take expectations,

    \[\mathbb{E}[ f(X_t+\Delta)]=\mathbb{E}[f(X_t)] + \mathbb{E}[f'(X_t)\Delta] + \frac{1}{2} \mathbb{E}[f''(X_t)] \mathbb{E}[\Delta^2] + \underbrace{\frac{1}{6} \mathbb{E}[f'''(\xi) \Delta^3]}_{:=\mathrm{Rem}(\delta)}.\]

The random variable \Delta has mean zero and variance \delta so this gives

    \[\mathbb{E} [f(X_t+\Delta)]=\mathbb{E}[f(X_t)] + \delta \frac{1}{2} \mathbb{E}[f''(X_t)]  + \mathrm{Rem}(\delta).\]

As we show below, the remainder term \mathrm{Rem}(\delta)/\delta vanishes as \delta\to 0. Thus, we can rearrange this expression to compute the derivative:

    \[\frac{d}{dt} A_t = \lim_{\delta \downarrow 0} \frac{\mathbb{E} f(X_t+\Delta)-\mathbb{E}[f(X_t)]}{\delta} = \lim_{\delta \downarrow 0} \frac{1}{2} \mathbb{E}[f''(X_t)] + \frac{\mathrm{Rem}(\delta)}{\delta} =  \frac{1}{2} \mathbb{E}[f''(X_t)].\]

The second derivative of a convex function is nonnegative: f''(x) \ge 0 for every x. Therefore,

    \[\frac{d}{dt} A_t \ge 0 \quad \text{for all } t\in [0,1].\]

Jensen’s inequality is proven! In fact, we’ve proven the stronger version of Jensen’s inequality:

    \[\mathbb{E} f(X) = f(0) + \frac{1}{2} \int_0^1 \mathbb{E} [f''(X_t)] \, dt.\]

This strengthened version can yield improvements. For instance, if f is \beta-smooth

    \[f''(x) \le \beta \quad \text{for every } x \in \real,\]

then we have

    \[f(0) \le \mathbb{E} f(X) \le f(0) + \frac{1}{2}\beta.\]

This inequality isn’t too hard to prove directly, but it does show that we’ve obtained something more than the simple proof of Jensen’s inequality.

Analyzing the Remainder Term
Let us quickly check that the remainder term \mathrm{Rem}(\delta)/\delta vanishes as \delta \to 0. As an exercise, you can verify that our technical regularity condition implies \mathbb{E} |f'''(\xi)|^p < +\infty. Thus, by Hölder’s inequality and setting q to be p‘s Hölder conjugate (1/p + 1/q = 1), we obtain

    \[\frac{|\mathrm{Rem}(\delta)|}{\delta} = \frac{|\mathbb{E}[f'''(\xi) \Delta^3]|}{6\delta} \le  \frac{(\mathbb{E} |f'''(\xi)|^p)^{1/p} (\mathbb{E} |\Delta|^{3q})^{1/q}}{6\delta}.\]


One can show that (\mathbb{E} |\Delta|^{3q})^{1/q} \le C(q) \delta^{3/2} where C(q) is a function of q alone. Therefore, |\mathrm{Rem}(\delta)|/\delta \le \mathrm{constant} \cdot \delta^{1/2} \to 0 as \delta \downarrow 0.

What’s Really Going On Here?

In our proof, we use a family of random variables X_t \sim \mathcal{N}(0,t), defined for each 0\le t \le 1. Rather than treating these quantities as independent, we can think of them as a collective, comprising a random function t \mapsto X_t known as a Brownian motion.

The Brownian motion is a very natural way of interpolating between a constant \mu and a Gaussian with mean \mu.2The Ornstein–Uhlenbeck process is another natural way of interpolating between a random variable and a Gaussian.

There is an entire subject known as stochastic calculus which allows us to perform computations with Brownian motion and other random processes. The rules of stochastic calculus can seem bizarre at first. For a function f of a real number x, we often write

    \[df = f'(x) \, dx\]

For a function f(X_t) of a Brownian motion, the analog is Itô’s formula

    \[df = f'(X_t) \, dX_t + \frac{1}{2} f''(X_t) \, dt.\]

While this might seem odd at first, this formula may seem more sensible if we compare with (1) above. The idea, very roughly, is that for an increment of the Brownian motion dX_t over a time interval dt, (dX_t)^2 is a random variable with mean dt, so we cannot drop the second term in the Taylor series, even up to first order in dt. Fully diving into the subtleties of stochastic calculus is far beyond the scope of this short post. Hopefully, the rest of this post, which outlines some extensions of our proof of Jensen’s inequality that require more stochastic calculus, will serve as an enticement to learn more about this beautiful subject.

Proving Jensen by Interpolation

For the rest of this post, we will be less careful with mathematical technicalities. We can use the same idea that we used to prove Jensen’s inequality for a Gaussian random variable to prove Jensen’s inequality for any random variable Y:

    \[f(\mathbb{E}Y) \le \mathbb{E}[f(Y)].\]

Here is the idea of the proof.

First, realize that we can write any random variable Y as a function of a standard Gaussian random variable X. Indeed, letting F_X and F_Y denote the cumulative distribution functions of X and Y, one can show that

    \[g(X) := \inf \{ \alpha \in \real : F_Y(\alpha) \ge F_X(X) \}\]

has the same distribution as Y.
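Here is a short numerical illustration of this quantile coupling (my own sketch, using scipy for the Gaussian CDF and an exponential target Y as an arbitrary example):

    import numpy as np
    from scipy.stats import norm, expon

    rng = np.random.default_rng(0)
    X = rng.standard_normal(200_000)            # standard Gaussian samples

    # Quantile coupling g(X) = F_Y^{-1}(F_X(X)); here Y is exponential with rate 1
    gX = expon.ppf(norm.cdf(X))

    # The transformed samples match the distribution of Y: mean and variance of Exp(1) are both 1
    print(np.mean(gX), np.var(gX))
    print(np.quantile(gX, [0.25, 0.5, 0.75]))   # compare with expon.ppf([0.25, 0.5, 0.75])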

Now, as before, we can interpolate between \mathbb{E} Y and Y using a Brownian motion. As a first idea, we might try

    \[A_t \stackrel{?}{=} \mathbb{E} [f(g(X_t))].\]

Unfortunately, this choice of A_t does not work! Indeed, A_0 = \mathbb{E}[f(g(0))] = f(g(0)) does not even equal f(\mathbb{E}Y)! Instead, we must define

    \[A_t = \mathbb{E} [f(\mathbb{E}[g(X_1) \mid X_t])].\]

We define A_t using the conditional expectation of the final value g(X_1) conditional on the Brownian motion X_t at an earlier time t. Using a bit of elbow grease and stochastic calculus, one can show that

    \[\frac{d}{dt} A_t \ge 0 \quad \text{for all }t\in [0,1].\]

This provides a proof of Jensen’s inequality in general by the method of interpolation.

Gaussian Jensen Inequality

Now, we’ve come to the real treat, the Gaussian Jensen inequality. In the last section, we saw the sketch of a proof of Jensen’s inequality using interpolation. While it is cool that this proof is possible, we haven’t learned anything new, since we can prove Jensen’s inequality in other ways. The Gaussian Jensen inequality provides an application of this technique which is hard to prove by other means. This section, in particular, is cribbing quite heavily from Paata Ivanishvili‘s excellent post on the topic.

Here’s the big question:

If Y_1,\ldots,Y_n are “somewhat dependent”, for which functions does the multivariate Jensen’s inequality

(\star)   \[f(\mathbb{E} Y_1,\ldots,\mathbb{E}Y_n) \le \mathbb{E} [f(Y_1,\ldots,Y_n)] \]

hold?

Considering extreme cases, if Y_1,\ldots,Y_n are entirely dependent, then we would only expect (\star) to hold when f is convex. But if Y_1,\ldots,Y_n are independent, then we can apply Jensen’s inequality to each coordinate one at a time to deduce

    \[\text{($\star$) holds if $f$ is convex in each coordinate, separately.}\]

We would like a result which interpolates between extremes {fully dependent, fully convex} and {independent, separately convex}. The Gaussian Jensen inequality provides exactly this tool.

As in the previous section, we can generate arbitrary random variables Y_1,\ldots,Y_n as functions g_1(X_1),\ldots,g_n(X_n) of Gaussian random variables X_1,\ldots,X_n. We will use the covariance matrix \Sigma of the Gaussian random variables X_1,\ldots,X_n as our measure of the dependence of the random variables Y_1,\ldots,Y_n. With this preparation in place, we have the following result:

Gaussian Jensen inequality. The conclusion of Jensen’s inequality

(2)   \[f(\mathbb{E}g_1(X_1),\ldots,\mathbb{E}g_n(X_n)) \le \mathbb{E} [f(g_1(X_1),\ldots,g_n(X_n))]\]

holds for all test functions g_1,\ldots,g_n if and only if

    \[\Sigma \circ \nabla^2 f(x) \text{ is positive semidefinite} \quad \text{for all $x \in \real^n$}.\]

Here, \nabla^2 f(x) is the Hessian matrix at x and \circ denotes the entrywise product of matrices.

This is a beautiful result with striking consequences (see Ivanishvili‘s post). The proof is essentially the same as the proof of Jensen’s inequality by interpolation with a little additional bookkeeping.

Let us confirm this result respects our extreme cases. In the case where X_1=\cdots=X_n are equal (and variance one), \Sigma is a matrix of all ones and \Sigma \circ \nabla^2 f(x) = \nabla^2 f(x) for all x. Thus, the Gaussian Jensen inequality states that (2) holds if and only if \nabla^2 f(x) is positive semidefinite for every x, which occurs precisely when f is convex.

Next, suppose that X_1,\ldots,X_n are independent and variance one, then \Sigma is the identity matrix and

    \[\Sigma \circ \nabla^2 f(x) = \mathrm{diag} \left( \frac{\partial^2 f}{\partial x_i^2} : i=1,\ldots,n \right).\]

A diagonal matrix is positive semidefinite if and only if its entries are nonnegative. Thus, (2) holds if and only if each of f‘s diagonal second derivatives is nonnegative \partial_{x_i}^2 f \ge 0: this is precisely the condition for f to be separately convex in each argument.
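To experiment with this condition yourself, here is a small sketch of mine (the helper gaussian_jensen_condition is hypothetical, not from Ivanishvili’s post) that tests whether \Sigma \circ \nabla^2 f(x) is positive semidefinite at random points for f(x,y) = x^2y^2, a function which is separately convex but not jointly convex:

    import numpy as np

    def gaussian_jensen_condition(hessian, Sigma, points, tol=1e-12):
        """Check whether the entrywise (Hadamard) product of Sigma with the Hessian of f
        is positive semidefinite at each test point x."""
        return all(np.all(np.linalg.eigvalsh(Sigma * hessian(x)) >= -tol) for x in points)

    # f(x, y) = x^2 y^2: separately convex in each variable, but not jointly convex
    hess = lambda p: np.array([[2 * p[1] ** 2, 4 * p[0] * p[1]],
                               [4 * p[0] * p[1], 2 * p[0] ** 2]])

    rng = np.random.default_rng(0)
    pts = rng.standard_normal((100, 2))
    print(gaussian_jensen_condition(hess, np.eye(2), pts))        # independent case: True
    print(gaussian_jensen_condition(hess, np.ones((2, 2)), pts))  # fully dependent case: False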

There’s much more to be said about the Gaussian Jensen inequality, and I encourage you to read Ivanishvili‘s post to see the proof and applications. What I find so compelling about this result (so compelling that I felt the need to write this post) is how interpolation and stochastic calculus can be used to prove inequalities which don’t feel like stochastic calculus problems. The Gaussian Jensen inequality is a statement about functions of dependent Gaussian random variables; there’s nothing dynamic happening. Yet, to prove this result, we inject dynamics into the problem, viewing the two sides of our inequality as endpoints of a random process connecting them. This is such a beautiful idea that I couldn’t help but share it.

Big Ideas in Applied Math: Markov Chains

In this post, we’ll talk about Markov chains, a useful and general model of a random system evolving in time.

PageRank

To see how Markov chains can be useful in practice, we begin our discussion with the famous PageRank problem. The goal is to assign a numerical ranking to each website on the internet measuring how important it is. To do this, we form a mathematical model of an internet user randomly surfing the web. The importance of each website will be measured by the number of times this user visits each page.

The PageRank model of an internet user is as follows: Start the user at an arbitrary initial website x_0. At each step, the user makes one of two choices:

  • With 85% probability, the user follows a random link on their current website.
  • With 15% probability, the user gets bored and jumps to a random website selected from the entire internet.

As with any mathematical model, this is a ridiculously oversimplified description of how a person would surf the web. However, like any good mathematical model, it is useful. Because of the way the model is designed, the user will spend more time on websites with many incoming links. Thus, websites with many incoming links will be rated as important, which seems like a sensible choice.

An example of the PageRank distribution for a small internet is shown below. As one would expect, the surfer spends a large part of their time on website B, which has many incoming links. Interestingly, the user spends almost as much of their time on website C, whose only links are to and from B. Under the PageRank model, a website is important if it is linked to by an important website, even if that is the only website linking to it.

Markov Chains in General

Having seen one Markov chain, the PageRank internet surfer, let’s talk about Markov chains in general. A (time-homogeneous) Markov chain consists of two things: a set of states and probabilities for transitioning between states:

  • Set of states. For this discussion, we limit ourselves to Markov chains which can only exist in finitely many different states. To simplify our discussion, label the possible states using numbers 1,2,\ldots,m.
  • Transition probabilities. The defining property of a (time-homogeneous) Markov chain is that, at any point in time n, if the state is i, the probability of moving to state j is a fixed number P_{ij}. In particular, the probability P_{ij} of moving from i to j does not depend on the time n or the past history of the chain before time n; only the value of the chain at time n matters.

Denote the state of the Markov chain at times 0,1,2,\ldots by x_0,x_1,x_2,\ldots. Note that the states x_0,x_1,x_2,\ldots are random quantities. We can write the Markov chain property using the language of conditional probability:

    \[\mathbb{P} \{ x_{n+1} = j \mid x_n = i,x_{n-1}=a_{n-1},\ldots,x_0=a_0\} = \mathbb{P}\{x_{n+1} = j \mid x_n = i\} = P_{ij}.\]

This equation states that the probability that the system is in state j at time n+1 given the entire history of the system depends only on the value x_n = i of the chain at time n. This probability is the transition probability P_{ij}.

Let’s see how the PageRank internet surfer fits into this model:

  • Set of states. Here, the set of states are the websites, which we label 1,\ldots,m.
  • Transition probabilities. Consider two websites i and j. If i does not have a link to j, then the only way of going from i to j is if the surfer randomly gets bored (probability 15%) and picks website j to visit at random (probability 1/m). Thus,

    (i\not\to j)   \[P_{ij} = \frac{0.15}{m}. \]

    Suppose instead that i does link to j and i has d_i outgoing links. Then, in addition to the 0.15/m probability computed before, the user has an 85% chance of following a link and a 1/d_i chance of picking j as that link (a code sketch of this construction follows the list). Thus,

    (i\to j)   \[P_{ij} = \frac{0.85}{d_i} + \frac{0.15}{m}. \]
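Here is a minimal numpy sketch (mine; the four-site link structure is invented purely for illustration) that assembles the PageRank transition matrix from these formulas:

    import numpy as np

    def pagerank_transition_matrix(links, m, damping=0.85):
        """Build the PageRank transition matrix P from a dict i -> list of sites that i links to.

        Assumes every site has at least one outgoing link."""
        P = np.full((m, m), (1 - damping) / m)        # the 0.15/m "bored surfer" term
        for i, targets in links.items():
            for j in targets:
                P[i, j] += damping / len(targets)     # the 0.85/d_i term when i links to j
        return P

    # A tiny four-site internet (an invented example)
    links = {0: [1], 1: [2], 2: [1], 3: [0, 1]}
    P = pagerank_transition_matrix(links, m=4)
    print(P.sum(axis=1))        # each row is a probability distribution, so rows sum to 1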

Markov Chains and Linear Algebra

For a non-random process y_0,y_1,y_2,\ldots, we can understand the process’s evolution by determining its state y_n at every point in time n. Since Markov chains are random processes, it is not enough to track the state x_n of the process at every time n. Rather, we must understand the probability distribution of the state x_n at every point in time n.

It is customary in Markov chain theory to represent a probability distribution on the states \{1,\ldots,m\} by a row vector \rho^\top.1To really emphasize that probability distributions are row vectors, we shall write them as transposes of column vectors. So \rho is a column vector, and \rho^\top, which represents the probability distribution, is a row vector. The ith entry \rho_i stores the probability that the system is in state i. Naturally, as \rho^\top is a probability distribution, its entries must be nonnegative (\rho_i \ge 0 for every i) and add to one (\sum_{i=1}^m \rho_i = 1).

Let (\rho^{(0)})^\top, (\rho^{(1)})^\top,\ldots denote the probability distributions of the states x_0,x_1,\ldots. It is natural to ask: How are the distributions (\rho^{(0)})^\top, (\rho^{(1)})^\top,\ldots related to each other? Let’s answer this question.

The probability that x_{n+1} is in state j is the jth entry of \rho^{(n+1)}:

    \[\rho^{(n+1)}_j = \mathbb{P} \{x_{n+1} = j\}\]

To compute this probability, we break into cases based on the value of the process at time n: either x_n = 1 or x_n = 2 or … or x_n = m; only one of these cases can be true at once. When we have an “or” of random events and these events are mutually exclusive (only one can be true at once), then the probabilities add:

    \[\rho^{(n+1)}_j = \mathbb{P} \{x_{n+1} = j\} = \sum_{i=1}^m \mathbb{P} \{x_{n+1} = j, x_n = i\}.\]

Now we use some conditional probability. The probability that x_{n+1} = j and x_n = i is the probability that x_n = i times the probability that x_{n+1} = j conditional on x_n = i. That is,

    \[\rho^{(n+1)}_j = \sum_{i=1}^m \mathbb{P} \{x_{n+1} = j, x_n = i\} = \sum_{i=1}^m \mathbb{P} \{x_n = i\} \mathbb{P}\{x_{n+1} = j \mid x_n = i\}.\]

Now, we can simplify using our definitions. The probability that x_n = i is just \rho^{(n)}_i and the probability of moving from i to j is P_{ij}. Thus, we conclude

    \[\rho_j^{(n+1)} = \sum_{i=1}^m \rho^{(n)}_i P_{ij} .\]

Phrased in the language of linear algebra, we’ve shown

    \[\left(\rho^{(n+1)}\right)^\top = \left(\rho^{(n)}\right)^\top P \quad \text{for any } n = 0,1,2,\ldots.\]

That is, if we view the transition probabilities P_{ij} as comprising an m\times m matrix P, then the distribution at time n+1 is obtained by multiplying the distribution at time n by transition matrix P. In particular, if we iterate this result, we obtain that the distribution at time n is given by

    \[\left(\rho^{(n)}\right)^\top = \left(\rho^{(n-1)}\right)^\top P = \left[\left(\rho^{(n-2)}\right)^\top P\right]P = \left(\rho^{(n-2)}\right)^\top P^2 = \cdots = \left(\rho^{(0)}\right)^\top P^n.\]

Thus, the distribution at time n is the distribution at time 0 multiplied by the nth power of the transition matrix P.
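To see this evolution numerically, here is a short sketch of mine that propagates a distribution through the four-site PageRank transition matrix built in the earlier sketch:

    import numpy as np

    # The four-site PageRank transition matrix from the earlier sketch (rows sum to one)
    P = np.array([[0.0375, 0.8875, 0.0375, 0.0375],
                  [0.0375, 0.0375, 0.8875, 0.0375],
                  [0.0375, 0.8875, 0.0375, 0.0375],
                  [0.4625, 0.4625, 0.0375, 0.0375]])

    rho = np.array([1.0, 0.0, 0.0, 0.0])     # start in the first state with certainty
    for _ in range(50):
        rho = rho @ P                        # rho^(n+1) = rho^(n) P
    print(rho)                               # approaches the stationary distribution

    # Equivalently, rho^(n) = rho^(0) P^n
    print(np.allclose(rho, np.array([1.0, 0.0, 0.0, 0.0]) @ np.linalg.matrix_power(P, 50)))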

Convergence to Stationarity

Let’s go back to our web surfer again. At time 0, we started our surfer at a particular website, say 1. As such, the probability distribution2To keep notation clean going forward, we will drop the transposes off of probability distributions, except when working with them linear algebraically. \rho^{(0)} at time 0 is concentrated just on website 1, with no other website having any probability at all. In the first few steps, most of the probability will remain in the vicinity of website 1, in the websites linked to by 1 and the websites linked to by the websites linked to by 1 and so on. However, as we run the chain long enough, the surfer will have time to wander widely across the web and the probability distribution will become less and less influenced by the chain’s starting location. This motivates the following definition:

Definition. A Markov chain satisfies the mixing property if the probability distributions \rho^{(0)}, \rho^{(1)}, \ldots converge to a single fixed probability distribution \pi regardless of how the chain is initialized (i.e., independent of the starting distribution \rho^{(0)}).

The distribution \pi for a mixing Markov chain is known as a stationary distribution because it does not change under the action of P:

(St)   \[\pi^\top = \pi^\top P. \]

To see this, recall the recurrence

    \[\left(\rho^{(n+1)}\right)^\top = \left(\rho^{(n)}\right)^\top P,\]

take the limit as n\to\infty, and observe that both (\rho^{(n+1)})^\top and (\rho^{(n)})^\top converge to \pi^\top.

One of the basic questions in the theory of Markov chains is finding conditions under which the mixing property (or suitable weaker versions of it) hold. To answer this question, we will need the following definition:

A Markov chain is primitive if, after running the chain for some number n of steps, the chain has positive probability of moving between any two states. That is,

    \[\text{There exists $n$ such that, for any $i,j = 1,2,\ldots,m$, } \quad\mathbb{P}\{x_n = j \mid x_0 = i \} = (P^n)_{ij} > 0.\]

The fundamental theorem of Markov chains is that primitive chains satisfy the mixing property.

Theorem (fundamental theorem of Markov chains). Every primitive Markov chain is mixing. In particular, there exists one and only one probability distribution \pi satisfying the stationary property (St), and the probability distributions \rho^{(0)},\rho^{(1)},\ldots converge to \pi when initialized in any probability distribution \rho^{(0)}. Every entry of \pi is strictly positive.

Let’s see an example of the fundamental theorem with the PageRank surfer. After n=1 step, there is at least a 0.15/m > 0 chance of moving from any website i to any other website j. Thus, the chain is primitive. Consequently, there is a unique stationary distribution \pi, and the surfer will converge to this stationary distribution regardless of which website they start at.

Going Backwards in Time

Often, it is helpful to consider what would happen if we ran a Markov chain backwards in time. To see why this is an interesting idea, suppose you run website w and you’re interested in where your traffic is coming from. One way of achieving this would be to initialize the Markov chain at w and run the chain backwards in time. Rather than asking, “given I’m at w now, where would a user go next?”, you ask “given that I’m at w now, where do I expect to have come from?”

Let’s formalize this notion a little bit. Consider a primitive Markov chain x_0,x_1,x_2,\ldots with stationary distribution \pi. We assume that we initialize this Markov chain in the stationary distribution. That is, we pick \rho^{(0)} = \pi as our initial distribution for x_0. The time-reversed Markov chain y_0,y_1,\ldots is defined as follows: The probability P^{\rm rev}_{ij} of moving from i to j in the time-reversed Markov chain is the probability that I was at state j one step previously given that I’m at state i now:

    \[\mathbb{P} \{y_1 = j \mid y_0 = i\} = P^{\rm rev}_{ij} = \mathbb{P} \{ x_0 = j \mid x_1 = i \}.\]

To get a nice closed form expression for the reversed transition probabilities P^{\rm rev}_{ij}, we can invoke Bayes’ theorem:

(Rev)   \[P^{\rm rev}_{ij} = \mathbb{P} \{ x_0 = j \mid x_1 = i \} = \frac{\mathbb{P} \{x_0 = j\} \mathbb{P} \{x_1 = i \mid x_0 = j\}}{\mathbb{P} \{x_1 = i\}} = \frac{ \pi_j P_{ji}}{\pi_i}. \]

The time-reversed Markov chain can be a strange beast. The reversed PageRank surfer, for instance, follows links “upstream”, traveling from the linked site to the linking site. As such, our hypothetical website owner could get a good sense of where their traffic is coming from by initializing the reversed chain y_0 = w at their website and following the chain one step back.

Reversible Markov Chains

We now have two different Markov chains: the original and its time-reversal. Call a Markov chain reversible if these processes are the same. That is, if the transition probabilities are the same:

    \[P^{\rm rev}_{ij} = P_{ij} \quad \text{for every } i,j=1,2,\ldots,m.\]

Using our formula (Rev) for the reversed transition probability, the reversibility condition can be written more concisely as

    \[\pi_i P_{ij} = \pi_j P_{ji}.\]

This condition is referred to as detailed balance. In words, it states that a Markov chain is reversible if, when initialized in the stationary distribution \pi, the flow of probability mass from i to j (that is, \pi_i P_{ij}) is equal to the flow of probability mass from j to i (that is, \pi_jP_{ji}).3There is an abstruse, but useful, way of reformulating the detailed balance condition. Think of a vector f \in \mathbb{R}^m as defining a function on the set \{1,\ldots,m\}, f : i \mapsto f(i) \coloneqq f_i. Letting x denote a random variable drawn from the stationary distribution x \sim \pi, we can define a non-standard inner product on \mathbb{R}^m: \langle f, g\rangle_{\pi} \coloneqq \mathbb{E}[f(x) g(x)] = \sum_{i=1}^m \pi_i f(i)g(i). Then the Markov chain is reversible if and only if detailed balance holds if and only if P is a self-adjoint operator on \mathbb{R}^m when equipped with the non-standard inner product \langle \cdot,\cdot\rangle_\pi. This more abstract characterization has useful consequences. For instance, by the spectral theorem, the transition matrix P of a reversible Markov chain has real eigenvalues and supports a basis of orthonormal eigenvectors (in the \langle \cdot,\cdot\rangle_\pi inner product).

Many interesting Markov chains are reversible. One class of examples are Markov chain models of physical and chemical processes. Since physical laws like classical and quantum mechanics are reversible under time, so too should we expect Markov chain models built from these theories to be reversible.

Not every interesting Markov chain is reversible, however. Indeed, except in special cases, the PageRank Markov chain is not reversible. If i links to j but j does not link to i, then the flow of mass from i to j will be higher than the flow from j to i.

Before moving on, we note one useful fact about reversible Markov chains. Suppose a reversible, primitive Markov chain satisfies the detailed balance condition with a probability distribution \sigma:

    \[\sigma_i P_{ij} = \sigma_j P_{ji}.\]

Then \sigma = \pi is the stationary distribution of this chain. To see why, we check the stationarity condition \sigma^\top P = \sigma^\top. Indeed, for every j,

    \[(\sigma^\top P)_j = \sum_{i=1}^m \sigma_i P_{ij} = \sum_{i=1}^m \sigma_j P_{ji} = \sigma_j.\]

The second equality is detailed balance and the third equality is just the condition that the sum of the transition probabilities from j to each i is one. Thus, \sigma^\top P = \sigma^\top and \sigma is a stationary distribution for P. But a primitive chain has only one stationary distribution \pi, so \sigma = \pi.

Markov Chains as Algorithms

Markov chains are an amazingly flexible tool. One use of Markov chains is more scientific: Given a system in the real world, we can model it by a Markov chain. By simulating the chain or by studying its mathematical properties, we can hope to learn about the system we’ve modeled.

Another use of Markov chains is algorithmic. Rather than thinking of the Markov chain as modeling some real-world process, we instead design the Markov chain to serve a computationally useful end. The PageRank surfer is one example. We wanted to rank the importance of websites, so we designed a Markov chain to achieve this task.

One task we can use Markov chains to solve is sampling. Suppose we have a complicated probability distribution \pi, and we want a random sample from \pi—that is, a random quantity y such that \mathbb{P} \{ y = i \} = \pi_i for every i. One way to achieve this goal is to design a primitive Markov chain with stationary distribution \pi. Then, we run the chain for a large number of steps n and use x_n as an approximate sample from \pi.

To design a Markov chain with stationary distribution \pi, it is sufficient to generate transition probabilities P such that \pi and P satisfy the detailed balance condition. Then, we are guaranteed that \pi is a stationary distribution for the chain. (We also should check the primitiveness condition, but this is often straightforward.)

Here is an effective way of building a Markov chain to sample from a distribution \pi. Suppose that the chain is in state i at time n, x_n = i. To choose the next state, we begin by sampling j from a proposal distribution T. The proposal distribution T can be almost anything we like, as long as it satisfies three conditions:

  • Probability distribution. For every i, the transition probabilities T_{ij} add to one: \sum_{j=1}^m T_{ij} = 1.
  • Bidirectional. If T_{ij} > 0, then T_{ji} > 0.
  • Primitive. The transition probabilities T form a primitive Markov chain.

In order to sample from the correct distribution, we can’t just accept every proposal. Rather, given the proposal i\to j, we accept with probability

    \[\min \left\{ 1 , \frac{\pi_j T_{ji}}{\pi_i T_{ij}} \right\}.\]

If we accept the proposal, the next state of our chain is x_{n+1} = j. Otherwise, we stay where we are x_{n+1} = i. This Markov chain is known as a Metropolis–Hastings sampler.

For clarity, we list the steps of the Metropolis–Hastings sampler explicitly:

  1. Initialize the chain in any state x_0 and set n := 0.
  2. Draw a proposal x' from the proposal distribution, \mathbb{P} \{ x' = j \} = T_{x_n j}.
  3. Compute the acceptance probability

        \[p_{\rm acc} := \min \left\{ 1 , \frac{\pi_{x'} T_{x'x_n}}{\pi_{x_n} T_{x_nx'}} \right\}.\]

  4. With probability p_{\rm acc}, set x_{n+1} := x'. Otherwise, set x_{n+1} := x_n.
  5. Set n := n+1 and go back to step 2.
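For concreteness, here is a minimal Python/NumPy sketch of this recipe for a finite state space (the function name and the toy three-state target are purely illustrative; this is a sketch of the idea, not a reference implementation):

```python
import numpy as np

def metropolis_hastings(pi, T, n_steps, x0=0, rng=None):
    """Simulate a Metropolis-Hastings chain on states {0, ..., m-1} with
    target distribution pi (length-m vector) and row-stochastic proposal
    matrix T. Returns the sequence of visited states."""
    rng = np.random.default_rng() if rng is None else rng
    m = len(pi)
    x = x0
    states = np.empty(n_steps, dtype=int)
    for n in range(n_steps):
        x_prop = rng.choice(m, p=T[x])   # draw a proposal from row x of T
        # Acceptance probability min{1, pi_j T_ji / (pi_i T_ij)}
        p_acc = min(1.0, pi[x_prop] * T[x_prop, x] / (pi[x] * T[x, x_prop]))
        if rng.random() < p_acc:
            x = x_prop                   # accept the proposal
        states[n] = x                    # (otherwise x stays the same)
    return states

# Toy example: target pi on three states, uniform proposal distribution
pi = np.array([0.5, 0.3, 0.2])
T = np.full((3, 3), 1 / 3)
states = metropolis_hastings(pi, T, n_steps=200_000)
print(np.bincount(states, minlength=3) / len(states))   # close to pi
```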

To check that \pi is a stationary distribution of the Metropolis–Hastings sampler, all we need to do is check detailed balance. Note that the probability P_{ij} of transitioning from i to j\ne i under the Metropolis–Hastings sampler is the proposal probability T_{ij} times the acceptance probability:

    \[P_{ij} = T_{ij} \cdot \min \left\{ 1 , \frac{\pi_j T_{ji}}{\pi_i T_{ij}} \right\}.\]

Detailed balance is confirmed by a short computation:4Note that the detailed balance condition for i = j is always satisfied for any Markov chain: \pi_i P_{ii} = \pi_i P_{ii}.

    \[\pi_i P_{ij} = \pi_i T_{ij} \cdot \min \left\{ 1 , \frac{\pi_j T_{ji}}{\pi_i T_{ij}} \right\} = \min \left\{ \pi_i T_{ij} , \pi_j T_{ji} \right\} = \pi_j P_{ji}.\]

The final equality follows from the same computation with the roles of i and j interchanged. Thus the Metropolis–Hastings sampler has \pi as its stationary distribution.

Determinantal Point Processes: Diverse Items from a Collection

The uses of Markov chains in science, engineering, math, computer science, and machine learning are vast. I wanted to wrap up with one application that I find particularly neat.

Suppose I run a bakery and I sell N different baked goods. I want to pick out k special items for a display window to lure customers into my store. As a first approach, I might pick my top-k selling items for the window. But I realize that there’s a problem. All of my top sellers are muffins, so all of the items in my display window are muffins. My display window is doing a good job luring in muffin-lovers, but a bad job of enticing lovers of other baked goods. In addition to rating the popularity of each item, I should also promote diversity in the items I select for my shop window.

Here's a creative solution to my display case problems using linear algebra. Suppose that, rather than just looking at a list of the sales of each item, I define a matrix A for my baked goods. In the (i,i) entry A_{ii} of my matrix, I write the number of sales for baked good i. I populate the off-diagonal entries A_{ij} of my matrix with a measure of similarity between items i and j.5There are many ways of defining such a similarity matrix. Here is one way. Let z_1,\ldots,z_N be the number of each bakery item ordered by a random customer. Set \Sigma to be the correlation matrix of the random variables z_1,\ldots,z_N, with \Sigma_{ij} being the correlation between the random variables z_i and z_j. The matrix \Sigma has all ones on its diagonal. The off-diagonal entries \Sigma_{ij} measure the amount that items i and j tend to be purchased together. Let D be a diagonal matrix where D_{ii} is the total sales of item i. Set A \coloneqq D^{1/2}\Sigma D^{1/2}. By scaling \Sigma by the diagonal matrix D, the diagonal entries of A represent the popularity of each item, whereas the off-diagonal entries still represent correlations, now scaled by popularity. So if i and j are both muffins, A_{ij} will be large. But if i is a muffin and j is a cookie, then A_{ij} will be small. For mathematical reasons, we require A to be symmetric and positive definite.

To populate my display case, I choose a random subset of k items from my full menu of size N according to the following strange probability distribution: The probability \pi_S of picking items S = \{s_1,\ldots,s_k\} \subseteq \{1,\ldots,N\} is proportional to the determinant of the submatrix A(S,S). More specifically,

(k-DPP)   \[\pi_S = \frac{\det A(S,S)}{\sum_{\text{all subsets $T$ of size $k$}} \det A(T,T)}. \]

Here, we let A(S,S) denote the k\times k submatrix of A consisting of the entries appearing in rows and columns s_1,\ldots,s_k. Such a random subset is known as a k-determinantal point process (k-DPP). (See this survey for more about DPPs.)

To see why this makes any sense, let's consider a simple example of N = 3 items and a display case of size k = 2. Suppose I have three items: a pumpkin muffin, a chocolate chip muffin, and an oatmeal raisin cookie. Say the A matrix looks like

    \[A = \begin{bmatrix} 10 & 9 & 0 \\ 9 & 10 & 0 \\ 0 & 0 & 5 \end{bmatrix}.\]

We see that both muffins are equally popular, A_{11} = A_{22} = 10, and much more popular than the cookie, A_{33} = 5. However, the two muffins are similar to each other and thus the corresponding submatrix has a small determinant:

    \[\det A(\{1,2\},\{1,2\}) = \det \twobytwo{10}{9}{9}{10} = 19.\]

By contrast, the cookie is dissimilar to each muffin, and the corresponding determinants are higher:

    \[\det A(\{1,3\},\{1,3\}) = \det A(\{2,3\},\{2,3\}) = \det \twobytwo{10}{0}{0}{5} = 50.\]

Thus, even though the muffins are more popular overall, choosing our display case from a 2-DPP, we have a (50+50) / (50+50+19) \approx 84\% chance of choosing a muffin and a cookie for our display case. It is for this reason that we can say that a k-DPP preferentially selects for diverse items.
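For such a small example, we can simply enumerate all two-element subsets and compute the k-DPP probabilities (k-DPP) directly. Here is a short Python/NumPy sketch of that computation (illustrative only):

```python
import numpy as np
from itertools import combinations

# Brute-force k-DPP probabilities for the 3-item bakery example (k = 2)
A = np.array([[10.0, 9.0, 0.0],
              [9.0, 10.0, 0.0],
              [0.0, 0.0, 5.0]])
k = 2
dets = {S: np.linalg.det(A[np.ix_(S, S)]) for S in combinations(range(3), k)}
Z = sum(dets.values())                    # normalizing constant in (k-DPP)
for S, d in dets.items():
    print(S, d / Z)   # {0,1}: 19/119 ~ 0.16; {0,2} and {1,2}: 50/119 ~ 0.42 each
```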

Is sampling from a k-DPP the best way of picking k items for my display case? How does it compare to other possible methods?6Another method I’m partial to for this task is randomly pivoted Cholesky sampling, which is computationally cheaper than k-DPP sampling even with the Markov chain sampling approach to k-DPP sampling that we will discuss shortly. These are interesting questions for another time. For now, let us focus our attention on a different question: How would you sample from a k-DPP?

Determinantal Point Process by Markov Chains

Sampling from a k-DPP is a hard computational problem. Indeed, there are {N \choose k} possible k-element subsets of a set of N items. The number of possibilities gets large fast. If I have N = 100 items and want to pick k = 10 of them, there are already over 10 trillion possible combinations.

Markov chains offer one compelling way of sampling a k-DPP. First, we need a proposal distribution. Let’s choose the simplest one we can think of:

Proposal for k-DPP sampling. Suppose our current set of k items is S = \{s_1,\ldots,s_k\}. To generate a proposal, choose a uniformly random element s_{\rm old} out of S and a uniformly random element s_{\rm new} out of \{1,\ldots,N\} \setminus S. Propose S' obtained from S by replacing s_{\rm old} with s_{\rm new} (i.e., S' = S \cup \{s_{\rm new}\} \setminus \{s_{\rm old}\}).

Now, we need to compute the Metropolis–Hastings acceptance probability

    \[p_{\rm acc} = \min \left\{ 1 , \frac{\pi_{S'} T_{S'S}}{\pi_{S} T_{SS'}} \right\}.\]

For S and S' which differ only by the addition of one element and the removal of another, the proposal probabilities T_{S'S} and T_{SS'} are both equal to 1/(k(N-k)), T_{S'S} = T_{SS'} = 1/(k(N-k)). Using the formula for the probability \pi_S of drawing S from a k-DPP, we compute that

    \[\frac{\pi_{S'}}{\pi_S} = \frac{\det A(S',S')}{\det A(S,S)}.\]

Thus, the Metropolis–Hastings acceptance probability is just a ratio of determinants:

(Acc)   \[p_{\rm acc} = \min \left\{ 1 , \frac{\pi_{S'} T_{S'S}}{\pi_{S} T_{SS'}} \right\} = \min \left\{ 1, \frac{\det A(S',S')}{\det A(S,S)} \right\}. \]

And we’re done. Let’s summarize our sampling algorithm:

  1. Choose an initial set S_0 arbitrarily and set n := 0.
  2. Draw s_{\rm old} uniformly at random from S_n.
  3. Draw s_{\rm new} uniformly at random from \{1,\ldots,N\} \setminus S_n.
  4. Set S' := S_n \cup \{s_{\rm new}\} \setminus \{s_{\rm old}\}.
  5. With probability p_{\rm acc} defined in (Acc), accept and set S_{n+1} := S'. Otherwise, set S_{n+1} := S_n.
  6. Set n := n+1 and go to step 2.

This is a remarkably simple algorithm to sample from a complicated distribution. And it's fairly efficient as well. Analysis by Anari, Oveis Gharan, and Rezaei shows that, when you pick a good enough initial set S_0, this sampling algorithm produces approximate samples from a k-DPP in roughly Nk^2 steps.7They actually use a slight variant of this algorithm where the acceptance probabilities (Acc) are reduced by a factor of two. Observe that this still has the correct stationary distribution because detailed balance continues to hold. The factor of two is introduced to ensure that the Markov chain is primitive. Remarkably, if k is much smaller than N, this Markov chain-based algorithm samples from a k-DPP without even looking at all N^2 entries of the matrix A!
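To make the procedure concrete, here is a minimal Python/NumPy sketch of this swap-based sampler (illustrative only; it recomputes each determinant from scratch, whereas an efficient implementation would update the determinants incrementally):

```python
import numpy as np

def kdpp_metropolis(A, k, n_steps, rng=None):
    """Approximately sample a k-DPP with kernel matrix A by the swap chain:
    propose replacing a random element of S with a random element outside S
    and accept with probability min{1, det A(S',S') / det A(S,S)}."""
    rng = np.random.default_rng() if rng is None else rng
    N = A.shape[0]
    S = list(rng.choice(N, size=k, replace=False))       # arbitrary initial set
    det_S = np.linalg.det(A[np.ix_(S, S)])
    for _ in range(n_steps):
        i = rng.integers(k)                               # position of s_old in S
        s_new = rng.choice(np.setdiff1d(np.arange(N), S))
        S_prop = S.copy()
        S_prop[i] = s_new
        det_prop = np.linalg.det(A[np.ix_(S_prop, S_prop)])
        if rng.random() < min(1.0, det_prop / det_S):     # accept or stay
            S, det_S = S_prop, det_prop
    return sorted(int(s) for s in S)

# Example: display cases drawn from the bakery matrix above
A = np.array([[10.0, 9.0, 0.0], [9.0, 10.0, 0.0], [0.0, 0.0, 5.0]])
print(kdpp_metropolis(A, k=2, n_steps=1000))
```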

Upshot. Markov chains are a simple and general model for a state evolving randomly in time. Under mild conditions, Markov chains converge to a stationary distribution: In the limit of a large number of steps, the state of the system becomes randomly distributed in a way independent of how it was initialized. We can use Markov chains as algorithms to approximately sample from challenging distributions.

Note to Self: Norm of a Gaussian Random Vector

Let g be a standard Gaussian vector—that is, a vector populated by independent standard normal random variables. What is the expected length \mathbb{E} \|g\| of g? (Here, and throughout, \|\cdot\| denotes the Euclidean norm of a vector.) The length of g is the square root of the sum of the squares of n independent standard normal random variables

    \[\|g\| = \sqrt{g_1^2 + \cdots + g_n^2},\]

which is known as a \chi random variable with n degrees of freedom. (Not to be confused with a \chi^\mathbf{2} random variable!) Its mean value is given by the rather unpleasant formula

    \[\mathbb{E} \|g\| = \sqrt{2} \frac{\Gamma((n+1)/2)}{\Gamma(n/2)},\]

where \Gamma(\cdot) is the gamma function. If you are familiar with the definition of the gamma function, the derivation of this formula is not too hard—it follows from a change of variables to n-dimensional spherical coordinates.

This formula can be difficult to interpret and use. Fortunately, we have the rather nice bounds

(1)   \[\sqrt{n-1} < \frac{n}{\sqrt{n+1}} < \mathbb{E} \|g\| < \sqrt{n}. \]

This result appears, for example, on page 11 of this paper. These bounds show that, for large n, \mathbb{E} \|g\| is quite close to \sqrt{n}. The authors of the paper remark that this inequality can be proved by induction. I had difficulty reproducing the inductive argument for myself. Fortunately, I found a different proof which I thought was very nice, so I thought I would share it here.
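Before the proof, here is a quick numerical sanity check of (1), computing \mathbb{E}\|g\| from the gamma-function formula via log-gamma (a short illustrative script assuming SciPy is available):

```python
import numpy as np
from scipy.special import gammaln

def expected_norm(n):
    """E||g|| = sqrt(2) Gamma((n+1)/2) / Gamma(n/2), computed via log-gamma
    to avoid overflow for large n."""
    return np.sqrt(2.0) * np.exp(gammaln((n + 1) / 2) - gammaln(n / 2))

for n in [2, 5, 10, 100, 10_000]:
    e = expected_norm(n)
    # Check the chain of inequalities in (1)
    assert np.sqrt(n - 1) < n / np.sqrt(n + 1) < e < np.sqrt(n)
    print(n, n / np.sqrt(n + 1), e, np.sqrt(n))
```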

Our core tool will be Wendel’s inequality (see (7) in Wendel’s original paper): For x > 0 and 0 < s < 1, we have

(2)   \[\frac{x}{(x+s)^{1-s}} < \frac{\Gamma(x+s)}{\Gamma(x)} < x^s. \]

Let us first use Wendel’s inequality to prove (1). Indeed, invoke Wendel’s inequality with x = n/2 and s = 1/2 and multiply by \sqrt{2} to obtain

    \[\frac{\sqrt{2} \cdot n/2}{(n/2+1/2)^{1/2}} < \sqrt{2}\frac{\Gamma((n+1)/2)}{\Gamma(n/2)} = \mathbb{E}\|g\| < \sqrt{2}\cdot \sqrt{n/2},\]

which simplifies directly to (1).

Now, let’s prove Wendel’s inequality (2). The key property for us will be the strict log-convexity of the gamma function: For real numbers x,y > 0 and 0 < s < 1,

(3)   \[\Gamma((1-s)x + sy) < \Gamma(x)^{1-s} \Gamma(y)^s. \]

We take this property as established and use it to prove Wendel’s inequality. First, use the log-convexity property (3) with y = x+1 to obtain

    \[\Gamma(x+s) = \Gamma((1-s)x + s(x+1)) < \Gamma(x)^{1-s} \Gamma(x+1)^s.\]

Divide by \Gamma(x) and use the property that \Gamma(x+1)/\Gamma(x) = x to conclude

(4)   \[\frac{\Gamma(x+s)}{\Gamma(x)} < \left( \frac{\Gamma(x+1)}{\Gamma(x)} \right)^s = x^s. \]

This proves the upper bound in Wendel’s inequality (2). To prove the lower bound, invoke the upper bound (4) with x+s in place of x and 1-s in place of s to obtain

    \[\frac{\Gamma(x+1)}{\Gamma(x+s)} < (x+s)^{1-s}.\]

Multiplying by \Gamma(x+s), dividing by (x+s)^{1-s}\Gamma(x), and using \Gamma(x+1)/\Gamma(x) = x again yields

    \[\frac{\Gamma(x+s)}{\Gamma(x)} > \frac{\Gamma(x+1)}{\Gamma(x)} \cdot \frac{1}{(x+s)^{1-s}} = \frac{x}{(x+s)^{1-s}},\]

finishing the proof of Wendel’s inequality.

Notes. The upper bound in (1) can be proven directly by Lyapunov’s inequality: \mathbb{E} \|g\| \le (\mathbb{E} \|g\|^2)^{1/2} = n^{1/2}, where we use the fact that \|g\|^2 = g_1^2 + \cdots + g_n^2 is the sum of n random variables with mean one. The weaker lower bound \mathbb{E} \|g\| \ge \sqrt{n-1} follows from a weaker version of Wendel’s inequality, Gautschi’s inequality.

After the initial publication of this post, Sam Buchanan mentioned another proof of the lower bound \mathbb{E} \|g\| \ge \sqrt{n-1} using the Gaussian Poincaré inequality. This inequality states that, for a function f : \real^n \to \real,

    \[\Var(f(g)) \le \mathbb{E} \| \nabla f(g)\|^2.\]

To prove the lower bound, set f(g) := \|g\| which has gradient \nabla f(g) = g/\|g\|. Thus,

    \[\mathbb{E} \| \nabla f(g)\|^2 = 1 \ge \Var(f(g)) = \mathbb{E} \|g\|^2 - (\mathbb{E} \|g\|)^2 = n -  (\mathbb{E} \|g\|)^2.\]

Rearrange to obtain \mathbb{E} \|g\| \ge \sqrt{n-1}.

Stochastic Trace Estimation

I am delighted to share that the paper XTrace: Making the Most of Every Sample in Stochastic Trace Estimation, which I wrote with Joel A. Tropp and Robert J. Webber, has recently been released as a preprint on arXiv. In it, we consider the implicit trace estimation problem:

Implicit trace estimation problem: Given access to a square matrix A via the matrix–vector product operation \omega \mapsto A\omega, estimate its trace \tr A = \sum_{i=1}^n A_{ii}.

Algorithms for this task have many uses such as log-determinant computations in machine learning, partition function calculations in statistical physics, and generalized cross validation for smoothing splines. I described another application to counting triangles in a large network in a previous blog post.

Our paper presents new trace estimators XTrace and XNysTrace which are highly efficient, producing accurate trace approximations using a small budget of matrix–vector products. In addition, these algorithms are fast to run and are supported by theoretical results which explain their excellent performance. I really hope that you will check out the paper to learn more about these estimators!

For the rest of this post, I'm going to talk about the most basic stochastic trace estimation algorithm, the Girard–Hutchinson estimator. This seemingly simple algorithm exhibits a number of nuances and forms the backbone for more sophisticated trace estimators such as Hutch++, Nyström++, XTrace, and XNysTrace. Toward the end, this blog post will be fairly mathematical, but I hope that the beginning will be fairly accessible to all.

Girard–Hutchinson Estimator: The Basics

The Girard–Hutchinson estimator for the trace of a square matrix A is

(1)   \[\hat{\tr} = \frac{1}{m} \sum_{i=1}^m \omega_i^* A \omega_i. \]

Here, \omega_1,\ldots,\omega_m are random vectors, usually chosen to be statistically independent, and {}^* denotes the conjugate transpose of a vector or matrix. The Girard–Hutchinson estimator only depends on the matrix A through the matrix–vector products A\omega_1,\ldots,A\omega_m.
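As a concrete illustration, here is a minimal Python/NumPy sketch of the Girard–Hutchinson estimator, accessing A only through a user-supplied matrix–vector product routine (function and variable names are my own, and this is a sketch rather than a polished implementation):

```python
import numpy as np

def girard_hutchinson(matvec, n, m, test="gaussian", rng=None):
    """Estimate tr(A) from m matrix-vector products w -> A @ w using random
    test vectors (real Gaussian or random-sign)."""
    rng = np.random.default_rng() if rng is None else rng
    total = 0.0
    for _ in range(m):
        if test == "gaussian":
            w = rng.standard_normal(n)               # real Gaussian test vector
        else:
            w = rng.choice([-1.0, 1.0], size=n)      # random-sign (Rademacher) test vector
        total += w @ matvec(w)                       # single-sample estimate w^T A w
    return total / m

# Example: a random 1000 x 1000 psd matrix, trace estimated from 100 products
rng = np.random.default_rng(0)
G = rng.standard_normal((1000, 50))
A = G @ G.T
print(girard_hutchinson(lambda w: A @ w, n=1000, m=100), np.trace(A))
```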

Unbiasedness

Provided the random vectors are isotropic

(2)   \[\mathbb{E} [\omega_i\omega_i^*] = I, \]

the Girard–Hutchinson estimator is unbiased:

(3)   \[\mathbb{E} [\hat{\tr}] = \tr A.\]

Let us confirm this claim in some detail. First, we use linearity of expectation to evaluate

(4)   \[\mathbb{E} [\hat{\tr}] = \mathbb{E} \left[ \frac{1}{m} \sum_{i=1}^m \omega_i^*A\omega_i \right] = \frac{1}{m} \sum_{i=1}^m \mathbb{E} \left[ \omega_i^* A \omega_i\right]. \]

Therefore, to prove that \mathbb{E} [\hat{\tr}] = \tr A, it is sufficient to prove that \mathbb{E} \left[\omega_i^*A\omega_i\right] = \tr A for each i.

When working with traces, there are two tricks that solve 90% of derivations. The first trick is that, if we view a number as a 1\times 1 matrix, then a number equals its trace, x = \tr x. The second trick is the cyclic property: For a k\times p matrix B and a p\times k matrix C, we have \tr (BC) = \tr (CB). The cyclic property should be handled with care when one works with a product of three or more matrices. For instance, we have

    \[\tr[BCD] = \tr[(BC)D] = \tr[D(BC)] = \tr[DBC].\]

However,

    \[\tr [BCD] \ne \tr[CBD] \quad \text{in general}.\]

One should think of the matrix product BCD as beads on a closed loop of string. One can move the last bead D to the front of the other two, \tr [BCD] = \tr[DBC], but not interchange two beads, \tr[BCD] \ne \tr[CBD].

With these tricks in hand, let's return to proving that \mathbb{E} \left[\omega_i^*A\omega_i\right] = \tr A for every i. Applying both tricks,

    \[\mathbb{E} \left[\omega_i^*A\omega_i\right] = \mathbb{E} \tr \left[\omega_i^*A\omega_i\right] = \mathbb{E} \tr \left[A\omega_i\omega_i^*\right].\]

The expectation is a linear operation and the matrix A is non-random, so we can bring the expectation into the trace as

    \[\mathbb{E} \left[\omega_i^*A\omega_i\right] = \mathbb{E} \tr \left[A\omega_i\omega_i^*\right] = \tr(A \mathbb{E}[\omega_i\omega_i^*] ).\]

Invoke the isotropy condition (2) and conclude:

    \[\mathbb{E} \left[\omega_i^*A\omega_i\right] = \tr(A \mathbb{E}[\omega_i\omega_i^*] ) = \tr(A\cdot I) = \tr A.\]

Plugging this into (4) confirms the unbiasedness claim (3).

Variance

Continue to assume that the \omega_i's are isotropic (2) and now assume that \omega_1,\ldots,\omega_m are independent. By independence, the variance can be written as

    \[\Var(\hat{\tr}) = \frac{1}{m^2} \sum_{i=1}^m \Var(\omega_i^*A\omega_i).\]

Assuming that \omega_1,\ldots,\omega_m are identically distributed \omega_1,\ldots,\omega_m \sim \omega, we then get

    \[\Var(\hat{\tr}) = \frac{1}{m} \Var(\omega^*A\omega).\]

The variance decreases like 1/m, which is characteristic of Monte Carlo-type algorithms. Since \hat{\tr} is unbiased (i.e., (3)), this means that the mean-square error decays like 1/m, so the average error (more precisely, the root-mean-square error) decays like

    \[\left| \hat{\tr} - \tr A \right| \lessapprox \frac{\mathrm{const}}{\sqrt{m}}.\]

This type of convergence is very slow. If I want to decrease the error by a factor of 10, I must do 100\times the work!

Variance-reduced trace estimators like Hutch++ and our new trace estimator XTrace improve the rate of convergence substantially. Even in the worst case, Hutch++ and XTrace reduce the variance at a rate 1/m^2 and the (root-mean-square) error at a rate 1/m:

    \[\Var(\hat{\tr}_{\text{H++ or X}}) \le \frac{\mathrm{const}}{m^2},\quad \left| \hat{\tr}_{\text{H++ or X}} - \tr A \right| \lessapprox \frac{\mathrm{const}}{m}.\]

For matrices with rapidly decreasing singular values, the variance and error can decrease much faster than this.

Variance Formulas

As the rate of convergence for the Girard–Hutchinson estimator is so slow, it is imperative to pick a distribution on test vectors \omega that makes the variance of the single–sample estimate \omega^*A\omega as low as possible. In this section, we will provide several explicit formulas for the variance of the Girard–Hutchinson estimator. Derivations of these formulas will appear at the end of this post. These variance formulas help illuminate the benefits and drawbacks of different test vector distributions.

To express the formulas, we will need some notation. For a complex number z = a + bi we use \Re(z) = a and \Im(z) = b to denote the real and imaginary parts. The variance of a random complex number z is

    \[\Var(z) := \mathbb{E} |z - \mathbb{E} z|^2 = \Var(\Re z) + \Var(\Im z).\]

The Frobenius norm of a matrix A is

    \[\left\|A\right\|_{\rm F}^2 = \sum_{i,j} |A_{ij}|^2.\]

If A is real symmetric or complex Hermitian with (real) eigenvalues \lambda_1,\ldots,\lambda_n, we have

(5)   \[\left\|A\right\|_{\rm F}^2 = \sum_{i=1}^n \lambda_i^2. \]

A^\top denotes the ordinary transpose of A and A^* denotes the conjugate transpose of A.

Real-Valued Test Vectors

We first focus on real-valued test vectors \omega. Since \omega is real, we can use the ordinary transpose {}^\top rather than the conjugate transpose {}^*. Since \omega^\top A\omega is a number, it is equal to its own transpose:

    \[\omega^\top A \omega = (\omega^\top A \omega)^\top = \omega^\top A^\top \omega.\]

Therefore,

    \[\omega^\top A\omega = \frac{\omega^\top A \omega + \omega^\top A^\top \omega}{2} = \omega^\top \left( \frac{A + A^\top}{2} \right)\omega.\]

The Girard–Hutchinson trace estimator applied to A is the same as the Girard–Hutchinson estimator applied to the symmetric part of A, (A+A^\top)/2.

For the following results, assume A is symmetric, A = A^\top.

  1. Real Gaussian: \omega_1,\ldots,\omega_m are independent standard normal random vectors.

        \[\Var(\omega^\top A\omega) = 2 \left\|A\right\|_{\rm F}^2.\]

  2. Uniform signs (Rademachers): \omega_1,\ldots,\omega_m are independent random vectors with uniform \pm 1 coordinates.

        \[\Var(\omega^\top A \omega) = 2\sum_{i\ne j} |A_{ij}|^2.\]

  3. Real sphere: Assume \omega_1,\ldots,\omega_m are uniformly distributed on the real sphere of radius \sqrt{n}: \omega \sim \text{Uniform} \{x\in \mathbb{R}^n : x^\top x = n\}.

        \[\Var(\omega^\top A\omega) = \frac{2n}{n+2} \left( \left\|A\right\|_{\rm F}^2 - \frac{1}{n} |\tr A|^2 \right).\]

These formulas continue to hold for nonsymmetric A by replacing A by its symmetric part (A+A^\top)/2 on the right-hand sides of these variance formulas.
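As a quick numerical sanity check of the first two formulas, here is a short Python/NumPy script comparing Monte Carlo estimates of \Var(\omega^\top A\omega) with 2\left\|A\right\|_{\rm F}^2 and 2\sum_{i\ne j}|A_{ij}|^2 (illustrative only; the agreement is approximate because of sampling error):

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 50, 100_000
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                          # symmetrize, per the assumption above

# Single-sample quadratic forms w^T A w for two test-vector distributions
W_gauss = rng.standard_normal((trials, n))
W_sign = rng.choice([-1.0, 1.0], size=(trials, n))
q_gauss = ((W_gauss @ A) * W_gauss).sum(axis=1)
q_sign = ((W_sign @ A) * W_sign).sum(axis=1)

off_diag_sq = np.sum(A**2) - np.sum(np.diag(A)**2)
print(np.var(q_gauss), 2 * np.sum(A**2))   # approx 2 ||A||_F^2
print(np.var(q_sign), 2 * off_diag_sq)     # approx 2 sum_{i != j} A_ij^2
```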

Complex-Valued Test Vectors

We now move our focus to complex-valued test vectors \omega. As a rule of thumb, one should typically expect that the variance for complex-valued test vectors applied to a real symmetric matrix A is about half that of the natural real counterpart—e.g., with complex Gaussians, you get about half the variance you would with real Gaussians.

A square complex matrix has a Cartesian decomposition

    \[A = A^{\rm H} + i A^{\rm SH}\]

where

    \[A^{\rm H} = \frac{A+A^*}{2} ,\quad A^{\rm SH} = \frac{A - A^*}{2i}\]

denote the Hermitian and skew-Hermitian parts of A. Similar to how the imaginary part of a complex number is real, the skew-Hermitian part of a complex matrix is Hermitian (and i A^{\rm SH} is skew-Hermitian). Since A^{\rm H} and A^{\rm SH} are both Hermitian, we have

    \[\Re(\omega^* A\omega) = \omega^* A^{\rm H} \omega, \quad \Im (\omega^* A \omega) = \omega^* A^{\rm SH} \omega.\]

Consequently, the variance of \omega^*A \omega can be broken into Hermitian and skew-Hermitian parts:

    \[\Var(\omega^* A\omega) = \Var(\omega^* A^{\rm H}\omega) + \Var(\omega^* A^{\rm SH}\omega).\]

For this reason, we will state the variance formulas only for Hermitian A, with the formula for general A following from the Cartesian decomposition.

For the following results, assume A is Hermitian, A = A^*.

  1. Complex Gaussian: \omega_1,\ldots,\omega_m are independent standard complex random vectors, i.e., each \omega_i has iid entries distributed as (g_1+ig_2)/\sqrt{2} for g_1,g_2 standard normal random variables.

        \[\Var(\omega^* A\omega) = \left\|A\right\|_{\rm F}^2.\]

  2. Uniform phases (Steinhauses): \omega_1,\ldots,\omega_m are independent random vectors whose entries are uniform on the complex unit circle \{ z \in \complex : |z| = 1 \}.

        \[\Var(\omega^* A \omega) = \sum_{i\ne j} |A_{ij}|^2.\]

  3. Complex sphere: Assume \omega_1,\ldots,\omega_m are uniformly distributed on the complex sphere of radius \sqrt{n}: \omega \sim \text{Uniform} \{x\in \complex^n : x^* x = n\}.

        \[\Var(\omega^* A\omega) = \frac{n}{n+1} \left( \left\|A\right\|_{\rm F}^2 - \frac{1}{n} |\tr A|^2 \right).\]

Optimality Properties

Let us finally address the question of what the best choice of test vectors is for the Girard–Hutchinson estimator. We will state two results with different restrictions on \omega_1,\ldots,\omega_m.

Our first result, due to Hutchinson, is valid for real symmetric matrices with real test vectors.

Optimality (independent test vectors with independent coordinates). If the test vectors \omega_1,\ldots,\omega_m \in \mathbb{R}^n are isotropic (2), independent from each other, and have independent entries, then for any fixed real symmetric matrix A, the minimum variance for \hat{\tr} is obtained when \omega_1,\ldots,\omega_m are populated with random signs (\omega_i)_j \sim \textnormal{Uniform} \{\pm 1\}.

The next optimality results will have real and complex versions. To present the results for \mathbb{R}-valued and \complex-valued test vectors on a unified footing, let \field denote either \mathbb{R} or \complex. We let a \field-Hermitian matrix be either a real symmetric matrix (if \field = \mathbb{R}) or a complex Hermitian matrix (if \field = \complex). Let a \field-unitary matrix be either a real orthogonal matrix (if \field = \mathbb{R}) or a complex unitary matrix (if \field = \complex).

The condition that the vectors \omega_1,\ldots,\omega_m have independent entries is often too restrictive in practice. It rules out, for instance, the case of uniform vectors on the sphere. If we relax this condition, we get a different optimal distribution:

Optimality (independent test vectors). Consider any set \mathscr{A} of \field-Hermitian matrices which is invariant under \field-unitary similarity transformations:

    \[\text{If $A \in \mathscr{A}$ and $U$ is $\field$-unitary, then $U^*AU \in \mathscr{A}$.}\]

Assume that the test vectors \omega_1,\ldots,\omega_m are independent and isotropic (2). The worst-case variance \sup_{A \in \mathscr{A}} \Var(\hat{\tr}) is minimized by choosing \omega_1,\ldots,\omega_m uniformly on the \field-sphere: \omega_1,\ldots,\omega_m \sim \text{Uniform} \{ x \in \field^n : x^*x =n \}.

More simply, if you want your stochastic trace estimator to be effective for a class of inputs \mathscr{A} (closed under \field-unitary similarity transformations) rather than a single input matrix A, then the best choice of test vectors is to draw them uniformly from the sphere. Examples of classes of matrices \mathscr{A} include:

  • Fixed eigenvalues. For fixed real eigenvalues \lambda_1,\ldots,\lambda_n \in \mathbb{R}, the set of all \field-Hermitian matrices with these eigenvalues.
  • Density matrices. The class of all trace-one psd matrices.
  • Frobenius norm ball. The class of all \field-Hermitian matrices of Frobenius norm at most 1.

Derivation of Formulas

In this section, we provide derivations of the variance formulas. I have chosen to focus on derivations which are shorter but use more advanced techniques rather than derivations which are longer but use fewer tricks.

Real Gaussians

First assume A is real. Since A is real symmetric, A has an eigenvalue decomposition A = Q\Lambda Q^\top, where Q is orthogonal and \Lambda is a diagonal matrix reporting A's eigenvalues. Since the real Gaussian distribution is invariant under orthogonal transformations, \omega^\top A\omega = (Q^\top \omega)^\top \Lambda (Q^\top\omega) has the same distribution as \omega^\top \Lambda \omega. Therefore,

    \[\Var(\omega^\top A \omega) = \Var(\omega^\top \Lambda \omega) = \Var \left( \sum_{i=1}^n \lambda_i \omega_i^2 \right) = \sum_{i=1}^n \lambda_i^2 \Var(\omega_i^2) = 2\sum_{i=1}^n \lambda_i^2 = 2\left\|A\right\|_{\rm F}^2.\]

Here, we used that the variance of a squared standard normal random variable is two.

For a non-real matrix A, we can break A into its entrywise real and imaginary parts A = \mathfrak{R}(A) + i \, \mathfrak{I}(A). Thus,

    \[\Var(\omega^\top A \omega) = \Var(\omega^\top \mathfrak{R}(A) \omega) + \Var(\omega^\top \mathfrak{I}(A) \omega) = 2\left\|\mathfrak{R}(A)\right\|_{\rm F}^2 + 2\left\|\mathfrak{I}(A)\right\|_{\rm F}^2 = 2\left\|A\right\|_{\rm F}^2.\]

Uniform Signs

First, compute

    \[\omega^\top A \omega - \mathbb{E}[\omega^\top A \omega] = \sum_{i,j=1}^n A_{ij} \omega_i\omega_j - \sum_{i=1}^n A_{ii} = \sum_{i\ne j} A_{ij} \omega_i\omega_j + \sum_{i=1}^n A_{ii}(\omega_i^2-1).\]

For a vector \omega of uniform random signs, we have \omega_i^2 = 1 for every i, so the second sum vanishes. Note that we have assumed A symmetric, so the sum over i\ne j can be replaced by two times the sum over i < j:

    \[\omega^\top A \omega - \mathbb{E}[\omega^\top A \omega] = 2\sum_{i< j} A_{ij} \omega_i\omega_j.\]

Note that \{ \omega_i \omega_j : i < j\} are pairwise independent. As a simple exercise, one can verify that the identity

    \[\Var(a_1 X_1+\cdots+a_kX_k) = |a_1|^2 \Var(X_1) + \cdots + |a_k|^2 \Var(X_k)\]

holds for any pairwise independent family of random variables X_1,\ldots,X_k and numbers a_1,\ldots,a_k. Ergo,

    \begin{align*}\Var(\omega^\top A\omega) &= \Var(\omega^\top A \omega - \mathbb{E}[\omega^\top A \omega]) \\&= \Var\left(\sum_{i< j} 2A_{ij} \omega_i\omega_j\right) \\&= \sum_{i<j} 4 |A_{ij}|^2 \Var(\omega_i\omega_j) \\&= \sum_{i<j} 4 |A_{ij}|^2 \\&= 2 \sum_{i\ne j} |A_{ij}|^2.\end{align*}

In the second-to-last line, we use the fact that \omega_i\omega_j is a uniform random sign, which has variance 1. The final line is a consequence of the symmetry of A.

Uniform on the Real Sphere

The simplest proof I know is by the “camel principle”. Here’s the story (a lightly edited quotation from MathOverflow):

A father left 17 camels to his three sons and, according to the will, the eldest son was to be given a half of the camels, the middle son one-third, and the youngest son the one-ninth. The sons did not know what to do since 17 is not evenly divisible into either two, three, or nine parts, but a wise man helped the sons: he added his own camel, the oldest son took 18/2=9 camels, the second son took 18/3=6 camels, the third son 18/9=2 camels and the wise man took his own camel and went away.

We are interested in a vector \omega which is uniform on the sphere of radius \sqrt{n}. Performing averages on the sphere is hard, so we add a camel to the problem by “upgrading” \omega to a spherically symmetric vector g which has a random length. We want to pick a distribution for which the computation \Var(g^\top A g) is easy. Fortunately, we already know such a distribution, the Gaussian distribution, for which we already calculated \Var(g^\top A g) = 2\left\|A\right\|_{\rm F}^2.

The Gaussian vector g and the uniform vector \omega on the sphere are related by

    \[g = \sqrt{\frac{a}{n}} \omega,\]

where a is the squared length of the Gaussian vector g. In particular, a has the distribution of the sum of n squared Gaussian random variables, which is known as a \chi^2 random variable with n degrees of freedom.

Now, we take the camel back. Compute the variance of g^\top A g using the chain rule for variance:

    \[\Var(g^\top A g) = \mathbb{E}[\Var(g^\top A g \mid a)] + \Var(\mathbb{E}[g^\top A g \mid a]).\]

Here, \Var(\cdot \mid a) and \mathbb{E}[ \cdot \mid a] denote the conditional variance and conditional expectation with respect to the random variable a. The quick and dirty ways of working with these are to treat the random variable a “like a constant” with respect to the conditional variance and expectation.

Plugging in the formula g = \sqrt{a/n} \cdot \omega and treating a “like a constant”, we obtain

    \begin{align*}\Var(g^\top A g) &= \mathbb{E}[\Var(a/n \cdot \omega^\top A \omega \mid a)] + \Var(\mathbb{E}[a/n \cdot \omega^\top A \omega \mid a]) \\&=\mathbb{E}[(a/n)^2\Var(\omega^\top A \omega)] + \Var(a/n \cdot \mathbb{E}[\omega^\top A \omega]) \\&= \frac{1}{n^2} \mathbb{E}[a^2] \cdot \Var(\omega^\top A \omega) + \frac{1}{n^2} \Var(a) |\mathbb{E} [\omega^\top A \omega]|^2.\end{align*}

As we mentioned, a is a \chi^2 random variable with n degrees of freedom and \mathbb{E}[a^2] and \Var(a) are known quantities that can be looked up:

    \[\mathbb{E}[a^2] = n(n+2), \quad \Var(a) = 2n.\]

We know \Var(g^\top A g) = 2\left\|A\right\|_{\rm F}^2 and \mathbb{E} [\omega^\top A \omega] = \tr A. Plugging these all in, we get

    \[2\left\|A\right\|_{\rm F}^2 = \frac{n+2}{n} \Var(\omega^\top A\omega) + \frac{2}{n} |\tr A|^2.\]

Rearranging, we obtain

    \[\Var(\omega^\top A\omega) = \frac{2n}{n+2} \left( \left\|A\right\|_{\rm F}^2 - \frac{1}{n}|\tr A|^2\right).\]

Complex Gaussians

The trick is the same as for real Gaussians. By invariance of complex Gaussian random vectors under unitary transformations, we can reduce to the case where A is a diagonal matrix populated with eigenvalues \lambda_1,\ldots,\lambda_n. Then

    \[\Var(\omega^*A \omega) = \Var \left( \sum_{i=1}^n \lambda_i |\omega_i|^2 \right) = \sum_{i=1}^n \Var(|\omega_i|^2) \lambda_i^2 = \sum_{i=1}^n \lambda_i^2 = \left\|A\right\|_{\rm F}^2.\]

Here, we use the fact that 2|\omega_i|^2 is a \chi^2 random variable with two degrees of freedom, which has variance four.

Random Phases

The trick is the same as for uniform signs. A short calculation (remembering that A is Hermitian and thus \overline{A_{ij}} = A_{ji}) reveals that

    \[\Var\left( \omega^* A \omega \right) = \Var \left( \sum_{i<j} 2 \Re(A_{ij} \overline{\omega_i} \omega_j) \right).\]

The random variables \{\overline{\omega_i} \omega_j : i < j\} are pairwise independent so we have

    \[\Var\left( \omega^* A \omega \right) = \Var \left( \sum_{i<j} 2 \Re(A_{ij} \overline{\omega_i} \omega_j) \right) = 4\sum_{i<j} \Var \left( \Re(A_{ij} \overline{\omega_i} \omega_j) \right).\]

Since \overline{\omega}_i \omega_j is uniformly distributed on the complex unit circle, we can assume without loss of generality that A_{ij} = |A_{ij}|. Thus, letting \phi be uniform on the complex unit circle,

    \[\Var\left( \omega^* A \omega \right) = 4\sum_{i<j} \Var \left( |A_{ij}|\Re(\phi) \right) = 4\Var\left( \Re(\phi) \right)\sum_{i<j}|A_{ij}|^2.\]

The real and imaginary parts of \phi have the same distribution so

    \[1 = \Var(\phi) = \Var(\Re \phi) + \Var(\Im \phi) = 2 \Var(\Re \phi)\]

so \Var(\Re \phi) = 1/2. Thus

    \[\Var\left( \omega^* A \omega \right) = 2 \sum_{i<j}|A_{ij}|^2 = \sum_{i\ne j} |A_{ij}|^2.\]

Uniform on the Complex Sphere: Derivation 1 by Reduction to Real Case

There are at least three simple ways of deriving this result: the camel trick, reduction to the real case, and Haar integration. Each of these techniques illustrates a trick that is useful in its own right beyond the context of trace estimation. Since we have already seen an example of the camel trick for the real sphere, I will present the other two derivations.

Let us begin with the reduction to the real case. Let \mathfrak{R}(\cdot) and \mathfrak{I}(\cdot) denote the real and imaginary parts of a vector or matrix, taken entrywise. The key insight is that if \omega is a uniform random vector on the complex sphere of radius \sqrt{n}, then

    \[\mathscr{R}(\omega) := \twobyone{\mathfrak{R}(\omega)}{\mathfrak{I}(\omega)}\in\real^{2n} \quad \text{is a uniform random vector on the real sphere of radius $\sqrt{n}$}.\]

We’ve converted the complex vector \omega into a real vector \mathscr{R}(\omega).

Now, we need to convert the complex matrix A into a real matrix \mathscr{R}(A). To do this, recall that one way of representing complex numbers is by 2\times 2 matrices:

    \[a + bi \iff \twobytwo{a}{-b}{b}{a}.\]

Using this correspondence, addition and multiplication of complex numbers can be carried out by addition and multiplication of the corresponding matrices.

To convert complex matrices to real matrices, we use a matrix-version of the same representation:

    \[\mathscr{R}(A) = \twobytwo{\mathfrak{R}(A)}{-\mathfrak{I}(A)}{\mathfrak{I}(A)}{\mathfrak{R}(A)}.\]

One can check that addition and multiplication of complex matrices can be carried out by addition and multiplication of the corresponding “realified” matrices, i.e.,

    \[\mathscr{R}(A + B) = \mathscr{R}(A) + \mathscr{R}(B), \quad \mathscr{R}(A\cdot B) = \mathscr{R}(A) \cdot \mathscr{R}(B)\]

holds for all complex matrices A and B.

We’ve now converted complex matrix A and vector \omega into real matrix \mathscr{R}(A) and vector \mathscr{R}(\omega). Let’s compare \omega^*A\omega to \mathscr{R}(\omega)^\top\mathscr{R}(A)\mathscr{R}(\omega). A short calculation reveals

    \[\omega^*A\omega = \mathscr{R}(\omega)^\top \mathscr{R}(A)\mathscr{R}(\omega) .\]

Since \mathscr{R}(\omega) is a uniform random vector on the sphere of radius \sqrt{n}, \sqrt{2}\cdot \mathscr{R}(\omega) is a uniform random vector on the sphere of radius \sqrt{2n}. Thus, by the variance formula for the real sphere, we get

    \[\Var(\omega^*A\omega) = \Var[(\sqrt{2}\mathscr{R}(\omega))^\top (\mathscr{R}(A)/2)(\sqrt{2}\mathscr{R}(\omega) )] = \frac{4n}{2n+2} \left[ \|\mathscr{R}(A)/2\|_{\rm F}^2 - \frac{1}{8n}(\tr\mathscr{R}(A))^2 \right].\]

A short calculation verifies that \tr \mathscr{R}(A) = 2\tr A and \norm{\mathscr{R}(A)}_{\rm F}^2 = 2\|A\|_{\rm F}^2. Plugging this in, we obtain

    \[\Var(\omega^*A\omega)= \frac{n}{n+1} \left[ \|A\|_{\rm F}^2 - \frac{1}{n}(\tr A)^2  \right].\]

Uniform on the Complex Sphere: Derivation 2 by Haar Integration

The proof by reduction to the real case requires some cumbersome calculations and requires that we have already computed the variance in the real case by some other means. The method of Haar integration is more slick, but it requires some pretty high-power machinery. Haar integration may be a little bit overkill for this problem, but this technique is worth learning as it can handle some truly nasty expected value computations that appear, for example, in quantum information.

We seek to compute

    \[\mathbb{E} [(\omega^*A \omega)^2].\]

The first trick will be to write this expression as a single matrix trace using the tensor (Kronecker) product \otimes. For those unfamiliar with the tensor product, the main properties we will be using are

(6)   \[(A\otimes B) (C\otimes D) = (AB) \otimes (CD), \quad \tr(A\otimes B) = \tr A \cdot \tr B. \]

We saw in the proof of unbiasedness that

    \[\omega^* A \omega = \tr (\omega^*A\omega) = \tr (A \omega\omega^*).\]

Therefore, by (6),

    \[(\omega^*A\omega)^2 = (\tr [A \omega\omega^*])^2 = \tr [A\omega\omega^* \otimes A\omega\omega^*] = \tr [(A\otimes A) (\omega\omega^* \otimes \omega\omega^*)].\]

Thus, to evaluate \mathbb{E}[(\omega^*A\omega)^2], it will be sufficient to evaluate \mathbb{E}[\omega\omega^* \otimes \omega\omega^*]. Fortunately, there is a useful formula for this expectation provided by a field of mathematics known as representation theory (see Lemma 1 in this paper):

    \[\mathbb{E}[ \omega\omega^* \otimes \omega\omega^*] = \frac{2n}{n+1} \operatorname{Proj}_{\operatorname{Sym}^2(\complex^n)}.\]

Here, \operatorname{Proj}_{\operatorname{Sym}^2(\complex^n)} is the orthogonal projection onto the space of symmetric two-tensors \operatorname{Sym}^2(\complex^n) = \operatorname{span} \{ v \otimes v : v \in \complex^n \}. Therefore, we have that

    \[\mathbb{E}[(\omega^*A\omega)^2] = \tr [(A\otimes A) \mathbb{E}(\omega\omega^* \otimes \omega\omega^*)] = \frac{2n}{n+1} \tr [(A\otimes A) \operatorname{Proj}_{\operatorname{Sym}^2(\complex^n)}].\]

To evaluate the trace on the right-hand side of this equation, there is another formula (see Lemma 6 in this paper):

    \[\tr \left[(A\otimes B) \operatorname{Proj}_{\operatorname{Sym}^2(\complex^n)}\right] = \frac{1}{2} \left( \tr(AB) + \tr A \cdot \tr B \right).\]

Therefore, we conclude

    \begin{align*}\Var(\omega^* A \omega) &= \mathbb{E}[(\omega^*A\omega)^2] - (\mathbb{E}[\omega^*A\omega])^2 \\&= \frac{2n}{n+1}\tr [(A\otimes A) \operatorname{Proj}_{\operatorname{Sym}^2(\complex^n)}] - (\tr A)^2 \\&= \frac{n}{n+1}\left[ \tr A^2 + (\tr A)^2 \right] - (\tr A)^2 \\&= \frac{n}{n+1}\left[ \left\|A\right\|_{\rm F}^2 - \frac{1}{n} (\tr A)^2 \right].\end{align*}

Proof of Optimality Properties

In this section, we provide proofs of the two optimality properties.

Optimality: Independent Vectors with Independent Coordinates

Assume A is real and symmetric and suppose that \omega is isotropic (2) with independent coordinates. The isotropy condition

    \[\mathbb{E}[\omega\omega^\top] = I\]

implies that \mathbb{E}[\omega_i\omega_j] = \delta_{ij}, where \delta is the Kronecker symbol. Using this fact, we compute the second moment:

    \begin{align*}\mathbb{E}[ (\omega^*A \omega)^2] &= \mathbb{E}\left[ \left( \sum_{i=1}^n A_{ii} \omega_i^2 +2 \sum_{i<j} A_{ij}\omega_i\omega_j \right)^2\right] \\&= \sum_{i=1}^n A_{ii}^2 \mathbb{E}[\omega_i^4] + \sum_{i<j} (2A_{ii}A_{jj}+4A_{ij}^2) \mathbb{E}[\omega_i^2]\mathbb{E}[\omega_j^2] \\&= \sum_{i=1}^n A_{ii}^2 \mathbb{E}[\omega_i^4] + \sum_{i<j} (2A_{ii}A_{jj}+4A_{ij}^2) .\end{align*}

Thus

    \[\Var(\omega^*A\omega) = \mathbb{E}[ (\omega^*A \omega)^2] - (\mathbb{E}[\omega^* A \omega])^2 = \sum_{i=1}^n A_{ii}^2 (\mathbb{E}[|\omega_i|^4]-1) + 4\sum_{i<j} A_{ij}^2.\]

The variance is minimized by choosing \omega with \mathbb{E} \omega_i^4 as small as possible. Since \mathbb{E} \omega_i^2 = 1, Jensen's inequality gives \mathbb{E} \omega_i^4 \ge (\mathbb{E} \omega_i^2)^2 = 1, and the smallest possible value \mathbb{E} \omega_i^4 = 1 is obtained by populating \omega with random signs.

Optimality: Independent Vectors

This result appears to have first been proven by Richard Kueng in unpublished work. We use an argument suggested to me by Robert J. Webber.

Assume \mathscr{A} is a class of \field-Hermitian matrices closed under \field-unitary similarity transformations and that \omega is an isotropic random vector (2). Decompose the test vector as

    \[\omega = a \cdot s \quad \text{for} \quad a \in [0,+\infty), \: s \in\{x\in \field^n : x^*x = n \}.\]

First, we shall show that the variance is reduced by replacing s with a vector t drawn uniformly from the sphere

(7)   \[\sup_{A\in\mathscr{A}} \Var(\tilde{\omega}^*A\tilde{\omega}) \le \sup_{A\in\mathscr{A}} \Var(\omega^*A\omega), \]

where

(8)   \[\tilde{\omega} = a\cdot t \quad \text{and}\quad t\sim \text{Uniform} \{ x \in \field^n :x^*x = n \} \quad \text{is independent of $a$}. \]

Note that such a t can be generated as t = Qs for a uniformly random \field-unitary matrix Q. Therefore, we have

    \begin{align*}\sup_{A\in\mathscr{A}} \Var(\tilde{\omega}^*A\tilde{\omega})&= \sup_{A\in\mathscr{A}} \left[\mathbb{E}[(\tilde{\omega}^*A\tilde{\omega})^2] - (\tr A)^2\right]\\&= \sup_{A\in\mathscr{A}} \left[\mathbb{E}[(a^2 \cdot s^*(Q^*AQ)s)^2] - (\tr (Q^*AQ))^2\right].\end{align*}

Now apply Jensen’s inequality only over the randomness in Q to obtain

    \begin{align*}\sup_{A\in\mathscr{A}} \Var(\tilde{\omega}^*A\tilde{\omega})&= \sup_{A\in\mathscr{A}} \left[\mathbb{E}[(a^2 \cdot s^*(Q^*AQ)s)^2] - (\tr (Q^*AQ))^2\right] \\&\le \mathbb{E}_Q \sup_{A\in\mathscr{A}} \left[\mathbb{E}_{a,s}[(a^2 \cdot s^*(Q^*AQ)s)^2] - (\tr (Q^*AQ))^2\right].\end{align*}

Finally, note that since \mathscr{A} is closed under \field-unitary similarity transformations, the supremum over Q^*AQ for A \in \mathscr{A} is the same as the supremum over A \in \mathscr{A}, so we obtain

    \begin{align*}\sup_{A\in\mathscr{A}} \Var(\tilde{\omega}^*A\tilde{\omega})&\le \mathbb{E}_Q \sup_{A\in\mathscr{A}} \left[\mathbb{E}_{a,s}[(a^2 \cdot s^*(Q^*AQ)s)^2] - (\tr (Q^*AQ))^2\right] \\&= \mathbb{E}_Q \sup_{A\in\mathscr{A}} \left[\mathbb{E}_{a,s}[(a^2 \cdot s^*As)^2] - (\tr A)^2\right] \\&= \sup_{A\in\mathscr{A}} \Var(\omega^*A\omega).\end{align*}

We have successfully proven (7). This argument is a specialized version of a far more general result which appears as Proposition 4.1 in this paper.

Next, we shall prove

(9)   \[\sup_{A\in\mathscr{A}} \Var(t^*At) \le \sup_{A\in\mathscr{A}} \Var(\tilde{\omega}^*A\tilde{\omega}), \]

where t is still defined as in (8). Indeed, using the chain rule for variance, we obtain

    \begin{align*}\Var(\tilde{\omega}^*A\tilde{\omega})&= \Var(a^2\cdot t^*At) \\&= \mathbb{E}[\Var(a^2\cdot t^* A t \mid a)] + \Var(\mathbb{E}[a^2\cdot t^* A t \mid a]) \\&= \mathbb{E}[a^4]\Var(t^* A t )+ (\tr A)^2\Var(a^2) \\&\ge \mathbb{E}[a^4]\Var(t^* A t ).\end{align*}

Here, we have used that t is uniform on the sphere and thus \mathbb{E}[t^*At] = \tr A. By definition, a is the length of \omega divided by \sqrt{n}. Therefore,

    \[\mathbb{E}[a^2] = \frac{1}{n}\mathbb{E}[\omega^*\omega] = \frac{1}{n} \mathbb{E}[\tr (\omega\omega^*)] = \frac{1}{n} \tr (\mathbb{E}[\omega\omega^*]) = \frac{\tr I}{n} = 1.\]

Therefore, by Jensen’s inequality,

    \[\mathbb{E}[a^4] = \mathbb{E}[(a^2)^2] \ge (\mathbb{E}[a^2])^2 = 1.\]

Thus

    \[\Var(\tilde{\omega}^*A\tilde{\omega}) \ge \mathbb{E}[a^4]\Var(t^* A t ) \ge \Var(t^*At) \quad \text{for every }A,\]

which proves (9).

Low-Rank Approximation Toolbox: Nyström, Cholesky, and Schur

In the last post, we looked at the Nyström approximation to an N\times N positive semidefinite (psd) matrix A. A special case was the column Nyström approximation, defined to be1We use Matlab index notation to indicate submatrices of A.

(Nys)   \[\hat{A} = A(:,S) \, A(S,S)^{-1} \, A(S,:), \]

where S = \{s_1,\ldots,s_k\} \subseteq \{1,2,\ldots,N\} identifies a subset of k columns of A. Provided k\ll N, this allowed us to approximate all N^2 entries of the matrix A using only the kN entries in columns s_1,\ldots,s_k of A, a huge savings of computational effort!

With the column Nyström approximation presented just as such, many questions remain:

  • Why this formula?
  • Where did it come from?
  • How do we pick the columns s_1,\ldots,s_k?
  • What is the residual A - \hat{A} of the approximation?

In this post, we will answer all of these questions by drawing a connection between Nyström low-rank approximation and the solution of linear systems of equations by Gaussian elimination. The connection between these two seemingly unrelated areas of matrix computations will pay dividends, leading to effective algorithms to compute Nyström approximations by the (partial) Cholesky factorization of a positive (semi)definite matrix and an elegant description of the residual of the Nyström approximation as the Schur complement.

Cholesky: Solving Linear Systems

Suppose we want to solve the system of linear equations Ax = b, where A is a real N\times N invertible matrix and b is a vector of length N. The standard way of doing this in modern practice (at least for non-huge matrices A) is by means of Gaussian elimination/LU factorization. We factor the matrix A as a product A = LU of a lower triangular matrix L and an upper triangular matrix U.2To make this accurate, we usually have to reorder the rows of the matrix A as well. Thus, we actually compute a factorization PA = LU where P is a permutation matrix and L and U are triangular. The system Ax = b is solved by first solving Ly = b for y and then Ux = y for x; the triangularity of L and U makes solving the associated systems of linear equations easy.

For real symmetric positive definite matrix A, a simplification is possible. In this case, one can compute an LU factorization where the matrices L and U are transposes of each other, U = L^\top. This factorization A = LL^\top is known as a Cholesky factorization of the matrix A.

The Cholesky factorization can be easily generalized to allow the matrix A to be complex-valued. For a complex-valued positive definite matrix A, its Cholesky decomposition takes the form A = LL^*, where L is again a lower triangular matrix. All that has changed is that the transpose {}^\top has been replaced by the conjugate transpose {}^*. We shall work with the more general complex case going forward, though the reader is free to imagine all matrices as real and interpret the operation {}^* as ordinary transposition if they so choose.

Schur: Computing the Cholesky Factorization

Here’s one way of computing the Cholesky factorization using recursion. Write the matrix A in block form as

    \[A = \twobytwo{A_{11}}{A_{12}}{A_{21}}{A_{22}}. \]

Our first step will be to “block Cholesky factorize” the matrix A, factoring A as a product of matrices which are only block triangular. Then, we’ll “upgrade” this block factorization into a full Cholesky factorization.

The core idea of Gaussian elimination is to combine rows of a matrix to introduce zero entries. For our case, observe that multiplying the first block row of A by A_{21}A_{11}^{-1} and subtracting this from the second block row introduces a matrix of zeros into the bottom left block of A. (The matrix A_{11} is a principal submatrix of A and is therefore guaranteed to be positive definite and thus invertible.3To directly see A_{11} is positive definite, for instance, observe that since A is positive definite, x^* A_{11}x = \twobyone{x}{0}^* A\twobyone{x}{0} > 0 for every nonzero vector x.) In matrix language,

    \[\twobytwo{I}{0}{-A_{21}A_{11}^{-1}}{I}\twobytwo{A_{11}}{A_{12}}{A_{21}}{A_{22}} = \twobytwo{A_{11}}{A_{12}}{0}{A_{22} - A_{21}A_{11}^{-1}A_{12}}.\]

Isolating A on the left-hand side of this equation by multiplying by

    \[\twobytwo{I}{0}{-A_{21}A_{11}^{-1}}{I}^{-1} = \twobytwo{I}{0}{A_{21}A_{11}^{-1}}{I}\]

yields the block triangular factorization

    \[A = \twobytwo{A_{11}}{A_{12}}{A_{21}}{A_{22}} = \twobytwo{I}{0}{A_{21}A_{11}^{-1}}{I} \twobytwo{A_{11}}{A_{12}}{0}{A_{22} - A_{21}A_{11}^{-1}A_{12}}.\]

We’ve factored A into block triangular pieces, but these pieces are not (conjugate) transposes of each other. Thus, to make this equation more symmetrical, we can further decompose

(1)   \[A = \twobytwo{A_{11}}{A_{12}}{A_{21}}{A_{22}} = \twobytwo{I}{0}{A_{21}A_{11}^{-1}}{I} \twobytwo{A_{11}}{0}{0}{A_{22} - A_{21}A_{11}^{-1}A_{12}} \twobytwo{I}{0}{A_{21}A_{11}^{-1}}{I}^*. \]

This is a block version of the Cholesky decomposition of the matrix A taking the form A = LDL^*, where L is a block lower triangular matrix and D is a block diagonal matrix.

We’ve met the second of our main characters, the Schur complement

(Sch)   \[S = A_{22} - A_{21}A_{11}^{-1}A_{12}. \]

This seemingly innocuous combination of matrices has a tendency to show up in surprising places when one works with matrices.4See my post on the Schur complement for more examples. Its appearance in any one place is unremarkable, but the sheer ubiquity of A_{22} - A_{21}A_{11}^{-1}A_{12} in matrix theory makes it deserving of its special name, the Schur complement. To us for now, the Schur complement is just the matrix appearing in the bottom right corner of our block Cholesky factorization.

The Schur complement enjoys the following property:5This property is a consequence of equation (1) together with the conjugation rule for positive (semi)definiteness, which I discussed in this previous post.

Positivity of the Schur complement: If A=\twobytwo{A_{11}}{A_{12}}{A_{21}}{A_{22}} is positive (semi)definite, then the Schur complement S = A_{22} - A_{21}A_{11}^{-1}A_{12} is positive (semi)definite.

As a consequence of this property, we conclude that both A_{11} and S are positive definite.

With the positive definiteness of the Schur complement in hand, we now return to our Cholesky factorization algorithm. Continue by recursively6As always with recursion, one needs to specify the base case. For us, the base case is just that Cholesky decomposition of a 1\times 1 matrix A is A = A^{1/2} \cdot A^{1/2}. computing Cholesky factorizations of the diagonal blocks

    \[A_{11} = L_{11}^{\vphantom{*}}L_{11}^*, \quad S = L_{22}^{\vphantom{*}}L_{22}^*.\]

Inserting these into the block LDL^* factorization (1) and simplifying gives a Cholesky factorization, as desired:

    \[A = \twobytwo{L_{11}}{0}{A_{21}^{\vphantom{*}}(L_{11}^{*})^{-1}}{L_{22}}\twobytwo{L_{11}}{0}{A_{21}^{\vphantom{*}}(L_{11}^{*})^{-1}}{L_{22}}^* =: LL^*.\]

Voilà, we have obtained a Cholesky factorization of a positive definite matrix A!

By unwinding the recursions (and always choosing the top left block A_{11} to be of size 1\times 1), our recursive Cholesky algorithm becomes the following iterative algorithm: Initialize L to be the zero matrix. For j = 1,2,3,\ldots,N, perform the following steps:

  1. Update L. Set the jth column of L:

        \[L(j:N,j) \leftarrow A(j:N,j)/\sqrt{a_{jj}}.\]

  2. Update A. Update the bottom right portion of A to be the Schur complement:

        \[A(j+1:N,j+1:N)\leftarrow A(j+1:N,j+1:N) - \frac{A(j+1:N,j)A(j,j+1:N)}{a_{jj}}.\]

This iterative algorithm is how Cholesky factorization is typically presented in textbooks.
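Here is a direct Python/NumPy transcription of this iterative algorithm (an illustrative sketch without pivoting or error checking, not a production implementation):

```python
import numpy as np

def cholesky(A):
    """Cholesky factorization A = L L^* by the iterative algorithm above.
    A is assumed Hermitian positive definite."""
    A = np.array(A, dtype=complex)          # working copy, overwritten below
    N = A.shape[0]
    L = np.zeros_like(A)
    for j in range(N):
        # Update L: set the jth column of L
        L[j:, j] = A[j:, j] / np.sqrt(A[j, j].real)
        # Update A: replace the trailing block by its Schur complement
        A[j + 1:, j + 1:] -= np.outer(A[j + 1:, j], A[j, j + 1:]) / A[j, j]
    return L

# Sanity check on a random real positive definite matrix
rng = np.random.default_rng(0)
G = rng.standard_normal((6, 6))
A = G @ G.T + 6 * np.eye(6)
L = cholesky(A)
print(np.allclose(L @ L.conj().T, A))       # True
```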

Nyström: Using Cholesky Factorization for Low-Rank Approximation

Our motivating interest in studying the Cholesky factorization was the solution of linear systems of equations Ax = b for a positive definite matrix A. We can also use the Cholesky factorization for a very different task, low-rank approximation.

Let's first look at things through the lens of the recursive form of the Cholesky factorization. The first step of the factorization was to form the block Cholesky factorization

    \[A = \twobytwo{I}{0}{A_{21}A_{11}^{-1}}{I} \twobytwo{A_{11}}{0}{0}{A_{22} - A_{21}A_{11}^{-1}A_{12}} \twobytwo{I}{0}{A_{21}A_{11}^{-1}}{I}^*.\]

Suppose that we choose the top left A_{11} block to be of size k\times k, where k is much smaller than N. The most expensive part of the Cholesky factorization will be the recursive factorization of the Schur complement A_{22} - A_{21}A_{11}^{-1} A_{12}, which is a large matrix of size (N-k)\times (N-k).

To reduce computational cost, we ask the provocative question: What if we simply didn’t factorize the Schur complement? Observe that we can write the block Cholesky factorization as a sum of two terms

(2)   \[A = \twobyone{I}{A_{21}A_{11}^{-1}} A_{11} \twobyone{I}{A_{21}A_{11}^{-1}}^* + \twobytwo{0}{0}{0}{A_{22}-A_{21}A_{11}^{-1}A_{12}}. \]

We can use the first term of this sum as a rank-k approximation to the matrix A. The low-rank approximation, which we can write out more conveniently as

    \[\hat{A} = \twobyone{A_{11}}{A_{21}} A_{11}^{-1} \onebytwo{A_{11}}{A_{12}} = \twobytwo{A_{11}}{A_{12}}{A_{21}}{A_{21}A_{11}^{-1}A_{12}},\]

is the column Nyström approximation (Nys) to A associated with the index set S = \{1,2,3,\ldots,k\} and is the final of our three titular characters. The residual of the Nyström approximation is the second term in (2), which is none other than the Schur complement (Sch), padded by rows and columns of zeros:

    \[A - \hat{A} = \twobytwo{0}{0}{0}{A_{22}-A_{21}A_{11}^{-1}A_{12}}.\]

Observe that the approximation \hat{A} is obtained from the process of terminating a Cholesky factorization midway through algorithm execution, so we say that the Nyström approximation results from a partial Cholesky factorization of the matrix A.

Summing things up:

If we perform a partial Cholesky factorization on a positive (semi)definite matrix, we obtain a low-rank approximation known as the column Nyström approximation. The residual of this approximation is the Schur complement, padded by rows and columns of zeros.

The idea of obtaining a low-rank approximation from a partial matrix factorization is very common in matrix computations. Indeed, the optimal low-rank approximation to a real symmetric matrix is obtained by truncating its eigenvalue decomposition and the optimal low-rank approximation to a general matrix is obtained by truncating its singular value decomposition. While the column Nyström approximation is not the optimal rank-k approximation to A (though it does satisfy a weaker notion of optimality, as discussed in this previous post), it has a killer feature not possessed by the optimal approximation:

The column Nyström approximation is formed from only k columns from the matrix A. A column Nyström approximation approximates an N\times N matrix by only reading a fraction of its entries!

Unfortunately, there’s no free lunch here. The column Nyström approximation is only a good low-rank approximation if the Schur complement has small entries. In general, this need not be the case. Fortunately, we can improve our situation by means of pivoting.

Our iterative Cholesky algorithm first performs elimination using the entry in position (1,1) followed by position (2,2) and so on. There’s no need to insist on this exact ordering of elimination steps. Indeed, at each step of the Cholesky algorithm, we can choose whichever diagonal position (j,j) we want to perform elimination with. The entry we choose to perform elimination with is called a pivot.

Obtaining good column Nyström approximations requires identifying a good choice of pivots that reduces the size of the entries of the Schur complement at each step of the algorithm. With general pivot selection, an iterative algorithm for computing a column Nyström approximation by partial Cholesky factorization proceeds as follows: Initialize an N\times k matrix F to store the column Nyström approximation \hat{A} = FF^*, in factored form. For j = 1,2,\ldots,k, perform the following steps:

  1. Select pivot. Choose a pivot s_j.
  2. Update the approximation. F(:,j) \leftarrow A(:,s_j) / \sqrt{a_{s_js_j}}.
  3. Update the residual. A \leftarrow A - \frac{A(:,s_j)A(s_j,:)}{a_{s_js_j}}.

This procedure results in the Nyström approximation (Nys) with column set S = \{s_1,\ldots,s_k\}:

    \[\hat{A} = FF^* = A(:,S) \, A(S,S)^{-1} \, A(S,:).\]

The pivoted Cholesky steps 1–3 require updating the entire matrix A at every step. With a little more cleverness, we can optimize this procedure to only update the entries of the matrix A we need to form the approximation \hat{A} = FF^*. See Algorithm 2 in this preprint by my coauthors and me for details. A sketch of the simple (unoptimized) procedure follows.
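
Here is a minimal NumPy sketch of this simple procedure, with the pivot selection left as a plug-in function; the assert at the end checks the identity \hat{A} = FF^* = A(:,S)\,A(S,S)^{-1}\,A(S,:) stated above.

    import numpy as np

    def partial_cholesky(A, k, pick_pivot):
        """Partial Cholesky / column Nystrom approximation A ~= F F^*.

        A is psd, k is the approximation rank, and pick_pivot(d) returns a pivot
        index given the diagonal d of the current residual."""
        A = np.array(A, dtype=float)                      # residual; overwritten in place
        N = A.shape[0]
        F = np.zeros((N, k))
        S = []
        for j in range(k):
            s = pick_pivot(np.diag(A))                    # 1. select pivot
            F[:, j] = A[:, s] / np.sqrt(A[s, s])          # 2. update the approximation
            A -= np.outer(A[:, s], A[:, s]) / A[s, s]     # 3. update the residual
            S.append(s)
        return F, S

    # Sanity check: the output matches the Nystrom formula A(:,S) A(S,S)^{-1} A(S,:)
    rng = np.random.default_rng(0)
    G = rng.standard_normal((50, 5))
    A = G @ G.T + 1e-3 * np.eye(50)                       # psd test matrix
    greedy = lambda d: int(np.argmax(d))                  # one possible pivot rule (more below)
    F, S = partial_cholesky(A, 3, greedy)
    nystrom = A[:, S] @ np.linalg.solve(A[np.ix_(S, S)], A[S, :])
    assert np.allclose(F @ F.T, nystrom)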

How should we choose the pivots? Two simple strategies immediately suggest themselves:

  • Uniformly random. At each step j, select s_j uniformly at random from among the un-selected pivot indices.
  • Greedy.7The greedy pivoting selection is sometimes known as diagonal pivoting or complete pivoting in the numerical analysis literature. At each step j, select s_j to be the position of the largest diagonal entry of the current residual A:

        \[s_j = \argmax_{1\le k\le N} a_{kk}.\]

The greedy strategy often (but not always) works well, and the uniformly random approach can work surprisingly well if the matrix A is “incoherent”, with all rows and columns of the matrix possessing “similar importance”.

Despite often working fairly well, both the uniform and greedy schemes can fail significantly, producing very low-quality approximations. My research (joint with Yifan Chen, Joel A. Tropp, and Robert J. Webber) has investigated a third strategy striking a balance between these two approaches:

  • Diagonally weighted random. At each step j, select s_j at random with probability proportional to the corresponding diagonal entry of the current residual matrix:

        \[\mathbb{P} \{ s_j = k \} = \frac{a_{kk}}{\operatorname{tr} A}.\]

Our paper provides theoretical analysis and empirical evidence showing that this diagonally weighted random pivot selection (which we call randomly pivoted Cholesky, aka RPCholesky) performs well at approximating all positive semidefinite matrices A, even those for which uniform random and greedy pivot selection fail. The success of this approach can be seen in the examples in Figure 1 of the paper, which show that RPCholesky can produce much smaller errors than the greedy and uniform methods. A rough sketch of the pivot rule appears below.
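
As a rough illustration (not the optimized implementation from the paper), each of the three pivot rules can be written as a small function that maps the diagonal of the current residual to a pivot index and plugged into the partial_cholesky sketch above:

    import numpy as np

    rng = np.random.default_rng(0)

    def uniform(d):
        """Uniformly random pivot among indices not yet eliminated (residual diagonal > 0)."""
        return int(rng.choice(np.flatnonzero(d > 0)))

    def greedy(d):
        """Largest diagonal entry of the current residual."""
        return int(np.argmax(d))

    def rpcholesky(d):
        """Diagonally weighted random pivot: P{s = i} proportional to d_i."""
        w = np.maximum(d, 0)                  # clip tiny negative roundoff in the residual
        return int(rng.choice(len(d), p=w / w.sum()))

    # Example usage with the sketch above: F, S = partial_cholesky(A, k, rpcholesky)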

Conclusions

In this post, we’ve seen that a column Nyström approximation can be obtained from a partial Cholesky decomposition. The residual of the approximation is the Schur complement. I hope you agree that this is a very nice connection between these three ideas. But beyond its mathematical beauty, why do we care about this connection? Here are my reasons for caring:

  • Analysis. Cholesky factorization and the Schur complement are very well-studied in matrix theory. We can use known facts about Cholesky factorization and Schur complements to prove things about the Nyström approximation, as we did when we invoked the positivity of the Schur complement.
  • Algorithms. Cholesky factorization-based algorithms like randomly pivoted Cholesky are effective in practice at producing high-quality column Nyström approximations.

On a broader level, our tale of Nyström, Cholesky, and Schur demonstrates that there are rich connections between low-rank approximation and (partial versions of) classical matrix factorizations like LU (with partial pivoting), Cholesky, QR, eigendecomposition, and SVD for full-rank matrices. These connections can be vital to analyzing low-rank approximation algorithms and developing improvements.

Low-Rank Approximation Toolbox: Nyström Approximation

Welcome to a new series for this blog, Low-Rank Approximation Toolbox. As I discussed in a previous post, many matrices we encounter in applications are well-approximated by a matrix with a small rank. Efficiently computing low-rank approximations has been a major area of research, with applications in everything from classical problems in computational physics and signal processing to trendy topics like data science. In this series, I want to explore some broadly useful algorithms and theoretical techniques in the field of low-rank approximation.

I want to begin this series by talking about one of the fundamental types of low-rank approximation, the Nyström approximation of an N\times N (real symmetric or complex Hermitian) positive semidefinite (psd) matrix A. Given an arbitrary N\times k “test matrix” \Omega, the Nyström approximation is defined to be

(1)   \[A\langle \Omega\rangle := A\Omega \, (\Omega^*A\Omega)^{-1} \, \Omega^*A. \]

This formula is sensible whenever \Omega^*A\Omega is invertible; if \Omega^*A\Omega is not invertible, then the inverse {}^{-1} should be replaced by the Moore–Penrose pseudoinverse {}^\dagger. For simplicity, I will assume that \Omega^* A \Omega is invertible in this post, though everything we discuss will continue to work if this assumption is dropped. I use {}^* to denote the conjugate transpose of a matrix, which agrees with the ordinary transpose {}^\top for real matrices. I will use the word self-adjoint to refer to a matrix which satisfies A=A^*.

The Nyström approximation (1) answers the question

What is the “best” rank-k approximation to the psd matrix A provided only with the matrix–matrix product A\Omega, where \Omega is a known N\times k matrix (k\ll N)?

Indeed, if we let Y = A\Omega, we observe that the Nyström approximation can be written entirely using Y and \Omega:

    \[A\langle \Omega\rangle = Y \, (\Omega^* Y)^{-1}\, Y^*.\]

This is the central advantage of the Nyström approximation: to compute it, the only access to the matrix A I need is the ability to multiply the matrices A and \Omega. In particular, I only need a single pass over the entries of A to compute the Nyström approximation. This allows the Nyström approximation to be used in settings where other low-rank approximations wouldn’t work, such as when A is streamed to me as a sum of matrices that must be processed as they arrive and then discarded.
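
Concretely, here is a minimal NumPy sketch of forming the Nyström approximation from the single product Y = A\Omega; the least-squares solve plays the role of the (pseudo)inverse of \Omega^*Y, and the test matrix A below is a made-up example with rapidly decaying eigenvalues.

    import numpy as np

    def nystrom(Y, Omega):
        """Nystrom approximation A<Omega> = Y (Omega^* Y)^{-1} Y^* from Y = A @ Omega."""
        core = Omega.conj().T @ Y                          # Omega^* A Omega, a small k x k matrix
        return Y @ np.linalg.lstsq(core, Y.conj().T, rcond=None)[0]

    rng = np.random.default_rng(0)
    N, k = 500, 20
    U, _ = np.linalg.qr(rng.standard_normal((N, N)))
    A = (U * 0.5 ** np.arange(N)) @ U.T                    # psd matrix with decaying eigenvalues
    Omega = rng.standard_normal((N, k))
    Y = A @ Omega                                          # the only access to A that we need
    A_hat = nystrom(Y, Omega)
    print(np.linalg.norm(A - A_hat, 'fro') / np.linalg.norm(A, 'fro'))   # small relative error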

Choosing the Test Matrix

Every choice of N\times k test matrix \Omega defines a rank-k Nyström approximation1Actually, A\langle \Omega\rangle is only rank at most k. For this post, we will use rank-k to mean “rank at most k“. A\langle \Omega\rangle by (1). Unfortunately, the Nyström approximation won’t be a good low-rank approximation for every choice of \Omega. For an example of what can go wrong, if we pick \Omega to have columns selected from the eigenvectors of A with small eigenvalues, the approximation A\langle \Omega\rangle will be quite poor.

The very best choice of \Omega would be the k eigenvectors associated with the largest eigenvalues. Unfortunately, computing the eigenvectors to high accuracy is computationally costly. Fortunately, we can get decent low-rank approximations out of much simpler \Omega‘s:

  1. Random: Perhaps surprisingly, we get a fairly good low-rank approximation out of just choosing \Omega to be a random matrix, say, populated with statistically independent standard normal random entries. Intuitively, a random matrix is likely to have columns with meaningful overlap with the large-eigenvalue eigenvectors of A (and indeed with any k fixed orthonormal vectors). One can also pick more exotic kinds of random matrices, which can have computational benefits. (All three options are sketched in code after this list.)
  2. Random then improve: The more similar the test matrix \Omega is to the large-eigenvalue eigenvectors of A, the better the low-rank approximation will be. Therefore, it makes sense to use the power method (usually called subspace iteration in this context) to improve a random initial test matrix \Omega_{\rm init} to be closer to the eigenvectors of A.2Even better than subspace iteration is block Krylov iteration. See section 11.6 of the following survey for details.
  3. Column selection: If \Omega consists of columns i_1,i_2,\ldots,i_k of the identity matrix, then A\Omega just consists of columns i_1,\ldots,i_k of A: In MATLAB notation,

        \[A(:,\{i_1,\ldots,i_k\}) = A\Omega \quad \text{for}\quad \Omega = I(:,\{i_1,i_2,\ldots,i_k\}).\]

    This is highly appealing as it allows us to approximate the matrix A by only reading a small fraction of its entries (provided k\ll N)! Producing a good low-rank approximation requires selecting the right column indices i_1,\ldots,i_k (usually under the constraint of reading a small number of entries from A). In my research with Yifan Chen, Joel A. Tropp, and Robert J. Webber, I’ve argued that the most well-rounded algorithm for this task is a randomly pivoted partial Cholesky decomposition.
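
To make the three options concrete, here is a short NumPy sketch constructing each kind of test matrix; the matrix A and the column indices are placeholders for illustration only.

    import numpy as np

    rng = np.random.default_rng(0)
    N, k = 1000, 25
    G = rng.standard_normal((N, N))
    A = G @ G.T                                   # placeholder psd matrix, just for illustration

    # 1. Random: a standard Gaussian test matrix
    Omega_random = rng.standard_normal((N, k))

    # 2. Random then improve: one step of subspace iteration, Omega <- orth(A @ Omega)
    Omega_improved, _ = np.linalg.qr(A @ Omega_random)

    # 3. Column selection: columns i_1, ..., i_k of the identity, so A @ Omega = A(:, {i_1,...,i_k}).
    #    The indices below are arbitrary placeholders; choosing them well is the whole game.
    cols = rng.choice(N, size=k, replace=False)
    Omega_columns = np.eye(N)[:, cols]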

The Projection Formula

Now that we’ve discussed the choice of test matrix, we shall explore the quality of the Nyström approximation as measured by the size of the residual A - A\langle \Omega\rangle. As a first step, we shall show that the residual is psd. This means that A\langle \Omega\rangle is an underapproximation to A.

The positive semidefiniteness of the residual follows from the following projection formula for the Nyström approximation:

    \[A\langle \Omega \rangle = A^{1/2} P_{A^{1/2}\Omega} A^{1/2}.\]

Here, P_{A^{1/2}\Omega} denotes the orthogonal projection onto the column space of the matrix A^{1/2}\Omega. To deduce the projection formula, we break down A as A = A^{1/2}\cdot A^{1/2} in (1):

    \[A\langle \Omega\rangle = A^{1/2} \left( A^{1/2}\Omega \left[ (A^{1/2}\Omega)^* A^{1/2}\Omega \right]^{-1} (A^{1/2}\Omega)^* \right) A^{1/2}.\]

The fact that the parenthesized quantity is P_{A^{1/2}\Omega} can be verified in a variety of ways, such as by QR factorization.3Let A^{1/2} \Omega = QR, where Q has orthonormal columns and R is square and upper triangular. The orthogonal projection is P_{A^{1/2}\Omega} = QQ^*. The parenthesized expression is (QR)(R^*Q^*QR)^{-1}R^*Q^* = QRR^{-1}R^{-*}R^*Q^* = QQ^* = P_{A^{1/2}\Omega}.

With the projection formula in hand, we easily obtain the following expression for the residual:

    \[A - A\langle \Omega\rangle = A^{1/2} (I - P_{A^{1/2}\Omega}) A^{1/2}.\]

To show that this residual is psd, we make use of the conjugation rule.

Conjugation rule: For a matrix B and a self-adjoint matrix H, if H is psd then B^*HB is psd. If B is invertible, then the converse holds: if B^*HB is psd, then H is psd.

The matrix I - P_{A^{1/2}\Omega} is an orthogonal projection and therefore psd. Thus, by the conjugation rule, the residual of the Nyström approximation is psd:

    \[A - A\langle \Omega\rangle = \left(A^{1/2}\right)^* (I-P_{A^{1/2}\Omega})A^{1/2} \quad \text{is psd}.\]
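
As a quick numerical sanity check of the projection formula and the positive semidefiniteness of the residual, here is a small NumPy sketch (the matrices are arbitrary examples):

    import numpy as np

    rng = np.random.default_rng(0)
    N, k = 100, 10
    G = rng.standard_normal((N, N))
    A = G @ G.T + np.eye(N)                        # positive definite test matrix
    Omega = rng.standard_normal((N, k))

    # Nystrom approximation and its residual
    Y = A @ Omega
    A_hat = Y @ np.linalg.solve(Omega.T @ Y, Y.T)
    residual = A - A_hat

    # Projection formula: A<Omega> = A^{1/2} P A^{1/2}, P the projection onto range(A^{1/2} Omega)
    evals, evecs = np.linalg.eigh(A)
    Asqrt = evecs @ np.diag(np.sqrt(evals)) @ evecs.T
    Q, _ = np.linalg.qr(Asqrt @ Omega)
    assert np.allclose(A_hat, Asqrt @ (Q @ Q.T) @ Asqrt)

    # The residual is psd: its smallest eigenvalue is nonnegative (up to roundoff)
    print(np.linalg.eigvalsh(residual).min())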

Optimality of the Nyström Approximation

There’s a question we’ve been putting off that can’t be deferred any longer:

Is the Nyström approximation actually a good low-rank approximation?

As we discussed earlier, the answer to this question depends on the test matrix \Omega. Different choices for \Omega give different approximation errors. See the following papers for Nyström approximation error bounds with different choices of \Omega. While the Nyström approximation can be better or worse depending on the choice of \Omega, what is true about Nyström approximation for every choice of \Omega is that:

The Nyström approximation A\langle \Omega\rangle is the best possible rank-k approximation to A given the information A\Omega.

In precise terms, I mean the following:

Theorem: Out of all self-adjoint matrices \hat{A} spanned by the columns of A\Omega with a psd residual A - \hat{A}, the Nyström approximation has the smallest error as measured by either the spectral or Frobenius norm (or indeed any unitarily invariant norm, see below).

Let’s break this statement down a bit. This result states that the Nyström approximation is the best approximation \hat{A} to A under three conditions:

  1. \hat{A} is self-adjoint.
  2. \hat{A} is spanned by the columns of A\Omega.

I find these first two requirements to be natural. Since A is self-adjoint, it makes sense to require our approximation \hat{A} to be as well. The stipulation that \hat{A} is spanned by the columns of A\Omega seems like a very natural requirement given that we want to think of approximations which only use the information A\Omega. Additionally, requirement 2 ensures that \hat{A} has rank at most k, so we are really only considering low-rank approximations to A.

The last requirement is less natural:

  3. The residual A - \hat{A} is psd.

This is not an obvious requirement to impose on our approximation. Indeed, it was a nontrivial calculation using the projection formula to show that the Nyström approximation itself satisfies this requirement! Nevertheless, this third stipulation is required to make the theorem true. The Nyström approximation (1) is the best “underapproximation” to the matrix A in the span of A\Omega.

Intermezzo: Unitarily Invariant Norms and the Psd Order

To prove our theorem about the optimality of the Nyström approximation, we shall need two ideas from matrix theory: unitarily invariant norms and the psd order. We shall briefly describe each in turn.

A norm \left\|\cdot\right\|_{\rm UI} defined on the set of N\times N matrices is said to be unitarily invariant if the norm of a matrix does not change upon left- or right-multiplication by a unitary matrix:

    \[\left\|UBV\right\|_{\rm UI} = \left\|B\right\|_{\rm UI} \quad \text{for all unitary matrices $U$ and $V$.}\]

Recall that a unitary matrix U (called a real orthogonal matrix if U is real-valued) is one that obeys U^*U = UU^* = I. Unitary matrices preserve the Euclidean lengths of vectors, which makes the class of unitarily invariant norms highly natural. Important examples include the spectral, Frobenius, and nuclear matrix norms.

The unitarily invariant norm of a matrix B depends entirely on its singular values \sigma(B). For instance, the spectral, Frobenius, and nuclear norms take the forms

    \begin{align*}\left\|B\right\|_{\rm op} &= \sigma_1(B),& &\text{(spectral)} \\ \left\|B\right\|_{\rm F} &= \sqrt{\sum_{j=1}^N (\sigma_j(B))^2},& &\text{(Frobenius)} \\ \left\|B\right\|_{*} &= \sum_{j=1}^N \sigma_j(B).& &\text{(nuclear)}\end{align*}

In addition to being entirely determined by the singular values, unitarily invariant norms are non-decreasing functions of the singular values: If the jth singular value of B is larger than the jth singular value of C for 1\le j\le N, then \left\|B\right\|_{\rm UI}\ge \left\|C\right\|_{\rm UI} for every unitarily invariant norm \left\|\cdot\right\|_{\rm UI}. For more on unitarily invariant norms, see this short and information-packed blog post from Nick Higham.
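
For example, all three norms can be computed directly from the singular values; here is a small NumPy check against the built-in matrix norms:

    import numpy as np

    rng = np.random.default_rng(0)
    B = rng.standard_normal((6, 6))
    sigma = np.linalg.svd(B, compute_uv=False)                              # singular values of B

    assert np.isclose(sigma[0], np.linalg.norm(B, 2))                       # spectral norm
    assert np.isclose(np.sqrt(np.sum(sigma**2)), np.linalg.norm(B, 'fro'))  # Frobenius norm
    assert np.isclose(np.sum(sigma), np.linalg.norm(B, 'nuc'))              # nuclear norm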

Our second ingredient is the psd order (also known as the Loewner order). A self-adjoint matrix A is larger than a self-adjoint matrix H according to the psd order, written A\succeq H, if the difference A-H is psd. As a consequence, A\succeq 0 if and only if A is psd, where 0 here denotes the zero matrix of the same size as A. Using the psd order, the positive semidefiniteness of the Nyström residual can be written as A - A\langle \Omega\rangle \succeq 0.

If A and H are both psd matrices and A is larger than H in the psd order, A\succeq H\succeq 0, it seems natural to expect that A is larger than H in norm. Indeed, this intuitive statement is true, at least when one restricts oneself to unitarily invariant norms.

Psd order and norms. If A\succeq H\succeq 0, then \left\|A\right\|_{\rm UI} \ge \left\|H\right\|_{\rm UI} for every unitarily invariant norm \left\|\cdot\right\|_{\rm UI}.

This fact is a consequence of the following observations:

  • If A\succeq H, then the eigenvalues of A are larger than those of H in the sense that the jth largest eigenvalue of A is larger than the jth largest eigenvalue of H.
  • The singular values of a psd matrix are its eigenvalues.
  • Unitarily invariant norms are non-decreasing functions of the singular values.

Optimality of the Nyström Approximation: Proof

In this section, we’ll prove our theorem showing the Nyström approximation is the best low-rank approximation satisfying properties 1, 2, and 3. To this end, let \hat{A} be any matrix satisfying properties 1, 2, and 3. Because of properties 1 (self-adjointness) and 2 (spanned by columns of A\Omega), \hat{A} can be written in the form

    \[\hat{A} = A\Omega \, T \, (A\Omega)^* = A \Omega \, T \, \Omega^*A,\]

where T is a self-adjoint matrix. To make this more similar to the projection formula, we can factor out A^{1/2} on both sides to obtain

    \[\hat{A} = A^{1/2} (A^{1/2}\Omega\, T\, \Omega^*A^{1/2}) A^{1/2}.\]

To bring this even closer to the projection formula, we can reparametrize by introducing a matrix Q with orthonormal columns whose column space equals that of A^{1/2}\Omega. Under this parametrization, \hat{A} takes the form

    \[\hat{A} = A^{1/2} \,QMQ^*\, A^{1/2} \quad \text{where} \quad M\text{ is self-adjoint}.\]

The residual of this approximation is

(2)   \[A - \hat{A} = A^{1/2} (I - QMQ^*)A^{1/2}. \]

We now make use of the conjugation rule again. To simplify things, we make the assumption that A is invertible. (As an exercise, see if you can adapt this argument to the case when this assumption doesn’t hold!) Since A - \hat{A}\succeq 0 is psd (property 3), the conjugation rule tells us that

    \[I - QMQ^*\succeq 0.\]

What does this observation tell us about M? We can apply the conjugation rule again to conclude

    \[Q^*(I - QMQ^*)Q = Q^*Q - (Q^*Q)M(Q^*Q) = I-M \succeq 0.\]

(Notice that Q^*Q = I since Q has orthonormal columns.)

We are now in a place to show that A - \hat{A}\succeq A - A\langle \Omega\rangle. Indeed,

    \begin{align*}A - \hat{A} - (A-A\langle \Omega\rangle) &= A\langle\Omega\rangle - \hat{A} \\&= A^{1/2}\underbrace{QQ^*}_{=P_{A^{1/2}\Omega}}A^{1/2} - A^{1/2}QMQ^*A^{1/2} \\&=A^{1/2}Q(I-M)Q^*A^{1/2}\\&\succeq 0.\end{align*}

The second line is the projection formula together with the observation that P_{A^{1/2}\Omega} = QQ^*, and the last line is the conjugation rule combined with the fact that I-M is psd. Thus, having shown

    \[A - \hat{A} \succeq A - A\langle\Omega\rangle \succeq 0,\]

we conclude

    \[\|A - \hat{A}\|_{\rm UI} \ge \left\|A - A\langle \Omega\rangle\right\|_{\rm UI} \quad \text{for every unitarily invariant norm $\left\|\cdot\right\|_{\rm UI}$}.\]

Note to Self: Hanson–Wright Inequality

This post is part of a new series for this blog, Note to Self, where I collect together some notes about an idea related to my research. This content may be much more technical than most of the content of this blog and of much less wide interest. My hope in sharing this is that someone will find this interesting and useful for their own work.


This post is about a fundamental tool of high-dimensional probability, the Hanson–Wright inequality. The Hanson–Wright inequality is a concentration inequality for quadratic forms of random vectors—that is, expressions of the form x^\top A x where x is a random vector. Many statements of this inequality in the literature have an unspecified constant c > 0; our goal in this post will be to derive a fairly general version of the inequality with only explicit constants.

The core object of the Hanson–Wright inequality is a subgaussian random variable. A random variable Y is subgaussian if the probability it exceeds a threshold t in magnitude decays as

(1)   \[\mathbb{P}\{|Y|\ge t\} \le \mathrm{e}^{-t^2/a} \quad \text{for some $a>0$ and for all sufficiently large $t$.} \]

The name subgaussian is appropriate as the tail probabilities of Gaussian random variables exhibit the same square-exponential decrease \mathrm{e}^{-t^2/a}.

A (non-obvious) fact is that if Y is subgaussian in the sense (1) and centered (\mathbb{E} Y = 0), then Y‘s cumulant generating function (cgf)

    \[\xi_Y(t) := \log \mathbb{E} \exp(tY)\]

is subquadratic: There is a constant c > 0 (independent of Y and a) for which

(2)   \[\xi_Y(t) \le ca t^2 \quad \text{for all $t\in\mathbb{R}$}. \]

Moreover,1See Proposition 2.5.2 of Vershynin’s High-Dimensional Probability. a subquadratic cgf (2) also implies the subgaussian tail property (1), with a different parameter a > 0.

Since properties (1) and (2) are equivalent (up to a change in the parameter a), we are free to fix a version of property (2) as our definition for a (centered) subgaussian random variable.

Definition (subgaussian random variable): A centered random variable X is said to be v-subgaussian or subgaussian with variance proxy v if its cgf is subquadratic:

(3)   \[\xi_X(t) \le \frac{1}{2} vt^2 \quad \text{for all $t\in\mathbb{R}$.} \]

For instance, a mean-zero Gaussian random variable X with variance \sigma^2 has cgf

(4)   \[ \xi_X(t) = \frac{1}{2} \sigma^2 t^2,  \]

and is thus subgaussian with variance proxy v = \sigma^2 equal to its variance.

Here is a statement of the Hanson–Wright inequality as it typically appears with unspecified constants (see Theorem 6.2.1 of Vershynin’s High-Dimensional Probability):

Theorem (Hanson–Wright): Let x be a random vector with independent centered v-subgaussian entries and let A be a square matrix. Then

    \[\mathbb{P}\left\{\left|x^\top Ax - \mathbb{E} \left[x^\top A x\right]\right|\ge t \right\} \le 2\exp\left(- \frac{c\cdot t^2}{v^2\left\|A\right\|_{\rm F}^2 + v\left\|A\right\|t} \right),\]

where c>0 is a constant (not depending on v, x, t, or A).2Here, \|\cdot\|_{\rm F} and \|\cdot\| denote the Frobenius and spectral norms.

This is exactly the same type of concentration as provided by Bernstein’s inequality (which I discussed in my post on concentration inequalities). In particular, for small deviations t, the tail probabilities are subgaussian with variance proxy \approx v^2\left\|A\right\|_{\rm F}^2:

    \[\mathbb{P}\left\{\left|x^\top Ax - \mathbb{E}\left[x^\top Ax\right]\right|\ge t \right\} \stackrel{\text{small $t$}}{\lessapprox} 2\exp\left(- \frac{c\cdot t^2}{v^2\left\|A\right\|_{\rm F}^2} \right)\]

For large deviations t, this switches to subexponential tail probabilities with decay rate \approx v\|A\|:

    \[\mathbb{P}\left\{\left|x^\top Ax - \mathbb{E}\left[x^\top Ax\right]\right|\ge t \right\} \stackrel{\text{large $t$}}{\lessapprox} 2\exp\left(- \frac{c\cdot t}{v\|A\|} \right).\]

Mediating these two parameter regimes are the size of the matrix A, as measured by its Frobenius and spectral norms, and the degree of subgaussianity of x, measured by the variance proxy v.

Diagonal-Free Hanson–Wright

Now we come to a first version of the Hanson–Wright inequality with explicit constants, first for a matrix which is diagonal-free—that is, having all zeros on the diagonal. I obtained this version of the inequality myself, though I am very sure that this version of the inequality or an improvement thereof appears somewhere in the literature.

Theorem (Hanson–Wright, explicit constants, diagonal-free): Let x be a random vector with independent centered v-subgaussian entries and let A be a diagonal-free square matrix. Then we have the cgf bound

    \[\xi_{x^\top Ax}(t) \le \frac{16v^2\left\|A\right\|_{\rm F}^2\, t^2}{2(1-4v\left\|A\right\|t)}.\]

As a consequence, we have the concentration bound

    \[\mathbb{P} \{ x^\top A x \ge t \} \le \exp\left( -\frac{t^2/2}{16v^2 \left\|A\right\|_{\rm F}^2+4v\left\|A\right\|t} \right).\]

Let us begin proving this result. Our proof will follow the same steps as Vershynin’s proof in High-Dimensional Probability (which in turn is adapted from an article by Rudelson and Vershynin), but taking care to get explicit constants. Unfortunately, proving all of the relevant tools from first principles would easily triple the length of this post, so I make frequent use of results from the literature.

We begin with the decoupling bound (Theorem 6.1.1 in Vershynin’s High-Dimensional Probability), which allows us to replace one x with an independent copy \tilde{x} at the cost of a factor of four:

(5)   \[\xi_{x^\top Ax}(t) \le \xi_{\tilde{x}^\top Ax}(4t). \]

We seek to compare the bilinear form \tilde{x}^\top Ax to the Gaussian bilinear form \tilde{g}^\top Ag where \tilde{g} and g are independent standard Gaussian vectors. We begin with the following cgf bound for the Gaussian quadratic form g^\top Ag:

    \[\xi_{g^\top Ag}(t) \le \frac{\left\|A\right\|_{\rm F}^2 \, t^2}{1-2\|A\|\, t}.\]

This equation is the result of Example 2.12 in Boucheron, Lugosi, and Massart’s Concentration Inequalities. By applying this result to the Hermitian dilation of A in A‘s place, one obtains a similar result for the decoupled bilinear form \tilde{g}^\top Ag:

(6)   \[\xi_{\tilde{g}^\top Ag}(t) \le \frac{\left\|A\right\|_{\rm F}^2 \, t^2}{2(1-\|A\|\, t)}. \]

We now seek to compare \xi_{\tilde{x}^\top Ax}(t) to \xi_{\tilde{g}^\top Ag}(t). To do this, we first evaluate the cgf of \tilde{x}^\top Ax only over the randomness in \tilde{x}. Since we’re only taking an expectation over the random variable \tilde{x}, we can apply the subquadratic cgf condition (3) to obtain

(7)   \[\log \mathbb{E}_{\tilde{x}} \exp(t \, \tilde{x}^\top Ax) = \sum_{i=1}^n \log \mathbb{E}_{\tilde{x}} \exp(t \,\tilde{x}_i (Ax)_i) \le  \frac{1}{2} v \left(\sum_{i=1}^n (Ax)_i^2\right)t^2 \le \frac{1}{2} v\left\|Ax\right\|^2 \, t^2. \]

Now we perform a similar computation for the quantity \tilde{g}^\top Ax in which \tilde{x} has been replaced by the Gaussian vector \tilde{g}:

    \[\log \mathbb{E}_{\tilde{g}} \exp((\sqrt{v} t) \, \tilde{g}^\top Ax) = \frac{1}{2} v \left\|Ax\right\|^2 \, t^2.\]

We stress that this is an equality since the cgf of a Gaussian random variable is given by (4). Thus we can substitute the left-hand side of the above display into the right-hand side of (7), yielding

(8)   \[\log \mathbb{E}_{\tilde{x}} \exp(t \, \tilde{x}^\top Ax) \le \log \mathbb{E}_{\tilde{g}} \exp((\sqrt{v} t) \, \tilde{g}^\top Ax). \]

We now perform this same trick again using the randomness in x:

(9)   \[\log \mathbb{E}_{\tilde{g},x} \exp((\sqrt{v} t) \, \tilde{g}^\top Ax) \le \log \mathbb{E}_{\tilde{g}} \exp \left(\frac{1}{2} v^2 \left\|A^\top \tilde{g}\right\|^2t^2\right) = \log \mathbb{E}_{\tilde{g},g} \exp(v t \, \tilde{g}^\top Ag). \]

Packaging up (8) and (9) gives

(10)   \[\xi_{\tilde{x}^\top Ax}(t)\le \xi_{\tilde{g}^\top Ag}(vt). \]

Combining all these results (5), (6), and (10), we obtain

    \[\xi_{x^\top Ax}(t) \le \xi_{\tilde{x}^\top Ax}(4t) \le \xi_{\tilde{g}^\top Ag}(4vt) \le \frac{16v^2\left\|A\right\|_{\rm F}^2\, t^2}{2(1-4v\left\|A\right\|t)}.\]

This cgf bound implies the desired probability bound as a consequence of the following fact (see Boucheron, Lugosi, and Massart’s Concentration Inequalities page 29 and Exercise 2.8):

Fact (Bernstein concentration from Bernstein cgf bound): Suppose that a random variable X satisfies the cgf bound \xi_X(t) \le \tfrac{vt^2}{2(1-ct)} for 0 < t < 1/c. Then

    \[\mathbb{P} \left\{ X\ge t \right\} \le \exp\left( -\frac{t^2/2}{v+ct} \right).\]

General Hanson–Wright

Now, here’s a more general result (with worse constants) which permits the matrix A to possess a diagonal.

Theorem (Hanson–Wright, explicit constants): Let x be a random vector with independent centered v-subgaussian entries and let A be an arbitrary square matrix. Then we have the cgf bound

    \[\xi_{x^\top Ax-\mathbb{E} [x^\top A x]}(t) \le \frac{40v^2\left\|A\right\|_{\rm F}^2\, t^2}{2(1-8v\left\|A\right\|t)}.\]

As a consequence, we have the concentration bound

    \[\mathbb{P} \{ x^\top A x-\mathbb{E} [x^\top A x] \ge t \} \le \exp\left( -\frac{t^2/2}{40v^2 \left\|A\right\|_{\rm F}^2+8v\left\|A\right\|t} \right).\]

Decompose the matrix A = D+F into its diagonal and off-diagonal portions. For any two random variables X and Y (possibly highly dependent), we can bound the cgf of their sum using the following “union bound”:

(11)   \begin{align*} \xi_{X+Y}(t) &= \log \mathbb{E} \left[\exp(tX)\exp(tY)\right] \\&\le \log \left(\left[\mathbb{E} \exp(2tX)\right]^{1/2}\left[\mathbb{E}\exp(2tY)\right]^{1/2}\right) \\&=\frac{1}{2} \xi_X(2t) + \frac{1}{2}\xi_Y(2t). \end{align*}

The two equality statements are the definition of the cumulant generating function and the inequality is Cauchy–Schwarz.

Using the “union bound”, it is sufficient to obtain bounds for the cgfs of the diagonal and off-diagonal parts x^\top D x - \mathbb{E}[x^\top Ax] and x^\top F x. We begin with the diagonal part. We compute

(12)   \begin{align*}\xi_{x^\top D x - \mathbb{E}[x^\top Ax]}(t) &= \log \mathbb{E} \exp\left(t \sum_{i=1}^n A_{ii}(x_i^2 - \mathbb{E}[x_i^2]) \right) \\ &= \sum_{i=1}^n  \log \mathbb{E} \exp\left((t A_{ii})\cdot(x_i^2 - \mathbb{E}[x_i^2]) \right). \end{align*}

For the cgf of x_i^2 - \mathbb{E}[x_i^2], we use the following bound, taken from Appendix B of the following paper:

    \[\log \mathbb{E} \exp\left(t(x_i^2 - \mathbb{E}[x_i^2]) \right) \le \frac{8v^2t^2}{1-2v|t|}.\]

Substituting this result into (12) gives

(13)   \[\xi_{x^\top D x - \mathbb{E}[x^\top Ax]}(t) \le \sum_{i=1}^n \frac{8v^2|A_{ii}|^2t^2}{1-2v|A_{ii}|t} \le \frac{8v^2\|A\|_{\rm F}^2t^2}{1-2v\|A\|t}\quad \text{for $t>0$}. \]

For the second inequality, we used the facts that \max_i |A_{ii}| \le \|A\| and \sum_i |A_{ii}|^2 \le \|A\|_{\rm F}^2.

We now look at the off-diagonal part x^\top F x. We use a version of the decoupling bound (5) where we compare x^\top F x to \tilde{x}^\top A x, where we’ve both replaced one copy of x with an independent copy and reinstated the diagonal of A (see Remark 6.1.3 in Vershynin’s High-Dimensional Probability):

    \[\xi_{x^\top F x}(t) \le \xi_{\tilde{x}^\top Ax}(4t).\]

We can now just repeat the rest of the argument for the diagonal-free Hanson–Wright inequality, yielding the same conclusion

(14)   \[ \xi_{x^\top Fx}(t) \le \frac{16v^2\left\|A\right\|_{\rm F}^2\, t^2}{2(1-4v\left\|A\right\|t)}.  \]

Combining (11), (13), and (14), we obtain

    \begin{align*}\xi_{x^\top Ax-\mathbb{E} [x^\top A x]}(t) &\le \frac{1}{2} \xi_{x^\top D x - \mathbb{E}[x^\top Ax]}(2t) + \frac{1}{2} \xi_{x^\top Fx}(2t) \\&\le \frac{8v^2\|A\|_{\rm F}^2t^2}{2(1-4v\|A\|t)} + \frac{32v^2\left\|A\right\|_{\rm F}^2\, t^2}{2(1-8v\left\|A\right\|t)} \\&\le \frac{40v^2\left\|A\right\|_{\rm F}^2\, t^2}{2(1-8v\left\|A\right\|t)}.\end{align*}

As above, this cgf bound implies the desired probability bound.
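
As a quick numerical sanity check of the general theorem (not part of the proof), we can compare its bound to Monte Carlo estimates of the tail probability. Standard normal entries are 1-subgaussian by (4), so we take v = 1; the matrix and the thresholds t below are arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(0)
    n, trials, v = 30, 100_000, 1.0

    A = rng.standard_normal((n, n))
    x = rng.standard_normal((trials, n))
    quad = np.einsum('ti,ij,tj->t', x, A, x)        # x^T A x for each trial
    dev = quad - np.trace(A)                        # E[x^T A x] = tr(A) for standard normal x

    fro, op = np.linalg.norm(A, 'fro'), np.linalg.norm(A, 2)
    for t in [10.0, 30.0, 60.0]:
        empirical = np.mean(dev >= t)
        bound = np.exp(-(t**2 / 2) / (40 * v**2 * fro**2 + 8 * v * op * t))
        print(f"t = {t:5.1f}: empirical {empirical:.4f} <= bound {bound:.4f}")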

Chebyshev Polynomials

This post is co-written by my brother, Aidan Epperly, for the second Summer of Math Exposition (SoME2).


Let’s start with a classical problem: connect-the-dots. As we know from geometry, any two points in the plane are connected by one and only one straight line:

But what if we have more than two points? How should we connect them? One natural way is by parabola. Any three points (with distinct x coordinates) are connected by one and only one parabola ax^2+bx+c:

And we can keep extending this. Any n+1 points1The degree of the polynomial is one less than the number of points because a degree-n polynomial is described by n+1 coefficients. For instance, a degree-two parabola ax^2+bx+c has three coefficients a, b, and c. (with distinct x coordinates) are connected by a unique degree-n polynomial a_n x^n + a_{n-1}x^{n-1} + \cdots + a_1 x+a_0:

This game of connect-the-dots with polynomials is known more formally as polynomial interpolation. We can use polynomial interpolation to approximate functions. For instance, we can approximate the function \sin(x) on the interval [-\pi,\pi] to visually near-perfect accuracy by connecting the dots between seven points (-\pi,\sin(-\pi)), (-\tfrac{2}{3}\pi,\sin(-\tfrac{2}{3}\pi)),\ldots ,(\pi,\sin(\pi)):

But something very peculiar happens when we try and apply this trick to the specially chosen function R(x) = 1/(1+25x^2) on the interval [-1,1]:

Unlike \sin(x), the polynomial interpolant for R(x) is terrible! What’s going on? Why doesn’t polynomial interpolation work here? Can we fix it? The answer to the last question is yes and the solution is Chebyshev polynomials.

Reverse-Engineering Chebyshev

The failure of polynomial interpolation for R(x) is known as Runge’s phenomenon after Carl Runge who discovered this curious behavior in 1901. The function R(x) is called the Runge function. Our goal is to find a fix for polynomial interpolation which crushes the Runge phenomenon, allowing us to reliably approximate every sensible2A famous theorem of Faber states that there does not exist any set of points through which the polynomial interpolants converge for every continuous function. This is not as much of a problem as it may seem. As the famous Weierstrass function shows, arbitrary continuous functions can be very weird. If we restrict ourselves to nicer functions, such as Lipschitz continuous functions, there does exist a set of points through which the polynomial interpolant always converges to the underlying function. Thus, in this sense, it is possible to crush the Runge phenomenon. function with polynomial interpolation.

Carl Runge

Let’s put on our thinking caps and see if we can discover the fix for ourselves. In order to discover a fix, we must first identify the problem. Observe that the polynomial interpolant is fine near the center of the interval; it only fails near the boundary.

This leads us to a guess for what the problem might be; maybe we need more interpolation points near the boundaries of the interval. Indeed, tipping our hand a little bit, this turns out to be the case. For instance, connecting the dots for the following set of “mystery points” clustered at the endpoints works just fine:

Let’s experiment a little and see if we can discover a nice set of interpolation points, which we will call x_0,x_1,\ldots,x_n, like this for ourselves. We’ll assume the interpolation points are given by a function x_j = g(j/n) so we can form the polynomial interpolant for any desired polynomial degree n.3Technically, we should insist on the function g(\cdot) being injective so that the points g(0),g(1/n),\ldots,g(1) are guaranteed to be distinct. For instance, if we pick g(t) = 2t^2-1, the points look like this:

Equally spaced points j/n (shown on vertical axis) give rise to
non-equally spaced points g(j/n) (shown on horizontal axis)

How should we pick the function g(\cdot)? First observe that, even for the Runge function, equally spaced interpolation points are fine near the center of the interval. We thus have at least two conditions for our desired n+1 interpolation points:

  1. The interior points should maintain their spacing of roughly 2/n.
  2. The points must cluster near both boundaries.

As a first attempt let’s divide the interval into thirds and halve the spacing of points except in the middle third. This leads to the function

    \begin{equation*}g(x) = \begin{cases} -1+x, & 0\le x < \tfrac{1}{3}, \\-\tfrac{2}{3} + 4\left( x-\tfrac{1}{3}\right), & \tfrac{1}{3} \le x < \tfrac{2}{3}, \\\tfrac{2}{3} + \left( x-\tfrac{2}{3}\right), & \tfrac{2}{3} \le x \le 1.\end{cases}\end{equation*}

These interpolation points initially seem promising, even successfully approximating the Runge function itself.

Unfortunately, this set of points fails when we consider other functions. For instance, if we use the Runge-like function S(x) = 1/(1+900x^2), we see that these interpolation points now lead to a failure to approximate the function at the middle of the interval, even if we use a lot of interpolation points!

Maybe the reason this set of interpolation points didn’t work is that the points are too close at the endpoints. Or maybe we should have divided the interval as quarter–half–quarter rather than thirds. There are lots of variations of this strategy for choosing points to explore and all of them eventually lead to failure on some Runge-flavored example. We need a fundamentally different strategy than simply making the points a times closer within a distance b of the endpoints.

Let’s try a different approach. The closeness of the points at the endpoints is determined by the slope of the function g at 0 and 1. The smaller that |g'(0)| and |g'(1)| are, the more clustered the points will be. For instance,

    \begin{equation*}g'(0) = g'(1) = 2 \quad \text{for equally spaced points}.\end{equation*}

When we halved the distance between points, we instead had

    \begin{equation*}g'(0) = g'(1) = 1 \quad \text{when points at ends were twice as close together}.\end{equation*}

So if we want the points to be much more clustered together, it is natural to require

    \begin{equation*}g'(0) = g'(1) = 0. \quad \text{(new requirement)}\end{equation*}

It also makes sense for the function g(\cdot) to cluster points equally near both endpoints, since we see no reason to prefer one end over the other. Collecting together all the properties we want the function g(\cdot) to have, we get the following list:

  1. g spans the whole range [-1,1],
  2. g'(0) = g'(1) = 0, and
  3. g is symmetric about 1/2, g(1/2+x) = -g(1/2-x).

Mentally scrolling through our Rolodex of friendly functions, a natural one that might come to mind meeting these three criteria is the cosine function, specifically g(t) = \cos(\pi t). This function yields points which are more clustered at the endpoints:

The points

    \begin{equation*}x_j = \cos\left(\frac{j\pi}{n}\right)\end{equation*}

we guessed our way into are known as the Chebyshev points.4Some authors refer to these as the “Chebyshev points of the second kind” or use other names. We follow the convention in Approximation Theory and Approximation Practice (Chapter 1) and refer to these points simply as the Chebyshev points. The Chebyshev points prove themselves perfectly fine for the Runge function:

As we saw earlier, success on the Runge function alone is not enough to declare victory for the polynomial interpolation problem. However, in this case, there are no other bad examples left to find. For any nice function with no jumps, polynomial interpolation through the Chebyshev points works excellently.5Specifically, for a function f(\cdot) which is not too rough (i.e., Lipschitz continuous), the degree-n polynomial interpolant of f(\cdot) through the Chebyshev points converges uniformly to f(\cdot) as n\to\infty.
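
To see the difference numerically, here is a short NumPy experiment interpolating the Runge function through equally spaced and Chebyshev points; both interpolants are computed in the (numerically friendly) Chebyshev basis.

    import numpy as np
    from numpy.polynomial import chebyshev as C

    runge = lambda x: 1 / (1 + 25 * x**2)
    n = 20                                            # polynomial degree (n + 1 points)
    xx = np.linspace(-1, 1, 2001)                     # fine grid for measuring the error

    for name, pts in [("equally spaced", np.linspace(-1, 1, n + 1)),
                      ("Chebyshev",      np.cos(np.arange(n + 1) * np.pi / n))]:
        # Interpolate: solve for the coefficients in the basis T_0, ..., T_n
        coeffs = np.linalg.solve(C.chebvander(pts, n), runge(pts))
        err = np.max(np.abs(C.chebval(xx, coeffs) - runge(xx)))
        print(f"{name:15s} max error: {err:.2e}")
    # Equally spaced points exhibit the Runge phenomenon; the Chebyshev points do not.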

Why the Chebyshev Points?

We’ve guessed our way into a solution to the polynomial interpolation problem, but we still really don’t know what’s going on here. Why are the Chebyshev points much better at polynomial interpolation than equally spaced ones?

Now that we know that the Chebyshev points are a right answer to the interpolation problem,6Indeed, there are other sets of interpolation points through which polynomial interpolation also works well, such as the Legendre points. let’s try and reverse engineer a principled reason for why we would expect them to be effective for this problem. To do this, we ask:

What is special about the cosine function?

From high school trigonometry, we know that \cos \theta gives the x coordinate of a point \theta radians along the unit circle. This means that the Chebyshev points are the x coordinates of equally spaced points on the unit circle (specifically the top half of the unit circle 0\le \theta\le \pi).

Chebyshev points are the x coordinates of equally spaced points on the unit circle.

This raises the question:

What does the interpolating polynomial p(x) look like as a function of the angle \theta?

To convert between x and \theta we simply plug in x = \cos \theta to p(x):

    \begin{equation*}p^\circ(\theta) = p(\cos \theta) = a_n \cos^n \theta + a_{n-1} \cos^{n-1} \theta + \cdots + a_0.\end{equation*}

This new function depending on \theta, which we can call p^\circ(\theta), is a polynomial in the variable \cos \theta. Powers of cosines are not something we encounter every day, so it makes sense to try and simplify things using some trig identities. Here are the first couple powers of cosines:

    \begin{gather*}\cos^2 \theta = \frac{1}{2} + \frac{1}{2} \cos (2\theta), \\\cos^3 \theta = \frac{3}{4}\cos \theta + \frac{1}{4} \cos (3\theta), \\\cos^4 \theta = \frac{3}{8}+ \frac{1}{2} \cos (2\theta) + \frac{1}{8} \cos (4\theta),\\\vdots\end{gather*}

A pattern has appeared! The powers \cos^k \theta always take the form7As a fun exercise, you might want to try and prove this using mathematical induction.

    \begin{equation*}\cos^k \theta = \textnormal{(some number)} \cdot \cos(k\theta) + \textnormal{(some number)} \cdot \cos((k-2)\theta) + \cdots .\end{equation*}

The significance of this finding is that, by plugging in each of these formulas for \cos^k \theta, we see that our polynomial p(x) in the variable x has morphed into a Fourier cosine series in the variable \theta:

    \begin{equation*}p^\circ(\theta) = b_n \cos(n\theta) + b_{n-1} \cos((n-1)\theta) + \cdots + b_1 \cos \theta + b_0.\end{equation*}

For anyone unfamiliar with Fourier series, we highly encourage the 3Blue1Brown video on the subject, which explains why Fourier series are both mathematically beautiful and practically useful. The basic idea is that almost any function can be expressed as a combination of waves (that is, sines and cosines) with different frequencies.8More precisely, we might call these angular frequencies. In our case, this formula tells us that p^\circ(\theta) is equal to b_0 units of frequency 0, plus b_1 units of frequency 1, all the way up to b_n units of frequency n. Different types of Fourier series are appropriate in different contexts. Since our Fourier series only possesses cosines, we call it a Fourier cosine series.

We’ve discovered something incredibly cool:

Polynomial interpolation through the Chebyshev points is equivalent to finding a Fourier cosine series for equally spaced angles \theta.

We’ve arrived at an answer to why the Chebyshev points work well for polynomial interpolation.

Polynomial interpolation through the Chebyshev points is effective because Fourier cosine series through equally spaced angles \theta is effective.

Of course, this explanation just raises the further question: Why do Fourier cosine series give effective interpolants through equally spaced angles \theta? This question has a natural answer as well, involving the convergence theory and aliasing formula (see Section 3 of this paper) for Fourier series. We’ll leave the details to the interested reader for investigation. The success of Fourier cosine series in interpolating equally spaced data is a fundamental observation that underlies the field of digital signal processing. Interpolation through the Chebyshev points effectively hijacks this useful fact and applies it to the seemingly unrelated problem of polynomial interpolation.

Another question this explanation raises is the precise meaning of “effective”. Just how good are polynomial interpolants through the Chebyshev points at approximating functions? As is discussed at length in another post on this blog, the degree to which a function can be effectively approximated is tied to how smooth or rough it is. Chebyshev interpolants approximate nice analytic functions like \sin(x) or 1/(1+25x^2) with exponentially small errors in the number of interpolation points used. By contrast, functions with kinks like |x| are approximated with errors which decay much more slowly. See theorems 2 and 3 on this webpage for more details.

Chebyshev Polynomials

We’ve now discovered a set of points, the Chebyshev points, through which polynomial interpolation works well. But how should we actually compute the interpolating polynomial

    \begin{equation*}p(x) = a_n x^n + a_{n-1}x^{n-1} + \cdots + a_0?\end{equation*}

Again, it will be helpful to draw on the connection to Fourier series. Computations with Fourier series are highly accurate and can be made lightning fast using the fast Fourier transform algorithm. By comparison, directly computing with a polynomial p(x) through its coefficients a_n,a_{n-1},\ldots,a_0 is a computational nightmare.

In the variable \theta, the interpolant takes the form

    \begin{equation*}p^\circ(\theta) = b_n \cos(n\theta) + b_{n-1} \cos((n-1)\theta) + \cdots + b_1 \cos \theta + b_0.\end{equation*}

To convert back to x = \cos \theta, we use the inverse function9One always has to be careful when going from x = \cos \theta to \theta = \arccos x since multiple \theta values get mapped to a single x value by the cosine function. Fortunately, we’re working with variables 0\le \theta\le \pi and -1\le x\le 1, between which the cosine function is one-to-one with the inverse function being given by the arccosine. \theta = \arccos x to obtain:

    \begin{equation*}p(x) = b_n \cos(n\arccos(x)) + \cdots + b_1 \cos(\arccos x) + b_0\end{equation*}

This is a striking formula. Given all of the trigonometric functions, it’s not even obvious that p(x) is a polynomial (it is)!

Despite its seeming peculiarity, this is a very powerful way of representing the polynomial p(x). Rather than expressing p(x) using monomials 1,x,x^2,\ldots, we’ve instead written p(x) as a combination of more exotic polynomials

    \begin{equation*}T_k(x) = \cos(k \arccos x) \quad \text{for $k=0,1,2,\ldots n$}.\end{equation*}

The polynomials T_0(x),T_1(x),T_2(x),\ldots are known as the Chebyshev polynomials,10More precisely, the polynomials T_k(x) are known as the Chebyshev polynomials of the first kind. named after Pafnuty Chebyshev who studied the polynomials intensely.11The letter “T” is used for Chebyshev polynomials since the Russian name “Chebyshev” is often alternately transliterated to English as “Tchebychev”.

Pafnuty Chebyshev

Writing out the first few Chebyshev polynomials shows they are indeed polynomials:

    \begin{gather*}T_0(x) = 1, \\T_1(x) = x, \\T_2(x) = 2x^2 - 1, \\ T_3(x) = 4x^3 - 3x, \\\vdots \end{gather*}

The first four Chebyshev polynomials

To confirm that this pattern does continue, we can use trig identities to derive12Specifically, the recurrence is a consequence of applying the sum-to-product identity to \cos((k+1)\theta) + \cos((k-1)\theta) for \theta = \arccos x. the following recurrence relation for the Chebyshev polynomials:

    \begin{equation*}T_{k+1}(x) = 2x T_k(x) - T_{k-1}(x).\end{equation*}

Since T_0(x) = 1 and T_1(x) = x are both polynomials, every Chebyshev polynomial is as well.
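
The recurrence is easy to check numerically against the trigonometric definition T_k(x) = \cos(k\arccos x); here is a small sketch:

    import numpy as np

    def chebyshev_T(k, x):
        """Evaluate T_k(x) via the three-term recurrence T_{k+1} = 2x T_k - T_{k-1}."""
        T_prev, T_curr = np.ones_like(x), x           # T_0(x) = 1 and T_1(x) = x
        if k == 0:
            return T_prev
        for _ in range(k - 1):
            T_prev, T_curr = T_curr, 2 * x * T_curr - T_prev
        return T_curr

    x = np.linspace(-1, 1, 1000)
    for k in range(8):
        assert np.allclose(chebyshev_T(k, x), np.cos(k * np.arccos(x)))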

We’ve arrived at the following amazing conclusion:

Under the change of variables x = \cos \theta, the Fourier cosine series

    \[p^\circ(\theta) = b_n\cos(n\theta) + \cdots + b_1\cos\theta + b_0\]

becomes the combination of Chebyshev polynomials

    \[p(x) = b_nT_n(x) + \cdots + b_1 T_1(x) + b_0.\]

This simple and powerful observation allows us to apply the incredible speed and accuracy of Fourier series to polynomial interpolation.

Beyond being a neat idea with some nice mathematics, this connection between Fourier series and Chebyshev polynomials is a powerful tool for solving computational problems. Once we’ve accurately approximated a function by a polynomial interpolant, many quantities of interest (derivatives, integrals, zeros) become easy to compute—after all, we just have to compute them for a polynomial! We can also use Chebyshev polynomials to solve differential equations with much faster rates of convergence than other methods. Because of the connection to Fourier series, all of these computations can be done to high accuracy and blazingly fast via the fast Fourier transform, as is done in the software package Chebfun.
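
As a rough illustration of this last point, here is a direct O(n^2) sketch that computes the Chebyshev coefficients of the interpolant through the Chebyshev points via discrete cosine sums; in practice, packages like Chebfun evaluate the same sums with a fast Fourier transform in O(n\log n) operations.

    import numpy as np
    from numpy.polynomial import chebyshev as C

    def cheb_coeffs(f, n):
        """Chebyshev coefficients b_0, ..., b_n of the degree-n interpolant of f
        through the Chebyshev points x_j = cos(j*pi/n), computed by cosine sums."""
        j = np.arange(n + 1)
        fj = f(np.cos(j * np.pi / n))                 # samples at the Chebyshev points
        w = np.ones(n + 1)
        w[0] = w[-1] = 0.5                            # halve the first and last terms of each sum
        k = np.arange(n + 1)[:, None]
        b = (2 / n) * (np.cos(k * j * np.pi / n) @ (w * fj))
        b[0] /= 2
        b[-1] /= 2                                    # b_0 and b_n are also halved
        return b

    # Check: p(x) = sum_k b_k T_k(x) interpolates f at the Chebyshev points
    f = lambda x: np.exp(x) * np.sin(5 * x)
    n = 30
    b = cheb_coeffs(f, n)
    x = np.cos(np.arange(n + 1) * np.pi / n)
    assert np.allclose(C.chebval(x, b), f(x))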

The Chebyshev polynomials have an array of amazing properties and they appear all over mathematics and its applications in other fields. Indeed, we have only scratched the surface of the surface. Many questions remain:

  • What is the connection between the Chebyshev points and the Chebyshev polynomials?
  • The cosine functions 1,\cos \theta,\cos(2\theta),\ldots are orthogonal to each other; are the Chebyshev polynomials orthogonal as well?
  • Are the Chebyshev points the best points for polynomial interpolation? What does “best” even mean in this context?
  • Every “nice” even periodic function has an infinite Fourier cosine series which converges to it. Is there a Chebyshev analog? Is there a relation between the infinite Chebyshev series and the (finite) interpolating polynomial through the Chebyshev points?

All of these questions have beautiful and fairly simple answers. The book Approximation Theory and Approximation Practice is a wonderfully written book that answers all of these questions in its first six chapters, which are freely available on the author’s website. We recommend the book highly to the curious reader.

TL;DR: To get an accurate polynomial approximation, interpolate through the Chebyshev points.
To compute the resulting polynomial, change variables to \theta = \arccos x, compute the Fourier cosine series interpolant, and obtain your polynomial interpolant as a combination of Chebyshev polynomials.