# Neat Randomized Algorithms: RandDiag for Rapidly Diagonalizing Normal Matrices

Consider two complex-valued square matrices $A$ and $B$. The first matrix $A$ is Hermitian, being equal to its conjugate transpose, $A = A^*$. The other matrix $B$ is non-Hermitian, $B \ne B^*$. Let's see how long it takes to compute their eigenvalue decompositions in MATLAB:

>> A = randn(1e3) + 1i*randn(1e3); A = (A+A')/2;
>> tic; [V_A,D_A] = eig(A); toc % Hermitian matrix
Elapsed time is 0.415145 seconds.
>> B = randn(1e3) + 1i*randn(1e3);
>> tic; [V_B,D_B] = eig(B); toc % non-Hermitian matrix
Elapsed time is 1.668246 seconds.

We see that it takes about four times longer to compute the eigenvalues of the non-Hermitian matrix $B$ as compared to the Hermitian matrix $A$. Moreover, the matrix of eigenvectors $V_A$ for a Hermitian matrix is a unitary matrix, $V_A^* V_A = V_A V_A^* = I$.

There is another class of matrices with nice eigenvalue decompositions: normal matrices. A square, complex-valued matrix $C$ is normal if $C^* C = C C^*$. The matrix of eigenvectors $V$ for a normal matrix is also unitary, $V^* V = V V^* = I$. An important class of normal matrices are unitary matrices themselves. A unitary matrix $U$ is always normal since it satisfies $U^* U = I = U U^*$.

Let’s see how long it takes MATLAB to compute the eigenvalue decomposition of a unitary (and thus normal) matrix:

>> U = V_A;                     % unitary, and thus normal, matrix
>> tic; [V_U,D_U] = eig(U); toc % normal matrix
Elapsed time is 2.201017 seconds.

Even longer than it took to compute an eigenvalue decomposition of the non-normal matrix $B$! Can we bring the speed of the normal eigenvalue decomposition closer to that of the Hermitian eigenvalue decomposition?

Here is the start of an idea. Every square matrix $C$ has a Cartesian decomposition:

$$C = H + iS, \qquad H = \frac{C + C^*}{2}, \qquad S = \frac{C - C^*}{2i}.$$

We have written $C$ as a combination of its Hermitian part $H$ and $i$ times the matrix $S$; the product $iS$ is the skew-Hermitian part of $C$. Both $H$ and $S$ are Hermitian matrices. The Cartesian decomposition of a square matrix is analogous to the decomposition of a complex number into its real and imaginary parts.
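For readers who prefer to experiment outside MATLAB, here is a quick Python/NumPy sketch (my own translation, not code from the post) checking the Cartesian decomposition numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
C = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

H = (C + C.conj().T) / 2       # Hermitian part
S = (C - C.conj().T) / 2j      # also Hermitian; i*S is the skew-Hermitian part

assert np.allclose(C, H + 1j * S)    # the Cartesian decomposition
assert np.allclose(H, H.conj().T) and np.allclose(S, S.conj().T)
```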

For a normal matrix $C$, the Hermitian and skew-Hermitian parts commute, $HS = SH$. We know from matrix theory that commuting Hermitian matrices are simultaneously diagonalizable: there exists a unitary matrix $Q$ such that $H = Q D_H Q^*$ and $S = Q D_S Q^*$ for diagonal matrices $D_H$ and $D_S$. Thus, given access to such a $Q$, $C$ has eigenvalue decomposition

$$C = Q (D_H + i D_S) Q^*.$$
Here's a first attempt to turn this insight into an algorithm. First, compute the Hermitian part $H$ of $C$, diagonalize $H = Q D Q^*$, and then see if $Q$ diagonalizes $C$. Let's test this out on a $2\times 2$ example:

>> C = orth(randn(2) + 1i*randn(2)); % unitary matrix
>> H = (C+C')/2;                     % Hermitian part
>> [Q,~] = eig(H);
>> Q'*C*Q                            % check to see if diagonal
ans =
-0.9933 + 0.1152i  -0.0000 + 0.0000i
0.0000 + 0.0000i  -0.3175 - 0.9483i

Yay! We’ve succeeded at diagonalizing the matrix using only a Hermitian eigenvalue decomposition. But we should be careful about declaring victory too early. Here’s a bad example:

>> C = [1 1i;1i 1]; % normal matrix
>> H = (C+C')/2;
>> [Q,~] = eig(H);
>> Q'*C*Q           % oh no! not diagonal
ans =
1.0000 + 0.0000i   0.0000 + 1.0000i
0.0000 + 1.0000i   1.0000 + 0.0000i

What's going on here? The issue is that the Hermitian part $H$ for this matrix has a repeated eigenvalue. Thus, $H$ has multiple different valid matrices of eigenvectors. (In this specific case, $H$ is the identity matrix, so every unitary matrix diagonalizes $H$.) By looking at $H$ alone, we don't know which matrix $Q$ to pick that also diagonalizes $C$.

He and Kressner developed a beautifully simple randomized algorithm called RandDiag to circumvent this failure mode. The idea is straightforward:

1. Form a random linear combination $M = \gamma_1 H + \gamma_2 S$ of the Hermitian and skew-Hermitian parts of $C$, with standard normal random coefficients $\gamma_1$ and $\gamma_2$.
2. Compute a unitary $Q$ that diagonalizes $M$ (a single Hermitian eigendecomposition).

That’s it!
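If it's helpful to follow along outside MATLAB, here is a rough Python/NumPy translation of RandDiag (my own sketch, not code from He and Kressner), tried out on the troublesome matrix from the bad example above:

```python
import numpy as np

def rand_diag(C):
    """RandDiag: diagonalize a normal matrix with one Hermitian eigendecomposition."""
    H = (C + C.conj().T) / 2       # Hermitian part
    S = (C - C.conj().T) / 2j      # Hermitian; i*S is the skew-Hermitian part
    rng = np.random.default_rng()
    M = rng.standard_normal() * H + rng.standard_normal() * S
    _, Q = np.linalg.eigh(M)       # eigenvectors of the random combination
    return Q

# The "bad example": its Hermitian part is the identity, so eig(H) alone is no help
C = np.array([[1, 1j], [1j, 1]])
Q = rand_diag(C)
D = Q.conj().T @ C @ Q             # diagonal with probability 1
```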

To get a sense of why He and Kressner's algorithm works, suppose that $H$ has some repeated eigenvalues and $S$ has all distinct eigenvalues. Given this setup, it seems likely that a random linear combination of $H$ and $S$ will also have all distinct eigenvalues. (It would take very special circumstances for a random linear combination to yield two eigenvalues that are exactly the same!) Indeed, this intuition is a fact: With 100% probability, diagonalizing a Gaussian random linear combination of simultaneously diagonalizable matrices $H$ and $S$ also diagonalizes $H$ and $S$ individually.

MATLAB code for RandDiag is as follows:

function Q = rand_diag(C)
H = (C+C')/2; S = (C-C')/2i;
M = randn*H + randn*S;
[Q,~] = eig(M);
end

When applied to our hard example from before, RandDiag succeeds at giving a matrix $Q$ that diagonalizes $C$:

>> Q = rand_diag(C);
>> Q'*C*Q
ans =
1.0000 - 1.0000i  -0.0000 + 0.0000i
-0.0000 - 0.0000i   1.0000 + 1.0000i

For computing the matrix of eigenvectors of a unitary matrix, RandDiag takes about 0.4 seconds, just as fast as the Hermitian eigendecomposition did.

>> tic; V_U = rand_diag(U); toc
Elapsed time is 0.437309 seconds.

He and Kressner's algorithm is delightful. Ultimately, it uses randomness in only a small way. For most coefficients $\gamma_1, \gamma_2$, a matrix diagonalizing $\gamma_1 H + \gamma_2 S$ will also diagonalize $C$. But, for any specific choice of $\gamma_1, \gamma_2$, there is a possibility of failure. To avoid this possibility, we can just pick $\gamma_1$ and $\gamma_2$ at random. It's really as simple as that.

References: RandDiag was proposed in A simple, randomized algorithm for diagonalizing normal matrices by He and Kressner (2024), building on their earlier work in Randomized Joint Diagonalization of Symmetric Matrices (2022), which considers the general problem of using random linear combinations to (approximately) simultaneously diagonalize (nearly) commuting matrices. RandDiag is an example of a linear algebraic algorithm that uses randomness to put the input into “general position”; see Randomized matrix computations: Themes and variations by Kireeva and Tropp (2024) for a discussion of this, and other, ways of using randomness to design matrix algorithms.

# Neat Randomized Algorithms: Randomized Cholesky QR

As a research area, randomized numerical linear algebra (RNLA) is as hot as ever. To celebrate the exciting work in this space, I'm starting a new series on my blog highlighting cool recent algorithms in the area. At some future point, I might talk about my own work in this series, but for now I'm hoping to use it to showcase some of the awesome work being done by my colleagues.

Given a tall matrix $A$ of size $m \times n$ with $m \ge n$, its (economy-size) QR factorization is a decomposition of the form $A = QR$, where $Q$ is an $m \times n$ matrix with orthonormal columns and $R$ is an $n \times n$ upper triangular matrix. QR factorizations are used to solve least-squares problems and as a computational procedure to orthonormalize the columns of a matrix.

Here’s an example in MATLAB, where we use QR factorization to orthonormalize the columns of a test matrix. It takes about 2.5 seconds to run.

>> A = randn(1e6, 1e2) * randn(1e2) * randn(1e2); % test matrix
>> tic; [Q,R] = qr(A,"econ"); toc
Elapsed time is 2.647317 seconds.

The classical algorithm for computing a QR factorization uses Householder reflectors and is exceptionally numerically stable. Since $Q$ has orthonormal columns, $Q^\top Q = I$ is the identity matrix. Indeed, this relation holds up to a tiny error for the $Q$ computed by Householder QR:

>> norm(Q'*Q - eye(1e2)) % || Q^T Q - I ||
ans =
7.0396e-14

The relative error $\|A - QR\| / \|A\|$ is also small:

>> norm(A - Q*R) / norm(A)
ans =
4.8981e-14

Here is an alternate procedure for computing a QR factorization, known as Cholesky QR:

function [Q,R] = cholesky_qr(A)
R = chol(A'*A);
Q = A / R;       % Q = A * R^{-1}
end

This algorithm works by forming the Gram matrix $A^\top A$, computing its (upper triangular) Cholesky decomposition $A^\top A = R^\top R$, and setting $Q = A R^{-1}$. Cholesky QR is very fast, about 5× faster than Householder QR for this example:

>> tic; [Q,R] = cholesky_qr(A); toc
Elapsed time is 0.536694 seconds.

Unfortunately, Cholesky QR is much less accurate and numerically stable than Householder QR. Here, for instance, is the value of $\|Q^\top Q - I\|$, about ten million times larger than for Householder QR!:

>> norm(Q'*Q - eye(1e2))
ans =
7.5929e-07

What's going on? As we've discussed before on this blog, forming the Gram matrix $A^\top A$ is typically problematic in linear algebraic computations. The “badness” of a matrix is measured by its condition number, defined to be the ratio of its largest and smallest singular values, $\kappa(A) = \sigma_{\max}(A) / \sigma_{\min}(A)$. The condition number of $A^\top A$ is the square of the condition number of $A$, $\kappa(A^\top A) = \kappa(A)^2$, which is at the root of Cholesky QR's loss of accuracy. Thus, Cholesky QR is only appropriate for matrices $A$ that are well-conditioned, having a small condition number $\kappa(A)$.

The idea of randomized Cholesky QR is to use randomness to precondition $A$, producing a matrix $B$ that is well-conditioned. Then, since $B$ is well-conditioned, we can apply ordinary Cholesky QR to it without issue. Here are the steps:

1. Draw a sketching matrix $S$ of size $d \times m$, with $d$ a small multiple of $n$; see these posts of mine for an introduction to sketching.
2. Form the sketch $SA$. This step compresses the very tall $m \times n$ matrix $A$ to the much shorter $d \times n$ matrix $SA$.
3. Compute a QR factorization $SA = Q_1 R_1$ using Householder QR. Since the matrix $SA$ is small, this factorization will be quick to compute.
4. Form the preconditioned matrix $B = A R_1^{-1}$.
5. Apply Cholesky QR to $B$ to compute $B = Q R_2$.
6. Set $R = R_2 R_1$. Observe that $A = B R_1 = Q R_2 R_1 = QR$, as desired.
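The steps above can be sketched in Python/NumPy as follows. This is my own illustration: for self-containedness it uses a dense Gaussian sketching matrix in place of the sparse sign embedding used in the MATLAB code below, but the structure is otherwise the same.

```python
import numpy as np

def cholesky_qr(A):
    R = np.linalg.cholesky(A.T @ A).T        # A^T A = R^T R
    Q = np.linalg.solve(R.T, A.T).T          # Q = A @ inv(R)
    return Q, R

def rand_cholesky_qr(A):
    m, n = A.shape
    rng = np.random.default_rng(0)
    S = rng.standard_normal((2 * n, m)) / np.sqrt(2 * n)  # Gaussian sketch, d = 2n
    R1 = np.linalg.qr(S @ A, mode="r")       # sketch, then (Householder) QR factorize
    B = np.linalg.solve(R1.T, A.T).T         # B = A @ inv(R1) is well-conditioned
    Q, R2 = cholesky_qr(B)
    return Q, R2 @ R1

# An ill-conditioned tall test matrix
A = np.random.default_rng(1).standard_normal((10000, 50)) @ np.diag(np.logspace(0, -6, 50))
Q, R = rand_cholesky_qr(A)
```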

MATLAB code for randomized Cholesky QR is provided below. (Code for the sparsesign subroutine can be found here.)

function [Q,R] = rand_cholesky_qr(A)
S = sparsesign(2*size(A,2),size(A,1),8); % sparse sign embedding
R1 = qr(S*A,"econ"); % sketch and (Householder) QR factorize
B = A / R1; % B = A * R_1^{-1}
[Q,R2] = cholesky_qr(B);
R = R2*R1;
end

Randomized Cholesky QR is still faster than ordinary Householder QR, about 3× faster in our experiment:

>> tic; [Q,R] = rand_cholesky_qr(A); toc
Elapsed time is 0.920787 seconds.

Randomized Cholesky QR greatly improves on ordinary Cholesky QR in terms of accuracy and numerical stability. In fact, the value of $\|Q^\top Q - I\|$ is even smaller than for Householder QR!

>> norm(Q'*Q - eye(1e2))
ans =
1.0926e-14

The relative error $\|A - QR\| / \|A\|$ is small, too! Even smaller than for Householder QR, in fact:

>> norm(A - Q*R) / norm(A)
ans =
4.0007e-16

Like many great ideas, randomized Cholesky QR was developed independently by a number of research groups. A version of this algorithm was first introduced in 2021 by Fan, Guo, and Lin. Similar algorithms were investigated in 2022 and 2023 by Balabanov, Higgins et al., and Melnichenko et al. Check out Melnichenko et al.'s paper in particular, which shows very impressive results for using randomized Cholesky QR to compute column pivoted QR factorizations.

References: Primary references are A Novel Randomized XR-Based Preconditioned CholeskyQR Algorithm by Fan, Guo, and Lin (2021); Randomized Cholesky QR factorizations by Balabanov (2022); Analysis of Randomized Householder-Cholesky QR Factorization with Multisketching by Higgins et al. (2023); CholeskyQR with Randomization and Pivoting for Tall Matrices (CQRRPT) by Melnichenko et al. (2023). The idea of using sketching to precondition tall matrices originates in the paper A fast randomized algorithm for overdetermined linear least-squares regression by Rokhlin and Tygert (2008).

# Note to Self: Gaussian Hypercontractivity

The purpose of this note is to describe the Gaussian hypercontractivity inequality. As an application, we’ll obtain a weaker version of the Hanson–Wright inequality.

## The Noise Operator

We begin our discussion with the following question:

Let $f : \mathbb{R}^n \to \mathbb{R}$ be a function. What happens to $f$, on average, if we perturb its inputs by a small amount of Gaussian noise?

Let's be more specific about our noise model. Let $x \in \mathbb{R}^n$ be an input to the function $f$ and fix a parameter $0 \le \rho \le 1$ (think of $\rho$ as close to 1). We'll define the noise corruption of $x$ to be

(1) $x_\rho = \rho\, x + \sqrt{1 - \rho^2}\, g, \qquad g \sim \text{Normal}(0, I).$

Here, $\text{Normal}(0, I)$ is the standard multivariate Gaussian distribution. In our definition of $x_\rho$, we both add Gaussian noise $\sqrt{1-\rho^2}\, g$ and shrink the vector $x$ by a factor $\rho$. In particular, we highlight two extreme cases:

• No noise. If $\rho = 1$, then there is no noise and $x_1 = x$.
• All noise. If $\rho = 0$, then there is all noise and $x_0 = g$. The influence of the original vector $x$ has been washed away completely.

The noise corruption (1) immediately gives rise to the noise operator $T_\rho$. (The noise operator is often called the Hermite operator; it is related to the Ornstein–Uhlenbeck semigroup operator by a change of variables.) Let $f : \mathbb{R}^n \to \mathbb{R}$ be a function. The noise operator $T_\rho$ is defined to be:

(2) $T_\rho f(x) = \mathbb{E}\left[ f(x_\rho) \right] = \mathbb{E}_{g \sim \text{Normal}(0,I)}\left[ f\left( \rho\, x + \sqrt{1-\rho^2}\, g \right) \right].$

The noise operator computes the average value of $f$ when evaluated at the noisy input $x_\rho$. Observe that the noise operator maps a function $f$ to another function $T_\rho f$. Going forward, we will write $T_\rho f(x)$ to denote $(T_\rho f)(x)$.

To understand how the noise operator acts on a function $f$, we can write the expectation in the definition (2) as an integral:

$$T_\rho f(x) = \frac{1}{\left( 2\pi (1-\rho^2) \right)^{n/2}} \int_{\mathbb{R}^n} f(y) \exp\left( -\frac{\| y - \rho x \|^2}{2(1-\rho^2)} \right) \, \mathrm{d}y.$$

Here, $\| \cdot \|$ denotes the (Euclidean) length of a vector. We see that $T_\rho f$ is the convolution of $f$ with a Gaussian density. Thus, $T_\rho$ acts to smooth the function $f$.

See below for an illustration. The red solid curve is a function $f$, and the blue dashed curve is $T_\rho f$.

As we decrease $\rho$ from $1$ to $0$, the function $T_\rho f$ is smoothed more and more. When we finally reach $\rho = 0$, $T_0 f$ has been smoothed all the way into a constant (namely, the average value of $f$ over the Gaussian distribution).
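As a concrete check (my own Python sketch, not from the post), we can estimate the noise operator by Monte Carlo and compare against a case where $T_\rho f$ can be computed by hand: for $f(x) = x^2$ in one dimension, $T_\rho f(x) = \rho^2 x^2 + (1 - \rho^2)$.

```python
import numpy as np

def noise_op(f, x, rho, n_samples=10**6, seed=0):
    """Monte Carlo estimate of T_rho f(x) = E f(rho*x + sqrt(1-rho^2)*g)."""
    g = np.random.default_rng(seed).standard_normal(n_samples)
    return np.mean(f(rho * x + np.sqrt(1 - rho**2) * g))

# Compare against the closed form T_rho f(x) = rho^2 x^2 + (1 - rho^2)
f = lambda t: t**2
x, rho = 1.5, 0.8
approx = noise_op(f, x, rho)
exact = rho**2 * x**2 + (1 - rho**2)
```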

## Random Inputs

The noise operator converts a function $f$ to another function $T_\rho f$. We can evaluate these two functions at a Gaussian random vector $g \sim \text{Normal}(0, I)$, resulting in two random variables $f(g)$ and $T_\rho f(g)$.

We can think of $T_\rho f(g)$ as a modification of the random variable $f(g)$ where “a fraction of the variance of $f(g)$ has been averaged out”. We again highlight the two extreme cases:

• No noise. If $\rho = 1$, $T_1 f(g) = f(g)$. None of the variance of $f(g)$ has been averaged out.
• All noise. If $\rho = 0$, $T_0 f(g) = \mathbb{E}[f(g)]$ is a constant random variable. All of the variance of $f(g)$ has been averaged out.

Just as decreasing $\rho$ smoothes the function $T_\rho f$ until it reaches a constant function at $\rho = 0$, decreasing $\rho$ makes the random variable $T_\rho f(g)$ more and more “well-behaved” until it becomes a constant random variable at $\rho = 0$. This “well-behavingness” property of the noise operator is made precise by the Gaussian hypercontractivity theorem.

## Moments and Tails

In order to describe the “well-behavingness” properties of the noise operator, we must answer the question:

How can we measure how well-behaved a random variable is?

There are many answers to this question. For this post, we will quantify the well-behavedness of a random variable by using the $L_p$ norm. (Using $L_p$ norms is a common way of measuring the niceness of a function or random variable in applied math. For instance, we can use Sobolev norms or reproducing kernel Hilbert space norms to measure the smoothness of a function in approximation theory, as I've discussed before on this blog.)

The $L_p$ norm of an ($\mathbb{R}$-valued) random variable $X$ is defined to be

(3) $\| X \|_p = \left( \mathbb{E} |X|^p \right)^{1/p}$ for $1 \le p < \infty$.

The $p$th power of the $L_p$ norm, $\|X\|_p^p = \mathbb{E}|X|^p$, is sometimes known as the $p$th absolute moment of $X$.

The $L_p$ norms of random variables control the tails of a random variable—that is, the probability that the random variable is large in magnitude. A random variable with small tails is typically thought of as a “nice” or “well-behaved” random variable. Random quantities with small tails are usually desirable in applications, as they are more predictable—unlikely to take large values.

The connection between tails and $L_p$ norms can be derived as follows. First, write the tail probability for $X$ using $p$th powers:

$$\mathbb{P}\{ |X| \ge t \} = \mathbb{P}\{ |X|^p \ge t^p \}.$$

Then, we apply Markov's inequality, obtaining

(4) $\mathbb{P}\{ |X| \ge t \} \le \frac{\mathbb{E}|X|^p}{t^p} = \left( \frac{\|X\|_p}{t} \right)^p.$

We conclude that a random variable with finite $L_p$ norm (i.e., $\|X\|_p < \infty$) has tails that decay at a rate $t^{-p}$ or faster.

## Gaussian Contractivity

Before we introduce the Gaussian hypercontractivity theorem, let’s establish a weaker property of the noise operator, contractivity.

Proposition 1 (Gaussian contractivity). Choose a noise level $0 \le \rho \le 1$ and a power $p \ge 1$, and let $g \sim \text{Normal}(0, I)$ be a Gaussian random vector. Then $T_\rho$ contracts the $L_p$ norm of $f(g)$:

$$\| T_\rho f(g) \|_p \le \| f(g) \|_p.$$

This result shows that the noise operator makes the random variable $T_\rho f(g)$ no less nice than $f(g)$ was.

Gaussian contractivity is easy to prove. Begin using the definition of the noise operator (2) and the $L_p$ norm (3):

$$\| T_\rho f(g) \|_p^p = \mathbb{E}_{g} \left| \mathbb{E}_{g'} f\left( \rho\, g + \sqrt{1-\rho^2}\, g' \right) \right|^p.$$

Now, we can apply Jensen's inequality to the convex function $t \mapsto |t|^p$, obtaining

$$\| T_\rho f(g) \|_p^p \le \mathbb{E}_{g} \mathbb{E}_{g'} \left| f\left( \rho\, g + \sqrt{1-\rho^2}\, g' \right) \right|^p.$$

Finally, realize that for the independent standard normal random vectors $g$ and $g'$, the combination $\rho\, g + \sqrt{1-\rho^2}\, g'$ is also standard normal. Thus, $\rho\, g + \sqrt{1-\rho^2}\, g'$ has the same distribution as $g$. Thus, using $g$ in its place, we obtain

$$\| T_\rho f(g) \|_p^p \le \mathbb{E}_{g} |f(g)|^p = \| f(g) \|_p^p.$$

Gaussian contractivity (Proposition 1) is proven.

## Gaussian Hypercontractivity

The Gaussian contractivity theorem shows that $T_\rho f(g)$ is no less well-behaved than $f(g)$ is. In fact, $T_\rho f(g)$ is more well-behaved than $f(g)$ is. This is the content of the Gaussian hypercontractivity theorem:

Theorem 2 (Gaussian hypercontractivity): Choose a noise level $0 \le \rho \le 1$ and a power $p \ge 1$, and let $g \sim \text{Normal}(0, I)$ be a Gaussian random vector. Then

$$\| T_\rho f(g) \|_{1 + \rho^{-2}(p-1)} \le \| f(g) \|_p.$$

In particular, for $p = 2$,

$$\| T_\rho f(g) \|_{1 + \rho^{-2}} \le \| f(g) \|_2.$$

We have highlighted the case $p = 2$ because it is the most useful in practice.

This result shows that as we take $\rho$ smaller, the random variable $T_\rho f(g)$ becomes more and more well-behaved, with tails decreasing at a rate

$$\mathbb{P}\{ |T_\rho f(g)| \ge t \} \le \left( \frac{\| f(g) \|_2}{t} \right)^{1 + \rho^{-2}}.$$

The rate of tail decrease becomes faster and faster as $\rho$ becomes closer to zero.

We will prove the Gaussian hypercontractivity at the bottom of this post. For now, we will focus on applying this result.

## Multilinear Polynomials

A multilinear polynomial is a multivariate polynomial in the variables $x_1, \ldots, x_n$ in which none of the variables is raised to a power higher than one. So, for instance,

(5) $p(x_1, x_2) = 1 + 2x_1 + 3x_1 x_2$

is multilinear, but

$$p(x_1, x_2) = 1 + 2x_1 + 3x_1^2 x_2$$

is not multilinear (since $x_1$ is squared).

For multilinear polynomials, we have the following very powerful corollary of Gaussian hypercontractivity:

Corollary 3 (Absolute moments of a multilinear polynomial of Gaussians). Let $p$ be a multilinear polynomial of degree $k$. (That is, at most $k$ variables occur in any monomial of $p$.) Then, for a Gaussian random vector $g \sim \text{Normal}(0, I)$ and for all $q \ge 2$,

$$\| p(g) \|_q \le (q-1)^{k/2} \, \| p(g) \|_2.$$

Let's prove this corollary. The first observation is that the noise operator has a particularly convenient form when applied to a multilinear polynomial. Let's test it out on our example (5) from above. For

$$p(x_1, x_2) = 1 + 2x_1 + 3x_1 x_2,$$

we have

$$T_\rho p(x) = \mathbb{E}\left[ p\left( \rho\, x + \sqrt{1-\rho^2}\, g \right) \right] = 1 + 2(\rho x_1) + 3(\rho x_1)(\rho x_2),$$

since the coordinates of $g$ are independent and mean-zero. We see that the expectation applies to each variable separately, resulting in each $x_i$ replaced by $\rho x_i$. This trend holds in general:

Proposition 4 (noise operator on multilinear polynomials). For any multilinear polynomial $p$, $T_\rho p(x) = p(\rho x)$.
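Proposition 4 is easy to check numerically. Here is a quick Monte Carlo verification in Python (my own sketch, using a small degree-two multilinear polynomial of my own choosing):

```python
import numpy as np

rng = np.random.default_rng(0)
rho = 0.7
p = lambda x: 1 + 2 * x[0] - 3 * x[0] * x[1]      # multilinear, degree 2

x = np.array([0.5, -1.2])
G = rng.standard_normal((10**6, 2))               # Gaussian noise samples
noisy = rho * x + np.sqrt(1 - rho**2) * G
# Monte Carlo estimate of T_rho p(x) = E p(rho*x + sqrt(1-rho^2)*g)
estimate = np.mean(1 + 2 * noisy[:, 0] - 3 * noisy[:, 0] * noisy[:, 1])
```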

We can use Proposition 4 to obtain bounds on the $L_q$ norms of multilinear polynomials of a Gaussian random variable. Indeed, observe that

$$p(g) = T_\rho \left[ p(\rho^{-1} \,\cdot\,) \right](g).$$

Thus, by Gaussian hypercontractivity, we have

$$\| p(g) \|_{1+\rho^{-2}} \le \left\| p(\rho^{-1} g) \right\|_2.$$

The final step of our argument will be to compute $\| p(\rho^{-1} g) \|_2$. Write $p$ as

$$p(x) = \sum_{S \subseteq \{1, \ldots, n\}} a_S \prod_{i \in S} x_i.$$

Since $p$ is multilinear, each monomial is a product of distinct variables. Since $p$ is degree-$k$, $a_S = 0$ for $|S| > k$. The multilinear monomials $\prod_{i \in S} x_i$ are orthonormal with respect to the $L_2$ inner product $\langle h_1, h_2 \rangle = \mathbb{E}[h_1(g)\, h_2(g)]$:

$$\mathbb{E}\left[ \prod_{i \in S} g_i \cdot \prod_{j \in S'} g_j \right] = \begin{cases} 1, & S = S', \\ 0, & S \ne S'. \end{cases}$$

(See if you can see why!) Thus, by the Pythagorean theorem, we have

$$\| p(g) \|_2^2 = \sum_S a_S^2.$$

Similarly, the coefficients of $p(\rho^{-1} \,\cdot\,)$ are $\rho^{-|S|} a_S$. Thus,

$$\| p(\rho^{-1} g) \|_2^2 = \sum_S \rho^{-2|S|} a_S^2 \le \rho^{-2k} \sum_S a_S^2 = \rho^{-2k} \, \| p(g) \|_2^2.$$

Thus, putting all of the ingredients together, we have

$$\| p(g) \|_{1+\rho^{-2}} \le \rho^{-k} \, \| p(g) \|_2.$$

Setting $\rho = (q-1)^{-1/2}$ (equivalently $q = 1 + \rho^{-2}$), Corollary 3 follows.

## Hanson–Wright Inequality

To see the power of the machinery we have developed, let’s prove a version of the Hanson–Wright inequality.

Theorem 5 (suboptimal Hanson–Wright). Let $A$ be a symmetric matrix with zero on its diagonal and let $g \sim \text{Normal}(0, I)$ be a Gaussian random vector. Then

$$\mathbb{P}\left\{ |g^\top A g| \ge t \right\} \le \mathrm{e} \cdot \exp\left( -\frac{t}{\sqrt{2}\, \mathrm{e}\, \|A\|_F} \right) \quad \text{for all } t \ge 0.$$

Hanson–Wright has all sorts of applications in computational mathematics and data science. One direct application is to obtain probabilistic bounds for the error incurred by stochastic trace estimation formulas.

This version of Hanson–Wright is not perfect. In particular, it does not capture the Bernstein-type tail behavior of the classical Hanson–Wright inequality

$$\mathbb{P}\left\{ |g^\top A g| \ge t \right\} \le 2 \exp\left( -c \min\left\{ \frac{t^2}{\|A\|_F^2}, \frac{t}{\|A\|} \right\} \right) \quad \text{for a constant } c > 0.$$

But our suboptimal Hanson–Wright inequality is still pretty good, and it requires essentially no work to prove using the hypercontractivity machinery. The hypercontractivity technique also generalizes to settings where some of the proofs of Hanson–Wright fail, such as multilinear polynomials of degree higher than two.

Let's prove our suboptimal Hanson–Wright inequality. Set $X = g^\top A g$. Since $A$ has zero on its diagonal, $X$ is a multilinear polynomial of degree two in the entries of $g$. The random variable $X$ is mean-zero, and a short calculation shows its $L_2$ norm is

$$\| X \|_2 = \sqrt{2}\, \|A\|_F.$$

Thus, by Corollary 3,

(6) $\| X \|_q \le (q-1)\, \| X \|_2 \le \sqrt{2}\, q\, \|A\|_F$ for $q \ge 2$.

In fact, since the $L_q$ norms are monotone ($\|X\|_q \le \|X\|_2$ for $q \le 2$), (6) holds for $1 \le q \le 2$ as well. Therefore, the standard tail bound for $L_q$ norms (4) gives

(7) $\mathbb{P}\{ |X| \ge t \} \le \left( \frac{\sqrt{2}\, q\, \|A\|_F}{t} \right)^q.$

Now, we must optimize the value of $q \ge 1$ to obtain the sharpest possible bound. To make this optimization more convenient, introduce a parameter

$$\theta = \frac{t}{\sqrt{2}\, \|A\|_F}.$$

In terms of the parameter $\theta$, the bound (7) reads

$$\mathbb{P}\{ |X| \ge t \} \le \left( \frac{q}{\theta} \right)^q.$$

The tail bound is minimized by taking $q = \theta / \mathrm{e}$ (when $\theta \ge \mathrm{e}$; for smaller $\theta$, the claimed bound exceeds one and holds trivially), yielding the claimed result

$$\mathbb{P}\{ |X| \ge t \} \le \exp(-\theta/\mathrm{e}) \le \mathrm{e} \cdot \exp\left( -\frac{t}{\sqrt{2}\, \mathrm{e}\, \|A\|_F} \right).$$
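As a sanity check (my own Python experiment, not from the post), we can verify on a random matrix that a tail bound of the form $\mathbb{P}\{|g^\top A g| \ge t\} \le \mathrm{e} \cdot \exp(-t/(\sqrt{2}\,\mathrm{e}\,\|A\|_F))$ holds empirically; the constants here come from my own bookkeeping of the optimization and may differ slightly from the post's.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n))
A = (A + A.T) / 2
np.fill_diagonal(A, 0)                 # symmetric with zero diagonal
fro = np.linalg.norm(A)                # Frobenius norm

G = rng.standard_normal((10**5, n))    # 10^5 draws of g
X = ((G @ A) * G).sum(axis=1)          # samples of g^T A g

t = 6 * fro
empirical = np.mean(np.abs(X) >= t)    # empirical tail probability
bound = np.e * np.exp(-t / (np.sqrt(2) * np.e * fro))
```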

## Proof of Gaussian Hypercontractivity

Let's prove the Gaussian hypercontractivity theorem. For simplicity, we will stick with the one-dimensional case; the higher-dimensional generalizations follow along similar lines. The key ingredient will be the Gaussian Jensen inequality, which made a prominent appearance in a previous blog post of mine. Here, we will only need the following version:

Theorem 6 (Gaussian Jensen). Let $B : \mathbb{R}^2 \to \mathbb{R}$ be a twice differentiable function and let $(X_1, X_2)$ be jointly Gaussian random variables with covariance matrix $\Sigma$. Then

(8) $\mathbb{E}\, B\left( f_1(X_1), f_2(X_2) \right) \le B\left( \mathbb{E} f_1(X_1), \mathbb{E} f_2(X_2) \right)$

holds for all test functions $f_1$ and $f_2$ if, and only if,

(9) $\Sigma \circ \nabla^2 B(v_1, v_2)$ is negative semidefinite for all $v_1, v_2$.

Here, $\circ$ denotes the entrywise product of matrices and $\nabla^2 B$ is the Hessian matrix of the function $B$.

To me, this proof of Gaussian hypercontractivity using Gaussian Jensen (adapted from Paata Ivanishvili's excellent post) is amazing. First, we reformulate the Gaussian hypercontractivity property a couple of times using some functional analysis tricks. Then we do a short calculation, invoke Gaussian Jensen, and the theorem is proved, almost as if by magic.

### Part 1: Tricks

Let's begin with the “tricks” part of the argument.

Trick 1. To prove Gaussian hypercontractivity holds for all functions $f$, it is sufficient to prove it for all nonnegative functions $f \ge 0$.

Indeed, suppose Gaussian hypercontractivity holds for all nonnegative functions. Then, for any function $f$, apply Jensen's inequality to conclude

$$|T_\rho f(x)| = \left| \mathbb{E}[f(x_\rho)] \right| \le \mathbb{E}|f(x_\rho)| = T_\rho |f|(x).$$

Thus, assuming hypercontractivity holds for the nonnegative function $|f|$, we have

$$\| T_\rho f(g) \|_q \le \| T_\rho |f|(g) \|_q \le \| |f|(g) \|_p = \| f(g) \|_p.$$

Thus, the conclusion of the hypercontractivity theorem holds for $f$ as well, and Trick 1 is proven.

Trick 2. To prove Gaussian hypercontractivity for all $f \ge 0$, it is sufficient to prove the following “bilinearized” Gaussian hypercontractivity result:

$$\mathbb{E}\left[ h(g)\, T_\rho f(g) \right] \le \| f(g) \|_p$$

holds for all $h \ge 0$ with $\| h(g) \|_{q'} \le 1$. Here, $q'$ is the Hölder conjugate to $q$, defined by $1/q + 1/q' = 1$.

Indeed, this follows from the dual characterization of the norm of $T_\rho f(g)$:

$$\| T_\rho f(g) \|_q = \sup_{\| h(g) \|_{q'} \le 1} \mathbb{E}\left[ h(g)\, T_\rho f(g) \right].$$

(This argument may be easier to parse if we view $f$ and $h$ as functions on $\mathbb{R}^n$ equipped with the standard Gaussian measure; the displayed identity is just duality for the $L_q$ norm.) Trick 2 is proven.

Trick 3. Let $(g, \tilde g)$ be a pair of standard Gaussian random vectors with correlation $\rho$, $\mathbb{E}[g_i \tilde g_j] = \rho\, \delta_{ij}$. Then the bilinearized Gaussian hypercontractivity statement is equivalent to

$$\mathbb{E}\left[ h(\tilde g)\, f(g) \right] \le \| f(g) \|_p \, \| h(\tilde g) \|_{q'}.$$

Indeed, define $\tilde g = \rho\, g + \sqrt{1-\rho^2}\, g'$ for the random vector $g'$ in the definition of the noise operator, so that $\mathbb{E}[h(\tilde g) \mid g] = T_\rho h(g)$. The random vector $\tilde g$ is standard Gaussian and has correlation $\rho$ with $g$, concluding the proof of Trick 3.

Finally, we apply a change of variables as our last trick:

Trick 4. Make the change of variables $u = f^p$ and $v = h^{q'}$, yielding the final equivalent version of Gaussian hypercontractivity:

$$\mathbb{E}\left[ u(g)^{1/p} \, v(\tilde g)^{1/q'} \right] \le \left( \mathbb{E}\, u(g) \right)^{1/p} \left( \mathbb{E}\, v(\tilde g) \right)^{1/q'}$$

for all nonnegative functions $u$ and $v$ (in the appropriate spaces).

### Part 2: Calculation

We recognize this fourth equivalent version of Gaussian hypercontractivity as the conclusion (8) of Gaussian Jensen with

$$B(u, v) = u^{1/p} v^{1/q'}.$$

Thus, to prove Gaussian hypercontractivity, we just need to check the hypothesis (9) of the Gaussian Jensen inequality (Theorem 6).

We now enter the calculation part of the proof. First, we compute the Hessian of $B(u, v) = u^{1/p} v^{1/q'}$:

$$\nabla^2 B = \begin{bmatrix} \frac{1}{p}\left( \frac{1}{p} - 1 \right) u^{1/p - 2} v^{1/q'} & \frac{1}{pq'} u^{1/p - 1} v^{1/q' - 1} \\ \frac{1}{pq'} u^{1/p - 1} v^{1/q' - 1} & \frac{1}{q'}\left( \frac{1}{q'} - 1 \right) u^{1/p} v^{1/q' - 2} \end{bmatrix}.$$

We have written $q'$ for the Hölder conjugate to $q$. By Gaussian Jensen, to prove Gaussian hypercontractivity, it suffices to show that $\Sigma \circ \nabla^2 B$, with $\Sigma = \begin{bmatrix} 1 & \rho \\ \rho & 1 \end{bmatrix}$, is negative semidefinite for all $u, v > 0$. There are a few ways we can make our lives easier. Write this matrix as

$$\Sigma \circ \nabla^2 B = u^{1/p} v^{1/q'} \begin{bmatrix} u^{-1} & 0 \\ 0 & v^{-1} \end{bmatrix} \begin{bmatrix} \frac{1}{p}\left( \frac{1}{p} - 1 \right) & \frac{\rho}{pq'} \\ \frac{\rho}{pq'} & \frac{1}{q'}\left( \frac{1}{q'} - 1 \right) \end{bmatrix} \begin{bmatrix} u^{-1} & 0 \\ 0 & v^{-1} \end{bmatrix}.$$

Scaling by the nonnegative number $u^{1/p} v^{1/q'}$ and conjugation by a diagonal matrix both preserve negative semidefiniteness, so it is sufficient to prove that

$$M = \begin{bmatrix} \frac{1}{p}\left( \frac{1}{p} - 1 \right) & \frac{\rho}{pq'} \\ \frac{\rho}{pq'} & \frac{1}{q'}\left( \frac{1}{q'} - 1 \right) \end{bmatrix}$$

is negative semidefinite. Since the diagonal entries of $M$ are negative, at least one of $M$'s eigenvalues is negative. (Indeed, by the Rayleigh–Ritz variational principle, the smallest eigenvalue of a symmetric matrix $M$ is $\lambda_{\min}(M) = \min_{\|x\| = 1} x^\top M x$. Taking $x$ to be each of the standard basis vectors shows that the smallest eigenvalue of $M$ is smaller than the smallest diagonal entry of $M$.) Therefore, to prove $M$ is negative semidefinite, we can prove that its determinant (= product of its eigenvalues) is nonnegative. We compute

$$\det M = \frac{1}{pq'}\left( \frac{1}{p} - 1 \right)\left( \frac{1}{q'} - 1 \right) - \frac{\rho^2}{p^2 q'^2} = \frac{(p-1)(q'-1) - \rho^2}{p^2 q'^2}.$$

Now, just plug in the values $q = 1 + \rho^{-2}(p-1)$, $q' = q/(q-1)$, $q' - 1 = \rho^2/(p-1)$:

$$(p-1)(q'-1) - \rho^2 = (p-1) \cdot \frac{\rho^2}{p-1} - \rho^2 = 0.$$

Thus, $\det M = 0 \ge 0$. We conclude $M$ is negative semidefinite, proving the Gaussian hypercontractivity theorem.

# Don’t Use Gaussians in Stochastic Trace Estimation

Suppose we are interested in estimating the trace of an $n \times n$ matrix $A$ that can only be accessed through matrix–vector products $x \mapsto Ax$. The classical method for this purpose is the Girard–Hutchinson estimator

$$\hat{\mathrm{tr}} = \frac{1}{m} \sum_{i=1}^m \omega_i^\top A\, \omega_i,$$

where the vectors $\omega_1, \ldots, \omega_m$ are independent, identically distributed (iid) random vectors satisfying the isotropy condition

$$\mathbb{E}\left[ \omega_i \omega_i^\top \right] = I.$$

Examples of vectors satisfying this condition include standard Gaussian random vectors, uniformly random sign vectors, and vectors drawn uniformly from the sphere of radius $\sqrt{n}$.
Stochastic trace estimation has a number of applications: log-determinant computations in machine learning, partition function calculations in statistical physics, generalized cross validation for smoothing splines, and triangle counting in large networks. Several improvements to the basic Girard–Hutchinson estimator have been developed recently. I am partial to XTrace, an improved trace estimator that I developed with my collaborators.

This post addresses the question:

Which distribution should be used for the test vectors for stochastic trace estimation?

Since the Girard–Hutchinson estimator is unbiased, $\mathbb{E}[\hat{\mathrm{tr}}] = \mathrm{tr}(A)$, the variance of $\hat{\mathrm{tr}}$ is equal to the mean-square error. Thus, the lowest variance trace estimate is the most accurate. In my previous post on trace estimation, I discussed formulas for the variance of the Girard–Hutchinson estimator with different choices of test vectors. In that post, I stated the formulas for different choices of test vectors (Gaussian, random signs, sphere) and showed how those formulas could be proven.
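For concreteness, here is a small Python implementation of the Girard–Hutchinson estimator supporting the three distributions discussed in this post (my own sketch, not code from the post):

```python
import numpy as np

def girard_hutchinson(matvec, n, m, dist="sphere", seed=0):
    """Estimate tr(A) from m matvecs with iid isotropic test vectors."""
    rng = np.random.default_rng(seed)
    if dist == "gaussian":
        W = rng.standard_normal((m, n))
    elif dist == "signs":
        W = rng.choice([-1.0, 1.0], size=(m, n))
    else:  # uniform on the sphere of radius sqrt(n)
        W = rng.standard_normal((m, n))
        W *= np.sqrt(n) / np.linalg.norm(W, axis=1, keepdims=True)
    return np.mean([w @ matvec(w) for w in W])

A = np.diag(np.arange(1.0, 101.0))     # trace = 1 + 2 + ... + 100 = 5050
est = girard_hutchinson(lambda v: A @ v, n=100, m=500)
```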

In this post, I will take the opportunity to editorialize on which distribution to pick. The thesis of this post is as follows:

The sphere distribution is essentially always preferable to the Gaussian distribution for trace estimation.

To explain why, let's focus on the case when $A$ is real and symmetric. (The same principles hold in the general case, but the variance formulas are more delicate to state. See my previous post for the formulas.) Let $\lambda_1, \ldots, \lambda_n$ be the eigenvalues of $A$ and define the eigenvalue mean

$$\bar\lambda = \frac{\lambda_1 + \cdots + \lambda_n}{n}.$$
Then the variance of the Girard–Hutchinson estimator with Gaussian test vectors is

$$\mathrm{Var}(\hat{\mathrm{tr}}_{\text{Gauss}}) = \frac{2}{m} \sum_{i=1}^n \lambda_i^2.$$

For vectors drawn from the sphere, we have

$$\mathrm{Var}(\hat{\mathrm{tr}}_{\text{sphere}}) = \frac{2}{m} \cdot \frac{n}{n+2} \sum_{i=1}^n (\lambda_i - \bar\lambda)^2.$$

The sphere distribution improves on the Gaussian distribution in two ways. First, the variance of $\hat{\mathrm{tr}}_{\text{sphere}}$ is smaller than that of $\hat{\mathrm{tr}}_{\text{Gauss}}$ by a factor of $n/(n+2)$. This improvement is quite minor. Second, and more importantly, $\mathrm{Var}(\hat{\mathrm{tr}}_{\text{Gauss}})$ is proportional to the sum of $A$'s squared eigenvalues, whereas $\mathrm{Var}(\hat{\mathrm{tr}}_{\text{sphere}})$ is proportional to the sum of $A$'s squared eigenvalues after having been shifted to be mean-zero!

The difference between Gaussian and sphere test vectors can be large. To see this, consider a matrix $A$ with uniformly distributed eigenvalues and a (Haar orthogonal) random matrix of eigenvectors. For simplicity, since the variance of all Girard–Hutchinson estimates is proportional to $1/m$, we take $m = 1$. Below, we show the variance of the Girard–Hutchinson estimator for different distributions of the test vector. We see that the sphere distribution leads to a trace estimate which has a variance 300× smaller than the Gaussian distribution. For this example, the sphere and random sign distributions are similar.
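Here is a Python experiment in a similar spirit (my own sketch: I take eigenvalues uniform on $[10, 11]$, so the exact variance ratio will differ from the figure's), showing the sphere distribution's large advantage when the eigenvalues have a big mean:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
eigs = 10 + rng.uniform(0, 1, n)       # eigenvalues uniform on [10, 11]: large mean
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))   # random orthogonal eigenvectors
A = Q @ np.diag(eigs) @ Q.T

def one_sample(dist):
    w = rng.standard_normal(n)
    if dist == "sphere":
        w *= np.sqrt(n) / np.linalg.norm(w)
    return w @ A @ w                   # single-sample (m = 1) trace estimate

reps = 2000
var_gauss = np.var([one_sample("gaussian") for _ in range(reps)])
var_sphere = np.var([one_sample("sphere") for _ in range(reps)])
```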

## Which Distribution Should You Use: Signs vs. Sphere

The main point of this post is to argue against using the Gaussian distribution. But which distribution should you use: Random signs? The sphere distribution? The answer, for most applications, is one of those two, but exactly which depends on the properties of the matrix .

The variance of the Girard–Hutchinson estimator with random sign test vectors is

$$\mathrm{Var}(\hat{\mathrm{tr}}_{\text{signs}}) = \frac{4}{m} \sum_{i < j} |a_{ij}|^2.$$

Thus, $\mathrm{Var}(\hat{\mathrm{tr}}_{\text{signs}})$ depends on the size of the off-diagonal entries of $A$; it does not depend on the diagonal of $A$ at all! For matrices with small off-diagonal entries (such as diagonally dominant matrices), the random signs distribution is often the best.

However, for other problems, the sphere distribution is preferable to random signs. The sphere distribution is rotation-invariant, so $\mathrm{Var}(\hat{\mathrm{tr}}_{\text{sphere}})$ is independent of the eigenvectors of the (symmetric) matrix $A$, depending only on $A$'s eigenvalues. By contrast, the variance of the Girard–Hutchinson estimator with the random signs distribution can significantly depend on the eigenvectors of the matrix $A$. For a given set of eigenvalues and the worst-case choice of eigenvectors, $\mathrm{Var}(\hat{\mathrm{tr}}_{\text{sphere}})$ will always be smaller than $\mathrm{Var}(\hat{\mathrm{tr}}_{\text{signs}})$. In fact, the sphere is the minimum-variance distribution for Girard–Hutchinson trace estimation for a matrix with fixed eigenvalues and worst-case eigenvectors; see this section of my previous post for details.
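To see the flip side (my own Python sketch), for a diagonal matrix the random signs estimator has zero variance: every sign vector returns the exact trace, since $\omega^\top A \omega = \sum_i a_{ii} \omega_i^2 = \mathrm{tr}(A)$ when $\omega_i = \pm 1$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
A = np.diag(rng.uniform(0, 1, n))      # diagonal: all off-diagonal entries are zero

m = 10
W = rng.choice([-1.0, 1.0], size=(m, n))
# w^T A w = sum_i a_ii * w_i^2 = tr(A) exactly, for every sign vector w
est_signs = np.mean([w @ A @ w for w in W])
```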

In my experience, random signs and the sphere distribution are both perfectly adequate for trace estimation and either is a sensible default if you’re developing software. The Gaussian distribution on the other hand… don’t use it unless you have a good reason to.

# How Good Can Stochastic Trace Estimates Be?

I am excited to share that our paper XTrace: Making the most of every sample in stochastic trace estimation has been published in the SIAM Journal on Matrix Analysis and Applications. (See also our paper on arXiv.)

Spurred by this exciting news, I wanted to take the opportunity to share one of my favorite results in randomized numerical linear algebra: a “speed limit” result of Meyer, Musco, Musco, and Woodruff that establishes a fundamental limitation on how accurate any trace estimation algorithm can be.

Let's back up. Given an unknown square matrix $A$, the trace of $A$ is defined to be the sum of its diagonal entries:

$$\mathrm{tr}(A) = \sum_{i=1}^n A_{ii}.$$

The catch? We assume that we can only access the matrix $A$ through matrix–vector products (affectionately known as “matvecs”): Given any vector $x$, we have access to $Ax$. Our goal is to form an estimate $\hat{\mathrm{tr}} \approx \mathrm{tr}(A)$ that is as accurate as possible while using as few matvecs as we can get away with.

To simplify things, let's assume the matrix $A$ is symmetric and positive (semi)definite. The classical algorithm for trace estimation is due to Girard and Hutchinson, producing a probabilistic estimate $\hat{\mathrm{tr}}$ with a small average (relative) error:

$$\frac{\mathbb{E}\left| \hat{\mathrm{tr}} - \mathrm{tr}(A) \right|}{\mathrm{tr}(A)} \approx \frac{1}{\sqrt{m}}, \qquad m = \text{number of matvecs}.$$

If one wants high accuracy, this algorithm is expensive. To achieve just a 1% error ($\varepsilon = 0.01$) requires roughly $\varepsilon^{-2} = 10{,}000$ matvecs!

This state of affairs was greatly improved by Meyer, Musco, Musco, and Woodruff. Building upon previous work, they proposed the Hutch++ algorithm and proved it outputs an estimate $\hat{\mathrm{tr}}$ satisfying the following bound:

(1) $\frac{\mathbb{E}\left| \hat{\mathrm{tr}} - \mathrm{tr}(A) \right|}{\mathrm{tr}(A)} \lesssim \frac{1}{m}.$

Now, we only require roughly $\varepsilon^{-1} = 100$ matvecs to achieve 1% error! Our algorithm, XTrace, satisfies the same error guarantee (1) as Hutch++. On certain problems, XTrace can be quite a bit more accurate than Hutch++.
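To make the discussion concrete, here is a compact Python sketch of a Hutch++-style estimator (my own simplified implementation of the published idea: low-rank deflation followed by Girard–Hutchinson on the remainder):

```python
import numpy as np

def hutchpp(matvec, n, m, seed=0):
    """Hutch++-style estimator: m matvecs total, split between
    low-rank deflation and Girard-Hutchinson on the remainder."""
    rng = np.random.default_rng(seed)
    k = m // 3
    S = rng.choice([-1.0, 1.0], size=(n, k))
    Y = np.column_stack([matvec(s) for s in S.T])      # k matvecs: Y = A S
    Q, _ = np.linalg.qr(Y)                             # orthonormal basis for range(Y)
    t1 = sum(q @ matvec(q) for q in Q.T)               # k matvecs: tr(Q^T A Q), exact part
    G = rng.choice([-1.0, 1.0], size=(n, m - 2 * k))
    G = G - Q @ (Q.T @ G)                              # deflate: (I - Q Q^T) g
    t2 = np.mean([g @ matvec(g) for g in G.T])         # m - 2k matvecs: GH on remainder
    return t1 + t2

A = np.diag(1.0 / np.arange(1, 1001) ** 2)             # psd, rapidly decaying eigenvalues
est = hutchpp(lambda v: A @ v, n=1000, m=60)
```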

## The MMMW Trace Estimation “Speed Limit”

Given the dramatic improvement of Hutch++ and XTrace over Girard–Hutchinson, it is natural to hope: Is there an algorithm that does even better than Hutch++ and XTrace? For instance, is there an algorithm satisfying an even slightly better error bound, with error decaying faster than $1/m$?

Unfortunately not. Hutch++ and XTrace are essentially as good as it gets.

Let's add some fine print. Consider an algorithm for the trace estimation problem. Whenever the algorithm wants, it can present a vector $x$ and receive back $Ax$. The algorithm is allowed to be adaptive: It can use the matvecs it has already collected to decide which vector $x$ to present next. We measure the cost of the algorithm in terms of the number of matvecs alone, and the algorithm knows nothing about the psd matrix $A$ other than what it learns from matvecs.

One final stipulation:

Simple entries assumption. We assume that the entries of the vectors presented by the algorithm are real numbers between $-1$ and $1$ with up to $b$ digits after the decimal place.

To get a feel for this simple entries assumption, suppose we set $b = 2$. Then a vector with entries like $0.81$ and $-0.53$ would be an allowed input vector, but one containing an entry like $0.811$ would not be (too many digits after the decimal place). Similarly, a vector containing an entry like $1.5$ would not be valid because its entries exceed $1$ in magnitude. The simple entries assumption is reasonable as we typically represent numbers on digital computers by storing a fixed number of digits of accuracy. (We typically represent numbers on digital computers by floating point numbers, which essentially represent numbers using scientific notation. For this analysis of trace estimation, we use fixed point numbers instead: no powers of ten allowed!)

With all these stipulations, we are ready to state the “speed limit” for trace estimation proved by Meyer, Musco, Musco, and Woodruff:

Informal theorem (Meyer, Musco, Musco, Woodruff). Under the assumptions above, there is no trace estimation algorithm producing an estimate $\hat{\mathrm{tr}}$ satisfying

$$\mathbb{E}\left| \hat{\mathrm{tr}} - \mathrm{tr}(A) \right| \le \varepsilon\, \mathrm{tr}(A) \quad \text{for every psd matrix } A$$

using asymptotically fewer than roughly $\varepsilon^{-1}$ matvecs.
We will see a slightly sharper version of the theorem below, but this statement captures the essence of the result.

## Communication Complexity

To prove the MMMW theorem, we have to take a journey to the beautiful subject of communication complexity. The story is this. Alice and Bob are interested in solving a computational problem together. Alice has her input and Bob has his input , and they are interested in computing a function of both their inputs.

Unfortunately for the two of them, Alice and Bob are separated by a great distance, and can only communicate by sending single bits (0 or 1) of information over a slow network connection. Every bit of communication is costly. The field of communication complexity is dedicated to determining how efficiently Alice and Bob are able to solve problems of this form.

The Gap-Hamming problem is one example of a problem studied in communication complexity. As inputs, Alice and Bob receive vectors with and entries from a third party Eve. Eve promises Alice and Bob that their vectors and satisfy one of two conditions:

(2)

Alice and Bob must work together, sending as few bits of communication as possible, to determine which case they are in.

There’s one simple solution to this problem: First, Bob sends his whole input vector to Alice. Each entry of takes one of two values and can therefore be communicated in a single bit. Having received , Alice computes , determines whether they are in case 0 or case 1, and sends Bob a single bit to communicate the answer. This procedure requires bits of communication.
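The trivial protocol is easy to simulate. In the sketch below (Python, for illustration), I assume the two cases are defined by the standard Gap-Hamming thresholds of plus or minus the square root of the input length, and I adopt the convention that case 0 is the positively correlated one; both are assumptions for illustration, not details taken from this post:

```python
import numpy as np

def trivial_gap_hamming_protocol(x, y):
    """Simulate the trivial protocol: Bob transmits y (one bit per +/-1 entry),
    Alice computes the inner product and announces the answer in one more bit.

    Assumed convention: case 0 if x @ y >= sqrt(n), case 1 if x @ y <= -sqrt(n).
    Returns (case, total_bits_communicated)."""
    n = len(x)
    bits_from_bob = n      # each +/-1 entry of y costs a single bit
    bits_from_alice = 1    # the one-bit answer
    inner = float(x @ y)
    case = 0 if inner >= np.sqrt(n) else 1
    return case, bits_from_bob + bits_from_alice
```

The point of the communication lower bound discussed next is that, up to constants, no protocol does fundamentally better than this one.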

Can Alice and Bob still solve this problem with many fewer than bits of communication, say bits? Unfortunately not. The following theorem of Chakrabarti and Regev shows that roughly bits of communication are needed to solve this problem:

Theorem (Chakrabarti–Regev). Any algorithm which solves the Gap-Hamming problem with at least probability for every pair of inputs and (satisfying one of the conditions (2)) must take bits of communication.

Here, is big-Omega notation, closely related to big-O notation and big-Theta notation . For the less familiar, it can be helpful to interpret , , and as all standing for “proportional to ”. In plain language, the theorem of Chakrabarti and Regev states that there is no algorithm for the Gap-Hamming problem that is much more effective than the basic algorithm in which Bob sends his whole input to Alice (in the sense of requiring fewer than bits of communication).

## Reducing Gap-Hamming to Trace Estimation

This whole state of affairs is very sad for Alice and Bob, but what does it have to do with trace estimation? Remarkably, we can use hardness of the Gap-Hamming problem to show there’s no algorithm that fundamentally improves on Hutch++ and XTrace. The argument goes something like this:

1. If there were a trace estimation algorithm fundamentally better than Hutch++ and XTrace, we could use it to solve Gap-Hamming in fewer than bits of communication.
2. But no algorithm can solve Gap-Hamming in fewer than bits of communication.
3. Therefore, no trace estimation algorithm is fundamentally better than Hutch++ and XTrace.

Step 2 is the work of Chakrabarti and Regev, and step 3 follows logically from 1 and 2. Therefore, we are left to complete step 1 of the argument.

### Protocol

Assume we have access to a really good trace estimation algorithm. We will use it to solve the Gap-Hamming problem. For simplicity, assume is a perfect square. The basic idea is this:

• Have Alice and Bob reshape their inputs into matrices , and consider (but do not form!) the positive semidefinite matrix

• Observe that

Thus, the two cases in (2) can be equivalently written in terms of :

(2′)

• By working together, Alice and Bob can implement a trace estimation algorithm. Alice will be in charge of running the algorithm, but Alice and Bob must work together to compute matvecs. (Details below!)
• Using the output of the trace estimation algorithm, Alice determines whether they are in case 0 or 1 (i.e., where or ) and sends the result to Bob.

To complete this procedure, we just need to show how Alice and Bob can implement the matvec procedure using minimal communication. Suppose Alice and Bob want to compute for some vector with entries between and with up to decimal digits. First, convert to a vector whose entries are integers between and . Since , interconverting between and is trivial. Alice and Bob’s procedure for computing is as follows:

• Alice sends Bob .
• Having received , Bob forms and sends it to Alice.
• Having received , Alice computes and sends it to Bob.
• Having received , Bob computes and sends it to Alice.
• Alice forms .

Because and are and have entries, all vectors computed in this procedure are vectors of length with integer entries between and . We conclude the communication cost for one matvec is bits.

### Analysis

Consider an algorithm we’ll call BestTraceAlgorithm. Given any accuracy parameter , BestTraceAlgorithm requires at most matvecs and, for any positive semidefinite input matrix of any size, produces an estimate satisfying

(3)

We assume that BestTraceAlgorithm is the best possible algorithm in the sense that no algorithm can achieve (3) on all (positive semidefinite) inputs with matvecs.

To solve the Gap-Hamming problem, Alice and Bob just need enough accuracy in their trace estimation to distinguish between cases 0 and 1. In particular, if

then Alice and Bob can distinguish between cases 0 and 1 in (2′).

Suppose that Alice and Bob apply trace estimation to solve the Gap-Hamming problem, using matvecs in total. The total communication is bits. Chakrabarti and Regev showed that solving the Gap-Hamming problem with probability requires at least bits of communication (for some constant). Therefore, if , then Alice and Bob fail to solve the Gap-Hamming problem with at least probability. Thus,

The contrapositive of this statement is that if

Say Alice and Bob run BestTraceAlgorithm with parameter . Then, by (3) and Markov’s inequality,

Therefore, BestTraceAlgorithm requires at least

Using the fact that we’ve set , we conclude that any trace estimation algorithm, even BestTraceAlgorithm, requires

In particular, no trace estimation algorithm can achieve mean relative error using even matvecs. This proves the MMMW theorem.

# Five Interpretations of Kernel Quadrature

I’m excited to share that my paper Kernel quadrature with randomly pivoted Cholesky, joint with Elvira Moreno, has been accepted to NeurIPS 2023 as a spotlight.

Today, I want to share with you a little about the kernel quadrature problem. To avoid this post getting too long, I’m going to write this post assuming familiarity with the concepts of reproducing kernel Hilbert spaces and Gaussian processes.

Integration is one of the most widely used operations in mathematics and its applications. As such, it is a basic problem of wide interest to develop numerical methods for evaluating integrals.

In this post, we will consider a quite general integration problem. Let be a domain and let be a (finite Borel) measure on . We consider the task of evaluating

One can imagine that , , and are fixed, but we may want to evaluate this same integral for multiple different functions .

To evaluate, we will design a quadrature approximation to the integral :

Concretely, we wish to find real numbers and points such that the approximation is accurate.

## Smoothness and Reproducing Kernel Hilbert Spaces

As is frequently the case in computational mathematics, the accuracy we can expect for this integration problem depends on the smoothness of the integrand . The smoother is, the more accurately we can expect to compute for a given budget of computational effort.

In this post, we will measure smoothness using the reproducing kernel Hilbert space (RKHS) formalism. Let be an RKHS with norm . We can interpret the norm as assigning a roughness to each function . If is large, then is rough; if is small, then is smooth.

Associated to the RKHS is the titular reproducing kernel . The kernel is a bivariate function . It is related to the RKHS inner product by the reproducing property

Here, represents the univariate function obtained by setting the first input of to be .

## The Ideal Weights

To design a quadrature rule, we have to set the nodes and weights . Let’s first assume that the nodes are fixed, and talk about how to pick the weights .

There’s one choice of weights that we’ll call the ideal weights. There are (at least) five equivalent ways of characterizing the ideal weights. We’ll present all of them. As an exercise, you can try to convince yourself that these characterizations are equivalent, giving rise to the same weights.

### Interpretation 1: Exactness

A standard way of designing quadrature rules is to make them exact (i.e., error-free) for some class of functions. For instance, many classical quadrature rules are exact for polynomials of degree up to .

For kernel quadrature, it makes sense to design the quadrature rule to be exact for the kernel function at the selected nodes. That is, we require

Enforcing exactness gives us linear equations for the unknowns :

Under mild conditions, this system of linear equations is uniquely solvable, and the solution is the ideal weights.
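As an illustration of enforcing exactness, here is a hedged sketch (Python/NumPy; the post itself is language-agnostic) for one specific setup that is my choice, not the paper's: a Gaussian kernel on [0, 1] with the uniform (Lebesgue) measure, for which the kernel integrals have a closed form in terms of the error function:

```python
import numpy as np
from math import erf, sqrt, pi

def gauss_kernel(x, y, ell=0.2):
    """Gaussian kernel k(x, y) = exp(-(x - y)^2 / (2 ell^2))."""
    return np.exp(-(x - y) ** 2 / (2 * ell ** 2))

def kernel_mean(x, ell=0.2):
    """z(x) = integral_0^1 k(s, x) ds, in closed form via the error function."""
    c = ell * sqrt(pi / 2)
    return c * (erf((1 - x) / (sqrt(2) * ell)) + erf(x / (sqrt(2) * ell)))

def ideal_weights(nodes, ell=0.2):
    """Solve the exactness system: sum_j w_j k(x_i, x_j) = z(x_i) for each i."""
    K = gauss_kernel(nodes[:, None], nodes[None, :], ell)
    z = np.array([kernel_mean(x, ell) for x in nodes])
    return np.linalg.solve(K, z)
```

By construction, the resulting quadrature rule integrates each function k(x_i, ·) without error, which is exactly the exactness property described above.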

### Interpretation 2: Interpolate and Integrate

Here’s another very classical way of designing a quadrature rule. First, interpolate the function values at the nodes, obtaining an interpolant . Then, obtain an approximation to the integral by integrating the interpolant:

In our context, the appropriate interpolation method is kernel interpolation.1Kernel interpolation is also called Gaussian process regression or kriging though (confusingly) these terms can also refer to slightly different methods. It is the regularization-free limit of kernel ridge regression. The kernel interpolant is defined to be the minimum-norm function that interpolates the data:

Remarkably, this infinite-dimensional problem has a tractably computable solution. In fact, is the unique function of the form

that agrees with on the points. With a little algebra, you can show that the integral of is

where are the ideal weights.
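The interpolate-and-integrate recipe can be sketched in a few lines (Python/NumPy for illustration, with a generic kernel passed in; a minimal sketch, not the paper's code). The coefficients of the minimum-norm interpolant solve a linear system with the kernel matrix:

```python
import numpy as np

def interpolant_coeffs(nodes, fvals, kernel):
    """Coefficients c solving K c = f, so that s(x) = sum_j c_j k(x, x_j)
    is the minimum-norm function interpolating the data (x_j, f(x_j))."""
    K = kernel(nodes[:, None], nodes[None, :])
    return np.linalg.solve(K, fvals)

def interpolant_eval(x, nodes, coeffs, kernel):
    """Evaluate the kernel interpolant s(x) = sum_j c_j k(x, x_j)."""
    return kernel(x, nodes) @ coeffs
```

Integrating s term by term gives the inner product of the kernel integrals with the coefficients, which (since the kernel matrix is symmetric) is the same as applying the ideal weights to the function values; this is the algebra alluded to above.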

### Interpretation 3: Minimizing the Worst-Case Error

Define the worst-case error of weights and nodes to be

The quantity is the highest possible quadrature error for a function of norm at most 1.

Having defined the worst-case error, the ideal weights are precisely the weights that minimize this quantity

### Interpretation 4: Minimizing the Mean-Square Error

The next two interpretations of the ideal weights will adopt a probabilistic framing. A Gaussian process is a random function such that ’s values at any collection of points are (jointly) Gaussian random variables. We write for a mean-zero Gaussian process with covariance function :

Define the mean-square quadrature error of weights and nodes to be

The mean-square error reports the expected squared quadrature error over all functions drawn from a Gaussian process with covariance function .

Pleasantly, the mean-square error is equal to the square of the worst-case error

As such, the ideal weights also minimize the mean-square error

### Interpretation 5: Conditional Expectation

For our last interpretation, again consider a Gaussian process . The integral of this random function is a random variable. To numerically integrate a function , compute the expectation of conditional on agreeing with at the quadrature nodes:

One can show that this procedure yields the quadrature scheme with the ideal weights.

### Conclusion

We’ve just seen five sensible ways of defining the ideal weights for quadrature in a general reproducing kernel Hilbert space. Remarkably, all five lead to exactly the same choice of weights. To me, these five equivalent characterizations give me more confidence that the ideal weights really are the “right” or “natural” choice for kernel quadrature.

That said, there are other reasonable requirements that we might want to impose on the weights. For instance, if is a probability measure and , it is reasonable to add an additional constraint that the weights lie in the probability simplex

With this additional stipulation, a quadrature rule can be interpreted as integrating against a discrete probability measure ; thus, in effect, quadrature amounts to approximating one probability measure by another . Additional constraints such as these can easily be imposed when using the optimization characterizations 3 and 4 of the ideal weights. See this paper for details.

We’ve spent a lot of time talking about how to pick the quadrature weights, but how should we pick the nodes ? To pick the nodes, it seems sensible to try and minimize the worst-case error with the ideal weights . For this purpose, we can use the following formula:

Here, is the Nyström approximation to the kernel induced by the nodes , defined to be

We have written for the kernel matrix with entry and and for the row and column vectors with th entry and .

I find the appearance of the Nyström approximation in this context to be surprising and delightful. Previously on this blog, we’ve seen (column) Nyström approximation in the context of matrix low-rank approximation. Now, a continuum analog of the matrix Nyström approximation has appeared in the error formula for numerical integration.
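In matrix form, the (column) Nyström approximation referred to here can be written in a few lines (Python/NumPy for illustration; the pseudoinverse handles a possibly singular core block):

```python
import numpy as np

def nystrom(K, idx):
    """Column Nystrom approximation of a psd matrix K from the columns idx:
    K_hat = K[:, idx] @ pinv(K[idx, idx]) @ K[idx, :]."""
    C = K[:, idx]
    W = K[np.ix_(idx, idx)]
    return C @ np.linalg.pinv(W) @ C.T
```

When the selected columns span the range of K, the approximation is exact; more generally, the quality of the approximation depends on how well the chosen columns capture K's dominant directions.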

The appearance of the Nyström approximation in the kernel quadrature error also suggests a strategy for picking the nodes.

Node selection strategy. We should pick the nodes to make the Nyström approximation as accurate as possible.

The closer is to , the smaller the function is and, thus, the smaller the error

Fortunately, we have randomized matrix algorithms for picking good nodes for matrix Nyström approximation such as randomly pivoted Cholesky, ridge leverage score sampling, and determinantal point process sampling; maybe these matrix tools can be ported to the continuous kernel setting?

Indeed, all three of these algorithms—randomly pivoted Cholesky, ridge leverage score sampling, and determinantal point process sampling—have been studied for kernel quadrature. The first of these algorithms, randomly pivoted Cholesky, is the subject of our paper. We show that this simple, adaptive sampling method produces excellent nodes for kernel quadrature. Intuitively, randomly pivoted Cholesky is effective because it is repulsive: After having picked nodes , it has a high probability of placing the next node far from the previously selected nodes.

The following image shows 20 nodes selected by randomly pivoted Cholesky in a crescent-shaped region. The cyan–pink shading denotes the probability distribution for picking the next node. We see that the center of the crescent does not have any nodes, and thus is most likely to receive a node during the next round of sampling.

In our paper, we demonstrate—empirically and theoretically—that randomly pivoted Cholesky produces excellent nodes for quadrature. We also discuss efficient rejection sampling algorithms for sampling nodes with the randomly pivoted Cholesky distribution. Check out the paper for details!
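For concreteness, here is a minimal sketch of the randomly pivoted Cholesky procedure on an explicitly formed kernel matrix (Python/NumPy; the paper works with kernel evaluations rather than a preformed matrix and includes refinements not shown here):

```python
import numpy as np

def rp_cholesky(K, k, rng=None):
    """Randomly pivoted partial Cholesky: returns F (n x k) with F @ F.T ~ K,
    plus the selected pivots. Each pivot is sampled with probability
    proportional to the current residual diagonal, which is what makes the
    selected points repel one another."""
    rng = np.random.default_rng(rng)
    n = K.shape[0]
    F = np.zeros((n, k))
    d = np.array(np.diag(K), dtype=float)   # residual diagonal
    pivots = []
    for i in range(k):
        p = rng.choice(n, p=d / d.sum())
        pivots.append(p)
        # Schur-complement column for the chosen pivot
        g = K[:, p] - F[:, :i] @ F[p, :i]
        F[:, i] = g / np.sqrt(g[p])
        d = np.clip(d - F[:, i] ** 2, 0.0, None)
    return F, pivots
```

After a point is selected, the residual diagonal drops to zero there and decreases nearby, so subsequent pivots are unlikely to land close to previously chosen nodes.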

# Which Sketch Should I Use?

This is the second of a sequence of two posts on sketching, which I’m doing on the occasion of my new paper on the numerical stability of the iterative sketching method. For more on what sketching is and how it can be used to solve computational problems, I encourage you to check out the first post.

The goals of this post are more narrow. I seek to answer the question:

Which sketching matrix should I use?

To cut to the chase, my answer to this question is:

Sparse sign embeddings are a sensible default for sketching.

There are certainly cases when sparse sign embeddings are not the best type of sketch to use, but I hope to convince you that they’re a good sketching matrix to use for most purposes.

## Experiments

Let’s start things off with some numerical experiments.1Code for all numerical experiments can be found on the blogpost branch of the Github for my recent paper. We’ll compare three types of sketching matrices: Gaussian embeddings, a subsampled randomized trigonometric transform (SRTT), and sparse sign embeddings. See the last post for descriptions of these sketching matrices. I’ll discuss a few additional types of sketching matrices that require more discussion at the end of this post.

Recall that a sketching matrix seeks to compress a high-dimensional matrix or vector to a lower-dimensional sketched matrix or vector . The quality of a sketching matrix for a matrix is measured by its distortion , defined to be the smallest number such that

Here, denotes the column space of the matrix .
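Concretely, the distortion for the column space of a matrix can be computed from the singular values of the sketched orthonormal basis (a short Python/NumPy sketch, for illustration alongside the MATLAB used elsewhere in these posts):

```python
import numpy as np

def distortion(S, A):
    """Distortion of sketch S for col(A): the smallest eps such that
    (1 - eps) ||x|| <= ||S x|| <= (1 + eps) ||x|| for all x in col(A)."""
    Q, _ = np.linalg.qr(A)                       # orthonormal basis for col(A)
    sig = np.linalg.svd(S @ Q, compute_uv=False) # singular values of S restricted to col(A)
    return max(sig.max() - 1.0, 1.0 - sig.min())
```

An exact isometry (for example, the identity map) has distortion zero; a good sketch keeps the distortion well below one while having far fewer rows than columns.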

### Timing

We begin with a timing test. We will measure three different times for each embedding:

1. Construction. The time required to generate the sketching matrix .
2. Vector apply. The time to apply the sketch to a single vector.
3. Matrix apply. The time to apply the sketch to an matrix.

We will test with input dimension (one million) and output dimension . For the SRTT, we use the discrete cosine transform as our trigonometric transform. For the sparse sign embedding, we use a sparsity parameter .

Here are the results (timings averaged over 20 trials):

Our conclusions are as follows:

• Sparse sign embeddings are definitively the fastest to apply, being 3–20× faster than the SRTT and 74–100× faster than Gaussian embeddings.
• Sparse sign embeddings are modestly slower to construct than the SRTT, but much faster to construct than Gaussian embeddings.

Overall, the conclusion is that sparse sign embeddings are the fastest sketching matrices by a wide margin: For an “end-to-end” workflow involving generating the sketching matrix and applying it to a matrix , sparse sign embeddings are 14× faster than SRTTs and 73× faster than Gaussian embeddings.2More timings are reported in Table 1 of this paper, which I credit for inspiring my enthusiasm for the sparse sign embedding.

### Distortion

Runtime is only one measure of the quality of a sketching matrix; we must also care about the distortion . Fortunately, for practical purposes, Gaussian embeddings, SRTTs, and sparse sign embeddings all tend to have similar distortions. Therefore, we are free to use sparse sign embeddings, as they are typically the fastest.

Consider the following test. We generate a sparse random test matrix of size for and using the MATLAB sprand function; we set the sparsity level to 1%. We then compare the distortion of Gaussian embeddings, SRTTs, and sparse sign embeddings across a range of sketching dimensions between 100 and 10,000. We report the distortion averaged over 100 trials. The theoretically predicted value (equivalently, ) is shown as a dashed line.

I find these results remarkable: all three embeddings exhibit essentially the same distortion parameter predicted by the Gaussian theory.

It would be premature to declare success having only tested on one type of test matrix . Consider the following four test matrices:

• Sparse: The test matrix from above.
• Dense: is taken to be a matrix with independent standard Gaussian random values.
• Khatri–Rao: is taken to be the Khatri–Rao product of three Haar random orthogonal matrices.
• Identity: is taken to be the identity matrix stacked onto a matrix of zeros.

The performance of sparse sign embeddings (again with sparsity parameter ) is shown below:

We see that for the first three test matrices, the performance closely follows the expected value . However, for the last test matrix “Identity”, we see the distortion begins to slightly exceed this predicted distortion for .

To improve sparse sign embeddings for higher values of , we can increase the value of the sparsity parameter . We recommend

With this higher sparsity level, the distortion closely tracks for all four test matrices:

### Conclusion

Implemented appropriately (see below), sparse sign embeddings can be faster than other sketching matrices by a wide margin. The parameter choice is enough to ensure the distortion closely tracks for most test matrices. For the toughest test matrices, a slightly larger sparsity parameter can be necessary to achieve the optimal distortion.

While these tests are far from comprehensive, they are consistent with the uniformly positive results for sparse sign embeddings reported in the literature. We believe that this evidence supports the argument that sparse sign embeddings are a sensible default sketching matrix for most purposes.

## Sparse Sign Embeddings: Theory and Practice

Given the highly appealing performance characteristics of sparse sign embeddings, it is worth saying a few more words about these embeddings and how they perform in both theory and practice.

Recall that a sparse sign embedding is a random matrix of the form

Each column is independently generated to contain exactly nonzero entries in uniformly random positions. The value of each nonzero entry of is chosen to be either or with 50/50 odds.
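A bare-bones version of this construction can be sketched as follows (Python/NumPy, storing only the nonzero coordinates; this is an illustrative sketch, and as discussed later in the post, a real implementation should use a proper sparse matrix format or purpose-built code):

```python
import numpy as np

def sparse_sign(d, n, zeta, rng=None):
    """Sparse sign embedding in coordinate (COO-style) form: each of the n
    columns gets exactly zeta nonzeros, +/- 1/sqrt(zeta), in distinct random rows."""
    rng = np.random.default_rng(rng)
    rows = np.concatenate([rng.choice(d, size=zeta, replace=False) for _ in range(n)])
    cols = np.repeat(np.arange(n), zeta)
    vals = rng.choice([-1.0, 1.0], size=n * zeta) / np.sqrt(zeta)
    return rows, cols, vals

def apply_sparse_sign(S, x, d):
    """Compute S @ x without ever forming S as a dense matrix."""
    rows, cols, vals = S
    y = np.zeros(d)
    np.add.at(y, rows, vals * x[cols])
    return y
```

Note that each column has exactly zeta entries of magnitude 1/sqrt(zeta), so every column has unit Euclidean norm; this normalization is what makes the embedding approximately norm-preserving on average.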

### Parameter Choices

The goal of sketching is to reduce vectors of length to a smaller dimension . For linear algebra applications, we typically want to preserve all vectors in the column space of a matrix up to distortion :

To use sparse sign embeddings, we must choose the parameters appropriately:

Given a dimension and a target distortion , how do we pick and ?

Based on the experiments above (and other testing reported in the literature), we recommend the following parameter choices in practice:

The parameter choice is advocated by Tropp, Yurtsever, Udell, and Cevher; they mention experimenting with parameter values as small as . The value has demonstrated deficiencies and should almost always be avoided (see below). The scaling is derived from the analysis of Gaussian embeddings. As Martinsson and Tropp argue, the analysis of Gaussian embeddings tends to be reasonably descriptive of other well-designed random embeddings.

The best-known theoretical analysis, due to Cohen, suggests a more cautious parameter setting for sparse sign embeddings:

The main difference between Cohen’s analysis and the parameter recommendations above is the presence of the factor and the lack of explicit constants in the O-notation.

### Implementation

For good performance, it is imperative to store using either a purpose-built data structure or a sparse matrix format (such as a MATLAB sparse matrix or scipy sparse array).

If a sparse matrix library is unavailable, then either pursue a dedicated implementation or use a different type of embedding; sparse sign embeddings are just as slow as Gaussian embeddings if they are stored in an ordinary non-sparse matrix format!

Even with a sparse matrix format, it can require care to generate and populate the random entries of the matrix . Here, for instance, is a simple function for generating a sparse sign matrix in MATLAB:

function S = sparsesign_slow(d,n,zeta)
cols = kron((1:n)',ones(zeta,1)); % zeta nonzeros per column
vals = 2*randi(2,n*zeta,1) - 3; % uniform random +/-1 values
rows = zeros(n*zeta,1);
for i = 1:n
rows((i-1)*zeta+1:i*zeta) = randsample(d,zeta);
end
S = sparse(rows, cols, vals / sqrt(zeta), d, n);
end

Here, we specify the rows, columns, and values of the nonzero entries before assembling them into a sparse matrix using the MATLAB sparse command. Since there are exactly nonzeros per column, the column indices are easy to generate. The values are uniform random ±1’s and can also be generated using a single line. The real challenge to generating sparse sign embeddings in MATLAB is the row indices, since each batch of row indices must be chosen uniformly at random between and without replacement. This is accomplished in the above code by a for loop, generating row indices at a time using the slow randsample function.

As its name suggests, sparsesign_slow is very slow. Generating a 200 × 10 million sparse sign embedding this way requires 53 seconds!

Fortunately, we can do (much) better. By rewriting the code in C and directly generating the sparse matrix in the CSC format MATLAB uses, generating this same 200 by 10 million sparse sign embedding takes just 0.4 seconds, a speedup of 130× over the slow MATLAB code. A C implementation of the sparse sign embedding that can be used in MATLAB using the MEX interface can be found in this file in the Github repo for my recent paper.

## Other Sketching Matrices

Let’s leave off the discussion by mentioning other types of sketching matrices not considered in the empirical comparison above.

### Coordinate Sampling

Another family of sketching matrices that we haven’t talked about are coordinate sampling sketches. A coordinate sampling sketch consists of indices and weights . To apply , we sample the indices and reweight them using the weights:

Coordinate sampling is very appealing: To apply to a matrix or vector requires no matrix multiplications or trigonometric transforms, just picking out some entries or rows and rescaling them.

In order for coordinate sampling to be effective, we need to pick the right indices. Below, we compare two coordinate sampling sketching approaches, uniform sampling and leverage score sampling (both with replacement), to the sparse sign embedding with the suggested parameter setting for the hard “Identity” test matrix used above.

We see right away that the uniform sampling fails dramatically on this problem. That’s to be expected. All but 50 of 100,000 rows of are zero, so picking rows uniformly at random will give nonsense with very high probability. Uniform sampling can work well for matrices which are “incoherent”, with all rows of being of “similar importance”.

Conclusion (Uniform sampling). Uniform sampling is a risky method; it works excellently for some problems, but fails spectacularly for others. Use only with caution!

The ridge leverage score sampling method is more interesting. Unlike all the other sketches we’ve discussed in this post, ridge leverage score sampling is data-dependent. First, it computes a leverage score for each row of and then samples rows with probabilities proportional to these scores. For high enough values of , ridge leverage score sampling performs slightly (but only slightly) worse than the characteristic scaling we expect for an oblivious subspace embedding.

Ultimately, leverage score sampling has two disadvantages when compared with oblivious sketching matrices:

• Higher distortion, higher variance. The distortion of a leverage score sketch is higher on average, and more variable, than that of an oblivious sketch, which achieves very consistent performance.
• Computing the leverage scores. In order to implement this sketch, the leverage scores have to first be computed or estimated. This is a nontrivial algorithmic problem; the most direct way of computing the leverage scores requires a QR decomposition at cost, much higher than other types of sketches.
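The QR-based computation mentioned in the second bullet can be sketched in a couple of lines (Python/NumPy; these are the plain, non-ridge leverage scores, a simplification of the ridge variant discussed above):

```python
import numpy as np

def leverage_scores(A):
    """Leverage score of row i of A: the squared norm of row i of Q,
    where A = QR is a (thin) QR decomposition."""
    Q, _ = np.linalg.qr(A)
    return (Q ** 2).sum(axis=1)
```

Each leverage score lies between 0 and 1, and for a full-rank tall matrix the scores sum to the number of columns; rows with large scores are the "important" ones that sampling should favor.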

There are settings when coordinate sampling methods, such as leverage scores, are well-justified:

• Structured matrices. For some matrices , the leverage scores might be very cheap to compute or approximate. In such cases, coordinate sampling can be faster than oblivious sketching.
• “Active learning”. For some problems, each entry of the vector or row of the matrix may be expensive to generate. In this case, coordinate sampling has the distinct advantage that computing or only requires generating the entries of or rows of for the randomly selected indices .

Ultimately, oblivious sketching and coordinate sampling both have their place as tools in the computational toolkit. For the reasons described above, I believe that oblivious sketching should usually be preferred to coordinate sampling in the absence of a special reason to prefer the latter.

### Tensor Random Embeddings

There are a number of sketching matrices with tensor structure; see here for a survey. These types of sketching matrices are very well-suited to tensor computations. If tensor structure is present in your application, I would put these types of sketches at the top of my list for consideration.

### CountSketch

The CountSketch sketching matrix is the special case of the sparse sign embedding with exactly one nonzero entry per column. CountSketch has serious deficiencies, and should only be used in practice with extreme care.

Consider the “Identity” test matrix from above but with parameter , and compare the distortion of CountSketch to the sparse sign embedding with parameters :

We see that the distortion of the CountSketch remains persistently high at 100% until the sketching dimension is taken , 20× higher than .

CountSketch is bad because it requires to be proportional to in order to achieve distortion . For all of the other sketching matrices we’ve considered, we’ve only required to be proportional to (or perhaps ). This difference between for CountSketch and for other sketching matrices is at the root of CountSketch’s woefully bad performance on some inputs.3Here, the symbol is an informal symbol meaning “proportional to”.

The fact that CountSketch requires is simple to show. It’s basically a variant on the famous birthday problem. We include a discussion at the end of this post.4In fact, any oblivious sketching matrix with only 1 nonzero entry per column must have . This is Theorem 16 in the following paper.

There are ways of fixing the CountSketch. For instance, we can use a composite sketch , where is a CountSketch of size and is a Gaussian sketching matrix of size .5This construction is from this paper. For most applications, however, salvaging CountSketch doesn’t seem worth it; sparse sign embeddings with even nonzeros per column are already way more effective and reliable than a plain CountSketch.

## Conclusion

By now, sketching is quite a big field, with dozens of different proposed constructions for sketching matrices. So which should you use? For most use cases, sparse sign embeddings are a good choice; they are fast to construct and apply and offer uniformly good distortion across a range of matrices.

## Bonus: CountSketch and the Birthday Problem
The point of this bonus section is to prove the following (informal) theorem:

Let be the “Identity” test matrix above. If is a CountSketch matrix with output dimension , then the distortion of for is with high probability.

Let’s see why. By the structure of the matrix , has the form

where each vector has a single in a uniformly random location .

Suppose that the indices are not all different from each other, say . Set , where is the standard basis vector with in position and zeros elsewhere. Then, but . Thus, for the distortion relation

to hold, . Thus,

For a moment, let’s put aside our analysis of the CountSketch, and turn our attention to a famous puzzle, known as the birthday problem:

How many people have to be in a room before there’s at least a 50% chance that two people share the same birthday?

The counterintuitive or “paradoxical” answer: 23. This is much smaller than many people’s intuition, as there are 365 possible birthdays and 23 is much smaller than 365.

The reason for this surprising result is that, in a room of 23 people, there are 23 × 22 / 2 = 253 pairs of people. Each pair of people has a 1/365 chance of sharing a birthday, so the expected number of shared birthdays in a room of 23 people is 253/365 ≈ 0.69. Since 0.69 birthdays are shared on average in a room of 23 people, it is perhaps less surprising that 23 is the critical number at which the chance of two people sharing a birthday exceeds 50%.
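This calculation is easy to verify numerically (Python; math.prod requires Python 3.8+):

```python
from math import comb, prod

def p_shared_birthday(k, days=365):
    """Probability that at least two of k people share a birthday,
    assuming all days are equally likely and birthdays are independent."""
    return 1 - prod(1 - i / days for i in range(k))

# Expected number of shared-birthday pairs among 23 people: 253/365, about 0.69
expected_shared_pairs = comb(23, 2) / 365
```

Evaluating p_shared_birthday shows the probability crosses 50% exactly between 22 and 23 people, matching the classical answer.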

Hopefully, the similarity between the birthday problem and CountSketch is becoming clear. Each pair of indices in CountSketch has a chance of being the same. There are pairs of indices, so the expected number of equal indices is . Thus, we should anticipate that is required to ensure that are distinct with high probability.

Let’s calculate things out a bit more precisely. First, realize that each index $i_j$ is drawn uniformly at random from $\{1,\ldots,d\}$, independently of the others.

To compute the probability that $i_1,\ldots,i_n$ are distinct, imagine introducing each index one at a time. Assuming that $i_1,\ldots,i_{j-1}$ are all distinct, the probability that $i_1,\ldots,i_j$ are distinct is just the probability that $i_j$ does not take any of the $j-1$ values $i_1,\ldots,i_{j-1}$. This probability is

$$\mathbb{P}\{i_1,\ldots,i_j \text{ distinct} \mid i_1,\ldots,i_{j-1} \text{ distinct}\} = 1 - \frac{j-1}{d}.$$

Thus, by the chain rule for probability,

$$\mathbb{P}\{i_1,\ldots,i_n \text{ distinct}\} = \prod_{j=1}^{n} \left(1 - \frac{j-1}{d}\right).$$

To bound this quantity, use the numeric inequality $1 - x \ge \mathrm{e}^{-2x}$ for every $0 \le x \le 1/2$, obtaining

$$\prod_{j=1}^{n} \left(1 - \frac{j-1}{d}\right) \ge \exp\left(-\frac{2}{d}\sum_{j=1}^{n}(j-1)\right) = \exp\left(-\frac{n(n-1)}{d}\right).$$

Thus, using $\mathrm{e}^{-y} \ge 1-y$, we conclude that

$$\mathbb{P}\{i_1,\ldots,i_n \text{ distinct}\} \ge 1 - \frac{n(n-1)}{d}.$$

Solving this inequality, we conclude that choosing

$$d \ge \frac{n(n-1)}{\delta}$$

ensures that $i_1,\ldots,i_n$ are distinct with probability at least $1-\delta$.

This is a quantitative version of our informal theorem from earlier.
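The chain-rule product above is also easy to evaluate exactly on a computer. Here is a Python sketch (an illustration added here, with arbitrarily chosen sizes) comparing the exact all-distinct probability against the derived lower bound $1 - n(n-1)/d$:

```python
def prob_all_distinct(n, d):
    """Exact probability that n indices drawn uniformly and
    independently from d buckets are all distinct."""
    p = 1.0
    for j in range(n):
        p *= (d - j) / d
    return p

n = 100
for d in [n**2 // 10, n**2, 10 * n**2]:
    exact = prob_all_distinct(n, d)
    bound = 1 - n * (n - 1) / d   # derived lower bound (vacuous if negative)
    print(f"d = {d:7d}: exact = {exact:.4f}, bound = {bound:+.4f}")
```

For $d = n^2/10$ the indices collide with overwhelming probability, while $d = 10n^2$ makes collisions rare, matching the $d \gg n^2$ heuristic.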

# Does Sketching Work?

I’m excited to share that my paper, Fast and forward stable randomized algorithms for linear least-squares problems, has been released as a preprint on arXiv.

With the release of this paper, now seemed like a great time to discuss a topic I’ve been wanting to write about for a while: sketching. For the past two decades, sketching has become a widely used algorithmic tool in matrix computations. Despite this long history, questions still seem to be lingering about whether sketching really works:

In this post, I want to take a critical look at the question “does sketching work”? Answering this question requires answering two basic questions:

1. What is sketching?
2. What would it mean for sketching to work?

I think a large part of the disagreement over the efficacy of sketching boils down to different answers to these questions. By considering different possible answers to these questions, I hope to provide a balanced perspective on the utility of sketching as an algorithmic primitive for solving linear algebra problems.

## Sketching

In matrix computations, sketching is really a synonym for (linear) dimensionality reduction. Suppose we are solving a problem involving one or more high-dimensional vectors $b \in \mathbb{R}^n$ or perhaps a tall matrix $A \in \mathbb{R}^{n\times k}$. A sketching matrix is a $d \times n$ matrix $S$ with $d \ll n$. When multiplied into a high-dimensional vector $b$ or tall matrix $A$, the sketching matrix $S$ produces compressed or “sketched” versions $Sb$ and $SA$ that are much smaller than the original vector $b$ and matrix $A$.

Let $T$ be a collection of vectors in $\mathbb{R}^n$. For $S$ to be a “good” sketching matrix for $T$, we require that $S$ preserves the lengths of every vector in $T$ up to a distortion parameter $\varepsilon > 0$:

(1)   $(1-\varepsilon)\|x\| \le \|Sx\| \le (1+\varepsilon)\|x\|$

for every $x$ in $T$.

For linear algebra problems, we often want to sketch a matrix $A$. In this case, the appropriate set $T$ that we want our sketch to be “good” for is the column space of the matrix $A$, defined to be

$$\operatorname{col}(A) = \{Ax : x \in \mathbb{R}^k\}.$$

Remarkably, there exist many sketching matrices that achieve distortion $\varepsilon$ for $T = \operatorname{col}(A)$ with an output dimension of roughly $d \approx k/\varepsilon^2$. In particular, the sketching dimension $d$ is proportional to the number of columns $k$ of $A$. This is pretty neat! We can design a single sketching matrix $S$ which preserves the lengths of all infinitely-many vectors in the column space of $A$.
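To make this concrete, here is a small NumPy experiment (an illustration added here, with arbitrarily chosen sizes; the post’s own snippets are MATLAB). The distortion of $S$ over all of $\operatorname{col}(A)$ is governed by the extreme singular values of $SU$, where $U$ is an orthonormal basis for $\operatorname{col}(A)$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, d = 2000, 10, 400

A = rng.standard_normal((n, k))               # tall test matrix
S = rng.standard_normal((d, n)) / np.sqrt(d)  # Gaussian sketch, variance 1/d

# For x = U @ y in col(A), ||x|| = ||y|| and ||Sx|| = ||(S @ U) @ y||,
# so every length ratio ||Sx|| / ||x|| lies between the extreme
# singular values of S @ U.
U, _ = np.linalg.qr(A)
svals = np.linalg.svd(S @ U, compute_uv=False)
print(f"||Sx||/||x|| lies in [{svals.min():.3f}, {svals.max():.3f}] on col(A)")
```

With these sizes the length ratios cluster tightly around 1, even though $S$ compresses vectors of length 2000 down to length 400.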

## Sketching Matrices

There are many types of sketching matrices, each with different benefits and drawbacks. Many sketching matrices are based on randomized constructions in the sense that entries of are chosen to be random numbers. Broadly, sketching matrices can be classified into two types:

• Data-dependent sketches. The sketching matrix is constructed to work for a specific set of input vectors .
• Oblivious sketches. The sketching matrix is designed to work for an arbitrary set of input vectors $T$ of a given size (i.e., $T$ has a fixed number of elements) or dimension ($T$ is a $k$-dimensional linear subspace).

We will only discuss oblivious sketching for this post. We will look at three types of sketching matrices: Gaussian embeddings, subsampled randomized trigonometric transforms, and sparse sign embeddings.

The details of how these sketching matrices are built and their strengths and weaknesses can be a little bit technical. All three constructions are independent from the rest of this article and can be skipped on a first reading. The main point is that good sketching matrices exist and are fast to apply: Reducing a vector $b$ of length $n$ to $Sb$ requires roughly $\mathcal{O}(n\log n)$ operations, rather than the $\mathcal{O}(dn)$ operations we would expect to multiply a $d\times n$ matrix and a vector of length $n$.1Here, $\mathcal{O}(\cdot)$ is big O notation.

### Gaussian Embeddings

The simplest type of sketching matrix is obtained by (independently) setting every entry of $S$ to be a Gaussian random number with mean zero and variance $1/d$. Such a sketching matrix is called a Gaussian embedding.2Here, embedding is a synonym for sketching matrix.

Benefits. Gaussian embeddings are simple to code up, requiring only a standard matrix product to apply to a vector or matrix . Gaussian embeddings admit a clean theoretical analysis, and their mathematical properties are well-understood.

Drawbacks. Computing $Sb$ for a Gaussian embedding costs $\mathcal{O}(dn)$ operations, significantly slower than the other sketching matrices we will consider below. Additionally, generating and storing a Gaussian embedding ($dn$ random numbers) can be computationally expensive.

### Subsampled Randomized Trigonometric Transforms

The subsampled randomized trigonometric transform (SRTT) sketching matrix takes a more complicated form. The sketching matrix is defined to be a scaled product of three matrices

$$S = \sqrt{\frac{n}{d}}\, RFD.$$

These matrices have the following definitions:

• $D$ is an $n\times n$ diagonal matrix whose entries are each a random $\pm 1$ (chosen independently with equal probability).
• $F$ is a fast trigonometric transform such as a fast discrete cosine transform.3One can also use the ordinary fast Fourier transform, but this results in a complex-valued sketch.
• $R$ is a selection matrix. To generate $R$, let $i_1,\ldots,i_d$ be a random subset of $\{1,\ldots,n\}$, selected without replacement. $R$ is defined to be a $d\times n$ matrix for which $(Rb)_j = b_{i_j}$ for every vector $b$.

To store $S$ on a computer, it is sufficient to store the diagonal entries of $D$ and the selected coordinates $i_1,\ldots,i_d$ defining $R$. Multiplication of $S$ against a vector $b$ should be carried out by applying each of the matrices $D$, $F$, and $R$ in sequence, such as in the following MATLAB code:

% Generate randomness for S
signs = 2*randi(2,n,1)-3;       % diagonal entries of D (random +/-1)
idx = randsample(n,d);          % indices i_1,...,i_d defining R

% Multiply S against a vector b
c = signs .* b;                 % multiply by D
c = dct(c);                     % multiply by F
c = c(idx);                     % multiply by R
c = sqrt(n/d) * c;              % scale
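For readers who prefer Python, here is a rough analogue of the MATLAB snippet above using SciPy’s orthonormal discrete cosine transform (an illustration added here; sizes are arbitrary):

```python
import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(0)
n, d = 1024, 64

# Generate randomness for S
signs = rng.choice([-1.0, 1.0], size=n)     # diagonal entries of D
idx = rng.choice(n, size=d, replace=False)  # indices i_1,...,i_d defining R

def apply_srtt(b):
    """Apply the SRTT sketch S = sqrt(n/d) * R * F * D to a vector b."""
    c = signs * b              # multiply by D
    c = dct(c, norm="ortho")   # multiply by F (orthonormal DCT)
    c = c[idx]                 # multiply by R
    return np.sqrt(n / d) * c  # scale

b = rng.standard_normal(n)
print(np.linalg.norm(apply_srtt(b)), np.linalg.norm(b))
```

Note that $D$ and the orthonormal DCT preserve lengths exactly; only the random subsampling step $R$ (corrected on average by the $\sqrt{n/d}$ scaling) introduces distortion.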

Benefits. $Sb$ can be computed in $\mathcal{O}(n\log n)$ operations, a significant improvement over the $\mathcal{O}(dn)$ cost of a Gaussian embedding. The SRTT has the lowest memory and random number generation requirements of any of the three sketches we discuss in this post.

Drawbacks. Applying $S$ to a vector requires a good implementation of a fast trigonometric transform. Even with a high-quality trig transform, SRTTs can be significantly slower than sparse sign embeddings (defined below).4For an example, see Figure 2 in this paper. SRTTs are hard to parallelize.5Block SRTTs are more parallelizable, however. In theory, the sketching dimension should be chosen to be $d \approx k\log k$, larger than the $d \approx k$ sufficient for a Gaussian sketch.

### Sparse Sign Embeddings

A sparse sign embedding takes the form

$$S = \frac{1}{\sqrt{\zeta}} \begin{bmatrix} s_1 & s_2 & \cdots & s_n \end{bmatrix}.$$

Here, each column $s_j$ is an independently generated random vector with exactly $\zeta$ nonzero entries with random $\pm 1$ values in uniformly random positions. The result is a matrix $S$ with only $\zeta n$ nonzero entries. The parameter $\zeta$ is often set to a small constant like $8$ in practice.6This recommendation comes from the following paper, and I’ve used this parameter setting successfully in my own work.
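A sketch of this construction using SciPy’s sparse matrix machinery (an illustration added here, with arbitrary sizes) might look like:

```python
import numpy as np
from scipy.sparse import csc_matrix

rng = np.random.default_rng(0)
n, d, zeta = 1000, 200, 8   # zeta = nonzeros per column

# Each column: zeta distinct random rows, random +/-1 values, scaled 1/sqrt(zeta)
rows = np.concatenate([rng.choice(d, size=zeta, replace=False) for _ in range(n)])
cols = np.repeat(np.arange(n), zeta)
vals = rng.choice([-1.0, 1.0], size=zeta * n) / np.sqrt(zeta)

S = csc_matrix((vals, (rows, cols)), shape=(d, n))
print(S.nnz)  # only zeta * n nonzero entries
```

The scaling $1/\sqrt{\zeta}$ makes every column of $S$ a unit vector, so $S$ preserves lengths on average.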

Benefits. By using a dedicated sparse matrix library, $S$ can be very fast to apply to a vector (either $\mathcal{O}(n)$ or $\mathcal{O}(n\log k)$ operations, depending on parameter choices; see below). With a good sparse matrix library, sparse sign embeddings are often the fastest sketching matrix by a wide margin.

Drawbacks. To be fast, sparse sign embeddings require a good sparse matrix library. They require generating and storing roughly $\zeta n$ random numbers, higher than SRTTs (roughly $n$ numbers) but much less than Gaussian embeddings ($dn$ numbers). In theory, the sketching dimension should be chosen to be $d \approx k\log k$ and the sparsity should be set to $\zeta \approx \log k$; the theoretically sanctioned sketching dimension (at least according to existing theory) is larger than for a Gaussian sketch. In practice, we can often get away with a sketching dimension $d$ proportional to $k$ and a constant sparsity level such as $\zeta = 8$.

### Summary

Using either SRTTs or sparse sign embeddings, sketching a vector of length $n$ down to $d$ dimensions requires only $\mathcal{O}(n)$ to $\mathcal{O}(n\log n)$ operations. To apply a sketch to an entire $n\times k$ matrix $A$ thus requires roughly $\mathcal{O}(nk)$ to $\mathcal{O}(nk\log n)$ operations. Therefore, sketching offers the promise of speeding up linear algebraic computations involving $A$, which typically take $\mathcal{O}(nk^2)$ operations.

## How Can You Use Sketching?

The simplest way to use sketching is to first apply the sketch to dimensionality-reduce all of your data and then apply a standard algorithm to solve the problem using the reduced data. This approach to using sketching is called sketch-and-solve.

As an example, let’s apply sketch-and-solve to the least-squares problem:

(2)   $\min_{x} \|b - Ax\|$

We assume this problem is highly overdetermined, with $A$ having many more rows $n$ than columns $k$.

To solve this problem with sketch-and-solve, generate a good sketching matrix $S$ for the set $\operatorname{col}(\begin{bmatrix} A & b \end{bmatrix})$. Applying $S$ to our data $A$ and $b$, we get a dimensionality-reduced least-squares problem

(3)   $\min_{\hat{x}} \|Sb - SA\hat{x}\|$

The solution $\hat{x}$ is the sketch-and-solve solution to the least-squares problem, which we can use as an approximate solution to the original least-squares problem.

Least-squares is just one example of the sketch-and-solve paradigm. We can also use sketching to accelerate other algorithms. For instance, we could apply sketch-and-solve to clustering. To cluster data points $x_1,\ldots,x_p$, first apply sketching to obtain $Sx_1,\ldots,Sx_p$ and then apply an out-of-the-box clustering algorithm like k-means to the sketched data points.
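The least-squares instance of sketch-and-solve fits in a few lines of NumPy (an illustration added here, with arbitrary sizes and a Gaussian sketch for simplicity):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, d = 4000, 50, 500

A = rng.standard_normal((n, k))
b = rng.standard_normal(n)

# Sketch-and-solve: sketch A and b, then solve the small least-squares problem
S = rng.standard_normal((d, n)) / np.sqrt(d)
x_sketch = np.linalg.lstsq(S @ A, S @ b, rcond=None)[0]

# Reference: solve the full problem directly
x_opt = np.linalg.lstsq(A, b, rcond=None)[0]

res_sketch = np.linalg.norm(b - A @ x_sketch)
res_opt = np.linalg.norm(b - A @ x_opt)
print(res_sketch / res_opt)  # modestly larger than 1: near-optimal residual
```

The sketched problem has only 500 rows instead of 4000, yet its solution achieves a residual within a small factor of optimal, foreshadowing the discussion below.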

## Does Sketching Work?

Most often, when sketching critics say “sketching doesn’t work”, what they mean is “sketch-and-solve doesn’t work”.

To address this question in a more concrete setting, let’s go back to the least-squares problem (2). Let $x$ denote the optimal least-squares solution and let $\hat{x}$ be the sketch-and-solve solution (3). Then, using the distortion condition (1), one can show that

$$\|b - A\hat{x}\| \le \frac{1+\varepsilon}{1-\varepsilon}\, \|b - Ax\|.$$

If we use a sketching matrix with a distortion of $\varepsilon = 1/3$, then this bound tells us that

(4)   $\|b - A\hat{x}\| \le 2\|b - Ax\|$

Is this a good result or a bad result? Ultimately, it depends. In some applications, the quality of a putative least-squares solution $\hat{x}$ can be assessed from the residual norm $\|b - A\hat{x}\|$. For such applications, the bound (4) ensures that $\|b - A\hat{x}\|$ is at most twice $\|b - Ax\|$. Often, this means $\hat{x}$ is a pretty decent approximate solution to the least-squares problem.

For other problems, the appropriate measure of accuracy is the so-called forward error $\|x - \hat{x}\|$, measuring how close $\hat{x}$ is to $x$. For these cases, it is possible that $\|x - \hat{x}\|$ might be large even though the residuals are comparable (4).

Let’s see an example, using the MATLAB code from my paper:

[A, b, x, r] = random_ls_problem(1e4, 1e2, 1e8, 1e-4); % Random LS problem
S = sparsesign(4e2, 1e4, 8); % Sparse sign embedding
sketch_and_solve = (S*A) \ (S*b); % Sketch-and-solve
direct = A \ b; % MATLAB mldivide

Here, we generate a random least-squares problem of size 10,000 by 100 (with condition number $10^8$ and residual norm $10^{-4}$). Then, we generate a sparse sign embedding of dimension $d = 400$ with sparsity $\zeta = 8$ (corresponding to a distortion of roughly $\varepsilon \approx 1/2$). Then, we compute the sketch-and-solve solution and, as reference, a “direct” solution by MATLAB’s \.

We compare the quality of the sketch-and-solve solution to the direct solution, using both the residual and forward error:

fprintf('Residuals: sketch-and-solve %.2e, direct %.2e, optimal %.2e\n',...
norm(b-A*sketch_and_solve), norm(b-A*direct), norm(r))
fprintf('Forward errors: sketch-and-solve %.2e, direct %.2e\n',...
norm(x-sketch_and_solve), norm(x-direct))

Here’s the output:

Residuals: sketch-and-solve 1.13e-04, direct 1.00e-04, optimal 1.00e-04
Forward errors: sketch-and-solve 1.06e+03, direct 8.08e-07

The sketch-and-solve solution has a residual norm of $1.13\times 10^{-4}$, close to the direct method’s residual norm of $1.00\times 10^{-4}$. However, the forward error of sketch-and-solve is $1.06\times 10^{3}$, nine orders of magnitude larger than the direct method’s forward error of $8.08\times 10^{-7}$.

Does sketch-and-solve work? Ultimately, it’s a question of what kind of accuracy you need for your application. If a small-enough residual is all that’s needed, then sketch-and-solve is perfectly adequate. If small forward error is needed, sketch-and-solve can be quite bad.

One way sketch-and-solve can be improved is by increasing the sketching dimension $d$ and lowering the distortion $\varepsilon$. Unfortunately, improving the distortion of the sketch is expensive. Because of the relation $\varepsilon \sim 1/\sqrt{d}$, to decrease the distortion by a factor of ten requires increasing the sketching dimension $d$ by a factor of one hundred! Thus, sketch-and-solve is really only appropriate when a fairly low accuracy is acceptable.

## Iterative Sketching: Combining Sketching with Iteration

Sketch-and-solve is a fast way to get a low-accuracy solution to a least-squares problem. But it’s not the only way to use sketching for least-squares. One can also use sketching to obtain high-accuracy solutions by combining sketching with an iterative method.

There are many iterative methods for least-squares problems. Iterative methods generate a sequence of approximate solutions $x_0, x_1, x_2, \ldots$ that we hope will converge at a rapid rate to the true least-squares solution $x$.

To use sketching to solve least-squares problems iteratively, we can use the following observation:

If $S$ is a sketching matrix for $\operatorname{col}(A)$, then $(SA)^\top (SA) \approx A^\top A$.

Therefore, if we compute a QR factorization

$$SA = QR,$$

then

$$A^\top A \approx (SA)^\top (SA) = R^\top Q^\top Q R = R^\top R.$$

Notice that we used the fact that $Q^\top Q = I$ since $Q$ has orthonormal columns. The conclusion is that $A^\top A \approx R^\top R$.

Let’s use the approximation $A^\top A \approx R^\top R$ to solve the least-squares problem iteratively. Start off with the normal equations7As I’ve described in a previous post, it’s generally inadvisable to solve least-squares problems using the normal equations. Here, we’re just using the normal equations as a conceptual tool to derive an algorithm for solving the least-squares problem.

(5)   $A^\top A\, x = A^\top b$

We can obtain an approximate solution to the least-squares problem by replacing $A^\top A$ by $R^\top R$ in (5) and solving. The resulting solution is

$$x_0 = (R^\top R)^{-1} A^\top b.$$

This solution will typically not be a good solution to the least-squares problem (2), so we need to iterate. To do so, we’ll try and solve for the error $x - x_0$. To derive an equation for the error, subtract $A^\top A\, x_0$ from both sides of the normal equations (5), yielding

$$A^\top A\, (x - x_0) = A^\top (b - A x_0).$$

Now, to solve for the error, substitute $R^\top R$ for $A^\top A$ again and solve, obtaining a new approximate solution $x_1$:

$$x_1 = x_0 + (R^\top R)^{-1} A^\top (b - A x_0).$$

We can now go another step: Derive an equation for the error $x - x_1$, approximate $A^\top A \approx R^\top R$ once more, and obtain a new approximate solution $x_2$. Continuing this process, we obtain an iteration

(6)   $x_{i+1} = x_i + (R^\top R)^{-1} A^\top (b - A x_i) \quad \text{for } i = 0, 1, 2, \ldots$

This iteration is known as the iterative sketching method.8The name iterative sketching is for historical reasons. Original versions of the procedure involved taking a fresh sketch at every iteration. Later, it was realized that a single sketch suffices, albeit with a slower convergence rate. Typically, only having to sketch and QR factorize once is worth the slower convergence rate. If the distortion $\varepsilon$ is small enough, this method converges at an exponential rate, yielding a high-accuracy least-squares solution after a few iterations.
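The iteration (6) can be sketched in a few lines of NumPy (an illustration added here, with arbitrary sizes and a Gaussian sketch; the sketch dimension is chosen comfortably large so that the iteration contracts):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, d = 4000, 50, 1500

# An ill-conditioned least-squares problem with a small residual
A = rng.standard_normal((n, k)) @ np.diag(10.0 ** rng.uniform(0, 6, k))
b = A @ rng.standard_normal(k) + 1e-6 * rng.standard_normal(n)

S = rng.standard_normal((d, n)) / np.sqrt(d)  # any good sketch works here
_, R = np.linalg.qr(S @ A)                    # A'A is approximately R'R

def solve_RtR(g):
    """Apply (R'R)^{-1} via two triangular solves."""
    return np.linalg.solve(R, np.linalg.solve(R.T, g))

x = solve_RtR(A.T @ b)          # x_0, the sketch-and-solve-style start
for _ in range(40):             # iteration (6)
    x = x + solve_RtR(A.T @ (b - A @ x))

x_opt = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.linalg.norm(x - x_opt) / np.linalg.norm(x_opt))  # small forward error
```

Unlike plain sketch-and-solve, the iterates drive the forward error down to roughly the accuracy of a direct solver.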

Let’s apply iterative sketching to the example we considered above. We show the forward error of the sketch-and-solve and direct methods as horizontal dashed purple and red lines. Iterative sketching begins at roughly the forward error of sketch-and-solve, with the error decreasing at an exponential rate until it reaches that of the direct method over the course of fourteen iterations. For this problem, at least, iterative sketching gives high-accuracy solutions to the least-squares problem!

To summarize, we’ve now seen two very different ways of using sketching:

• Sketch-and-solve. Sketch the data and and solve the sketched least-squares problem (3). The resulting solution is cheap to obtain, but may have low accuracy.
• Iterative sketching. Sketch the matrix $A$ and obtain an approximation $A^\top A \approx R^\top R$. Use this approximation to produce a sequence of better-and-better least-squares solutions by the iteration (6). If we run for enough iterations, the accuracy of the iterative sketching solution can be quite high.

By combining sketching with more sophisticated iterative methods such as conjugate gradient and LSQR, we can get an even faster-converging least-squares algorithm, known as sketch-and-precondition. Here’s the same plot from above with sketch-and-precondition added; we see that sketch-and-precondition converges even faster than iterative sketching does!

“Does sketching work?” Even for a simple problem like least-squares, the answer is complicated:

A direct use of sketching (i.e., sketch-and-solve) leads to a fast, low-accuracy solution to least-squares problems. But sketching can achieve much higher accuracy for least-squares problems by combining sketching with an iterative method (iterative sketching and sketch-and-precondition).

We’ve focused on least-squares problems in this section, but these conclusions could hold more generally. If “sketching doesn’t work” in your application, maybe it would if it was combined with an iterative method.

## Just How Accurate Can Sketching Be?

We left our discussion of sketching-plus-iterative-methods in the previous section on a positive note, but there is one last lingering question that remains to be answered. We stated that iterative sketching (and sketch-and-precondition) converge at an exponential rate. But our computers store numbers to only so much precision; in practice, the accuracy of an iterative method has to saturate at some point.

An (iterative) least-squares solver is said to be forward stable if, when run for a sufficient number of iterations, the final accuracy is comparable to the accuracy of a standard direct method for the least-squares problem like MATLAB’s \ command or Python’s scipy.linalg.lstsq. Forward stability is not about speed or rate of convergence but about the maximum achievable accuracy.

The stability of sketch-and-precondition was studied in a recent paper by Meier, Nakatsukasa, Townsend, and Webb. They demonstrated that, with the initial iterate $x_0 = 0$, sketch-and-precondition is not forward stable. The maximum achievable accuracy was worse than standard solvers by orders of magnitude! Maybe sketching doesn’t work after all?

Fortunately, there is good news:

• The iterative sketching method is provably forward stable. This result is shown in my newly released paper; check it out if you’re interested!
• If we use the sketch-and-solve method as the initial iterate for sketch-and-precondition, then sketch-and-precondition appears to be forward stable in practice. No theoretical analysis supporting this finding is known at present.9For those interested, neither iterative sketching nor sketch-and-precondition are backward stable, which is a stronger stability guarantee than forward stability. Fortunately, forward stability is a perfectly adequate stability guarantee for many—but not all—applications.

These conclusions are pretty nuanced. To see what’s going on, it can be helpful to look at a graph:10For another randomly generated least-squares problem of the same size, condition number, and residual.

The performance of different methods can be summarized as follows: Sketch-and-solve can have very poor forward error. Sketch-and-precondition with the zero initialization is better, but still much worse than the direct method. Iterative sketching and sketch-and-precondition with the sketch-and-solve initialization fare much better, eventually achieving an accuracy comparable to the direct method.

Put more simply, appropriately implemented, sketching works after all!

## Conclusion

Sketching is a computational tool, just like the fast Fourier transform or the randomized SVD. Sketching can be used effectively to solve some problems. But, like any computational tool, sketching is not a silver bullet. Sketching allows you to dimensionality-reduce matrices and vectors, but it distorts them by an appreciable amount. Whether or not this distortion is something you can live with depends on your problem (how much accuracy do you need?) and how you use the sketch (sketch-and-solve or with an iterative method).

# Markov Musings 4: Should You Be Lazy?

This post is part of a series, Markov Musings, about the mathematical analysis of Markov chains. See here for the first post in the series.

In the previous post, we proved convergence results for a reversible Markov chain, bounding how fast its distribution approaches stationarity in terms of the eigenvalues of the transition matrix.

Here, $\rho_t$ denotes the distribution of the chain at time $t$, $\pi$ denotes the stationary distribution, the divergence measures how far $\rho_t$ is from $\pi$, and $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n$ denote the decreasingly ordered eigenvalues of the Markov transition matrix $P$. To bound the rate of convergence to stationarity, we therefore must upper bound $\lambda_2$ and lower bound $\lambda_n$.

Having to prove separate bounds for two eigenvalues is inconvenient. In the next post, we will develop tools to bound $\lambda_2$. But what should we do about $\lambda_n$? Fortunately, there is a trick.

## It Pays to be Lazy

Call a Markov chain lazy if, at every step, the chain has at least a 50% chance of staying put. That is, $p_{ii} \ge 1/2$ for every state $i$.

Any Markov chain can be made into a lazy chain. At every step of the chain, flip a coin. If the coin is heads, perform a step of the Markov chain as normal, drawing the next step randomly according to the transition matrix . If the coin comes up tails, do nothing and stay put.

Algebraically, the lazy chain described in the previous paragraph corresponds to replacing the original transition matrix $P$ with the lazy transition matrix $(I+P)/2$, where $I$ denotes the identity matrix. It is easy to see that the stationary distribution $\pi$ for $P$ is also a stationary distribution for $(I+P)/2$:

$$\pi^\top \cdot \frac{I+P}{2} = \frac{\pi^\top + \pi^\top P}{2} = \frac{\pi^\top + \pi^\top}{2} = \pi^\top.$$

The eigenvalues of the lazy transition matrix are

$$\frac{1+\lambda_i}{2} \quad \text{for } i = 1,\ldots,n.$$

In particular, since all of the original eigenvalues satisfy $\lambda_i \ge -1$, all of the eigenvalues of the lazy chain are $\ge 0$. Thus, the smallest eigenvalue of $(I+P)/2$ is always nonnegative and thus, for the lazy chain, the convergence is controlled only by the second eigenvalue $(1+\lambda_2)/2$:
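This eigenvalue shift is easy to verify numerically. The following Python sketch (an illustration added here) builds a small reversible chain as a random walk on a weighted graph and checks that lazification makes every eigenvalue nonnegative while preserving the stationary distribution:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

# Random walk on a weighted complete graph: symmetric weights => reversible
W = rng.uniform(0.1, 1.0, (n, n))
W = (W + W.T) / 2
P = W / W.sum(axis=1, keepdims=True)     # transition matrix
pi = W.sum(axis=1) / W.sum()             # stationary distribution

P_lazy = (np.eye(n) + P) / 2             # lazy transition matrix

eigs = np.sort(np.linalg.eigvals(P).real)
eigs_lazy = np.sort(np.linalg.eigvals(P_lazy).real)

print(eigs)       # eigenvalues lie in [-1, 1]
print(eigs_lazy)  # equal to (1 + eigs) / 2, hence all nonnegative
```

Because the map $\lambda \mapsto (1+\lambda)/2$ is increasing, the sorted eigenvalue lists match up term by term.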

Since it can be easier to establish bounds on the second eigenvalue than on both the second and last, it can be convenient—for theoretical purposes especially—to work with lazy chains, using the construction above to enforce laziness if necessary.

## Should You be Lazy?

We’ve seen one theoretical argument for why it pays to use lazy Markov chains. But should we use lazy Markov chains in practice? The answer ultimately depends on how we want to use the Markov chain.

Here are two very different ways we could use a Markov chain:

• One-shot sampling. Run the Markov chain once for a fixed number of steps $t$, and use the result $x_t$ as an approximate sample from $\pi$.

For this use case, it is important that the output is genuinely a sample from $\pi$, and the possibility of large negative eigenvalues can significantly degrade the convergence. In the extreme case where $-1$ is an eigenvalue, the chain will fail to converge to stationarity at all. For this application, the lazy approach may make sense.

• Averaging. Another way we can use a Markov chain is to compute the average value of a function $f$ over a random variable drawn from the stationary distribution:

$$\mathbb{E}_{X\sim\pi}[f(X)].$$

To do this, we approximate this expectation by the time average of the value of $f$ over the first $t$ values of the chain:

$$\frac{1}{t}\sum_{s=0}^{t-1} f(x_s).$$

As we shall see, this averaging process converges at an (asymptotically) slower rate for the lazy chain than the original chain. Therefore, for this application, we should typically just run the chain as-is, and not worry about making it lazy.

## The Case Against Laziness

To see why being lazy hurts us for the averaging problem, we will use the Markov chain central limit theorem. As with many random averaging processes, we expect that in the limit of a large number of steps $t$, the Markov chain average

$$\frac{1}{t}\sum_{s=0}^{t-1} f(x_s)$$

will become approximately normally distributed. This is the content of the Markov chain central limit theorem (CLT):

Informal theorem (Markov chain central limit theorem). Let $x_0, x_1, x_2, \ldots$ denote the states of a Markov chain initialized in the stationary distribution $\pi$. For a large number of steps $t$, the average $\frac{1}{t}\sum_{s=0}^{t-1} f(x_s)$ is approximately normally distributed with mean $\mathbb{E}_\pi[f]$ and variance $\sigma^2/t$, where

(1)   $\sigma^2 = \operatorname{Var}_\pi[f] + 2\sum_{s=1}^{\infty} \operatorname{Cov}\big(f(x_0), f(x_s)\big)$

The Markov chain CLT shows that the variance of the Markov chain average depends not only on the variance of $f$ in the stationary distribution, but also the covariances between $f(x_0)$ and later values $f(x_s)$. The faster the covariances decrease, the smaller $\sigma^2$ will be and thus the smaller the error for the Markov chain average.

More formally, the Markov chain CLT is given by the following result:

Theorem (Markov chain central limit theorem). As $t \to \infty$,

$$\sqrt{t}\left(\frac{1}{t}\sum_{s=0}^{t-1} f(x_s) - \mathbb{E}_\pi[f]\right)$$

converges in distribution to a normal random variable with mean zero and variance $\sigma^2$, as defined in (1).

To compare the lazy and non-lazy chains, let’s compute the variance parameter $\sigma^2$ in the Markov chain CLT in terms of the eigenvalues of the chain. For the remainder of this post, let $f$ be any function on the state space.

### Step 1: Spectral Decomposition

Recall that, by the spectral theorem, the transition matrix $P$ has real eigenvalues $\lambda_1 \ge \cdots \ge \lambda_n$ and eigenvectors $\varphi_1,\ldots,\varphi_n$ that are orthonormal in the $\pi$-inner product:

$$\langle \varphi_i, \varphi_j \rangle_\pi = \begin{cases} 1, & i = j, \\ 0, & i \ne j. \end{cases}$$

As in the previous post, we think of vectors such as $\varphi_i$ as defining a function $\varphi_i : \{1,\ldots,n\} \to \mathbb{R}$. Thus, we can expand the function $f$ using the eigenvectors

(2)   $f = \sum_{i=1}^n a_i \varphi_i \quad \text{with } a_i = \langle f, \varphi_i \rangle_\pi$

The first eigenvector $\varphi_1$ is the vector of all ones (or, equivalently, the function that outputs $1$ for every input).

### Step 2: Applying the Transition Matrix to a Function

The transition matrix $P$ is defined so that if the Markov chain has probability distribution $\rho$ at time $t$, then the chain has distribution $\rho P$ at time $t+1$. In other words, multiplying by $P$ on the right advances a distribution forward one step in time. This leads us to ask: what is the interpretation of multiplying a function by $P$ on the left? That is, is there an interpretation of the matrix–vector product $Pf$?

Indeed, there is such an interpretation: The $i$th coordinate of $Pf$ is the expectation of $f(x_1)$ conditional on the chain starting at $x_0 = i$:

$$(Pf)(i) = \mathbb{E}[f(x_1) \mid x_0 = i].$$

Similarly,

(3)   $(P^s f)(i) = \mathbb{E}[f(x_s) \mid x_0 = i]$

There’s a straightforward proof of this fact. Let $\delta_i$ denote a probability distribution which places 100% of the probability mass on the single site $i$. The $i$th entry of $P^s f$ is

$$(P^s f)(i) = \delta_i^\top (P^s f) = (\delta_i^\top P^s)\, f.$$

We know that $\delta_i^\top P^s$ is the distribution of the state of the Markov chain after $s$ steps when initialized in the initial distribution $\delta_i$. Thus,

$$(\delta_i^\top P^s)\, f = \sum_{j=1}^n \mathbb{P}\{x_s = j \mid x_0 = i\}\, f(j) = \mathbb{E}[f(x_s) \mid x_0 = i].$$

This proves the formula (3).

### Step 3: Computing the Covariance

With the help of the formula for $P^s f$ and the spectral decomposition of $f$, we are ready to compute the covariances appearing in (1). Let $s \ge 1$. Then

(4)   $\operatorname{Cov}\big(f(x_0), f(x_s)\big) = \mathbb{E}[f(x_0) f(x_s)] - \mathbb{E}[f(x_0)]\,\mathbb{E}[f(x_s)]$

Let’s first compute $\mathbb{E}[f(x_0)]$ and $\mathbb{E}[f(x_s)]$. Since $\varphi_1$ is the vector/function of all ones, we have

$$\mathbb{E}[f(x_0)] = \mathbb{E}_{x_0 \sim \pi}[f(x_0)] = \langle f, \varphi_1 \rangle_\pi = a_1.$$

For the last equality, we use the spectral decomposition (2) along with the orthonormality of the eigenvectors $\varphi_i$.

Since we assume the chain is initialized in the stationary distribution $\pi$, it remains in stationarity at time $s$, $x_s \sim \pi$, so we have $\mathbb{E}[f(x_s)] = a_1$ as well.

Now, let’s compute $\mathbb{E}[f(x_0) f(x_s)]$. Use the law of total expectation to write

$$\mathbb{E}[f(x_0) f(x_s)] = \mathbb{E}\big[f(x_0)\cdot \mathbb{E}[f(x_s) \mid x_0]\big].$$

Now, invoke (3) and the definition of the $\pi$-inner product to write

$$\mathbb{E}\big[f(x_0)\cdot \mathbb{E}[f(x_s) \mid x_0]\big] = \mathbb{E}_{x_0\sim\pi}\big[f(x_0)\,(P^s f)(x_0)\big] = \langle f, P^s f \rangle_\pi.$$

Finally, using the spectral decomposition, we obtain

$$\langle f, P^s f \rangle_\pi = \sum_{i=1}^n \lambda_i^s\, a_i^2.$$

Combining this formula with our earlier observation that $\mathbb{E}[f(x_0)] = \mathbb{E}[f(x_s)] = a_1$ and plugging into (4), we obtain

(5)   $\operatorname{Cov}\big(f(x_0), f(x_s)\big) = \sum_{i=1}^n \lambda_i^s a_i^2 - a_1^2 = \sum_{i=2}^n \lambda_i^s a_i^2$

For the second equality, we use that the largest eigenvalue is $\lambda_1 = 1$, so the $i = 1$ term $\lambda_1^s a_1^2 = a_1^2$ entirely drops out of the covariance.

### Step 4: Wrapping it Up

With the formula (5) for the covariance in hand, we are ready to evaluate the variance parameter $\sigma^2$ in the Markov chain CLT. First, note that the variance is

$$\operatorname{Var}_\pi[f] = \langle f, f \rangle_\pi - (\mathbb{E}_\pi[f])^2 = \sum_{i=2}^n a_i^2.$$

Therefore, combining this formula for the variance and the formula (5) for the covariance, we obtain

$$\sigma^2 = \sum_{i=2}^n a_i^2 + 2\sum_{s=1}^{\infty}\sum_{i=2}^n \lambda_i^s a_i^2 = \sum_{i=2}^n \left(1 + 2\sum_{s=1}^{\infty}\lambda_i^s\right) a_i^2.$$

Now, apply the formula for the sum of an infinite geometric series to obtain

(6)   $\sigma^2 = \sum_{i=2}^n \left(1 + \frac{2\lambda_i}{1-\lambda_i}\right) a_i^2 = \sum_{i=2}^n \frac{1+\lambda_i}{1-\lambda_i}\, a_i^2$

### Conclusion: Laziness is (Asymptotically) Bad for Averaging

Before we get to the conclusions, let’s summarize where we are. We are seeking to use the Markov chain average

$$\frac{1}{t}\sum_{s=0}^{t-1} f(x_s)$$

as an approximation to the stationary mean $\mathbb{E}_\pi[f]$. The Markov chain central limit theorem shows that, for a large number of steps $t$, the error is approximately normally distributed with mean zero and variance $\sigma^2/t$. So to obtain accurate estimates, we want $\sigma^2$ to be as small as possible.

After some work, we were able to derive a formula (6) for $\sigma^2$ in terms of the eigenvalues $\lambda_i$ of the transition matrix $P$. The formula for $\sigma^2$ for the lazy chain is identical, except with each eigenvalue $\lambda_i$ replaced by $(1+\lambda_i)/2$. Thus, we have

$$\sigma^2_{\rm lazy} = \sum_{i=2}^n \frac{1 + (1+\lambda_i)/2}{1 - (1+\lambda_i)/2}\, a_i^2 = \sum_{i=2}^n \frac{3+\lambda_i}{1-\lambda_i}\, a_i^2.$$

From these formulas, it is clear that the lazy Markov chain has a larger variance parameter $\sigma^2_{\rm lazy} \ge \sigma^2$ and is thus less accurate than the non-lazy Markov chain, no matter what the eigenvalues $\lambda_i$ are.

To compare the non-lazy and lazy chains in another way, consider the plot below. The blue solid line shows the amplification factor $(1+\lambda)/(1-\lambda)$ of an eigenvalue $\lambda$, which represents the amount by which the squared coefficient $a_i^2$ is scaled in $\sigma^2$. In the red dashed line, we see the corresponding amplification factor $(3+\lambda)/(1-\lambda)$ for the corresponding eigenvalue of the lazy chain. We see that at every value of $\lambda$, the lazy chain has a higher amplification factor than the original chain.
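The comparison in the plot can be checked numerically with a short Python sketch (an illustration added here), using the amplification factors $(1+\lambda)/(1-\lambda)$ and $(3+\lambda)/(1-\lambda)$ derived above:

```python
import numpy as np

lam = np.linspace(-0.99, 0.99, 199)

amp = (1 + lam) / (1 - lam)       # amplification factor, original chain
amp_lazy = (3 + lam) / (1 - lam)  # same factor after lam -> (1 + lam)/2

# Laziness inflates every term of sigma^2 ...
print((amp_lazy > amp).all())
# ... while for the original chain, negative eigenvalues shrink their terms
print((amp[lam < 0] < 1).all())
```

Indeed, $(3+\lambda)/(1-\lambda) - (1+\lambda)/(1-\lambda) = 2/(1-\lambda) > 0$ for every eigenvalue $\lambda < 1$.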

Remember that our original motivation for using the lazy chain was to remove the possibility of slow convergence of the chain to stationarity because of negative eigenvalues. But for the averaging application, negative eigenvalues are great. The process of Markov chain averaging shrinks the influence of negative eigenvalues on $\sigma^2$, whereas positive eigenvalues are amplified. For the averaging application, negative eigenvalues for the chain are a feature, not a bug.

## Moral of the Story

Much of Markov chain theory, particularly in theoretical parts of computer science, mathematics, and machine learning, is centered around proving convergence to stationarity. Negative eigenvalues are a genuine obstruction to convergence to stationarity, and using the lazy chain in practice may be a sensible idea if one truly needs a sample from the stationary distribution in a given application.

But one-shot sampling is not the only or even the most common use for Markov chains in computational practice. For other applications, such as averaging, negative eigenvalues are actually a help. Using the lazy chain in practice for these problems would be a poor idea.

To me, the broader lesson in this story is that, as applied mathematicians, it can be inadvisable to become too fixated on one particular mathematical benchmark for designing and analyzing algorithms. Proving rapid convergence to stationarity with respect to the total variation distance is one nice way to analyze Markov chains. But it isn’t the only way, and chains not possessing this property because of large negative eigenvalues may actually be better in practice for some problems. Ultimately, applied mathematics should, at least in part, be guided by applications, and paying attention to how algorithms are used in practice should inform how we build and evaluate new ones.

# Markov Musings 3: Spectral Theory

This post is part of a series, Markov Musings, about the mathematical analysis of Markov chains. See here for the first post in the series.

In the previous two posts, we proved the fundamental theorem of Markov chains using couplings and discussed the use of couplings to bound the mixing time, the time required for the chain to mix to near-stationarity.

In this post, we will continue this discussion by discussing spectral methods for understanding the convergence of Markov chains.

## Mathematical Setting

In this post, we will study a reversible Markov chain on the state space $\{1,\ldots,n\}$ with transition probability matrix $P$ and stationary distribution $\pi$. Our goal will be to use the eigenvalues and eigenvectors of the matrix $P$ to understand the properties of the Markov chain.

Throughout our discussion, it will be helpful to treat vectors and functions as being one and the same, and we will use both $f_i$ and $f(i)$ to denote the $i$th entry of the vector $f$ (aka the evaluation of the function $f$ at $i$). By adopting this perspective, a vector $f$ is not merely a list of numbers, but instead labels each state with a numeric value.

Given a function/vector $f$, we let $\mathbb{E}_\pi[f]$ denote the expected value of $f(X)$ where $X$ is drawn from $\pi$:

$$\mathbb{E}_\pi[f] = \sum_{i=1}^n f(i)\,\pi_i.$$

The variance is defined similarly:

$$\operatorname{Var}_\pi[f] = \sum_{i=1}^n \big(f(i) - \mathbb{E}_\pi[f]\big)^2\,\pi_i.$$

As a final piece of notation, we let $\delta_i$ denote a probability distribution which assigns 100% probability to outcome $i$.

## Spectral Theory

The eigenvalues and eigenvectors of general square matrices are ill-behaved creatures. Indeed, a general matrix with real entries can have complex-valued eigenvalues and fail to possess a full suite of linearly independent eigenvectors. The situation is dramatically better for symmetric matrices which obey the spectral theorem:

Theorem (spectral theorem for real symmetric matrices). An $n\times n$ real symmetric matrix $A$ (i.e., one satisfying $A = A^\top$) has $n$ real eigenvalues $\lambda_1,\ldots,\lambda_n$ and $n$ orthonormal eigenvectors $v_1,\ldots,v_n$.

Unfortunately for us, the transition matrix for a reversible Markov chain is not always symmetric. But despite this, there’s a surprise: always has real eigenvalues. This leads us to ask:

Why are the eigenvalues of the transition matrix real?

To answer this question, we will need to develop a more general version of the spectral theorem and use our standing assumption that the Markov chain is reversible.

### The Transpose

In our quest to develop a general version of the spectral theorem, we look more deeply into the hypothesis of the theorem, namely that $A$ is equal to its transpose $A^\top$. Let’s first ask: What does the transpose mean?

Recall that $\mathbb{R}^n$ is equipped with the standard inner product, sometimes called the Euclidean or dot product. We denote this product by $\langle \cdot, \cdot\rangle$ and it is defined as

$$\langle u, v\rangle = \sum_{i=1}^n u_i v_i.$$

We can think of the standard inner product $\langle u, v\rangle$ as defining the amount of the vector $u$ that points in the direction of $v$.

The transpose $A^\top$ of a matrix $A$ is closely related to the standard inner product. Specifically, $A^\top$ is the (unique) matrix satisfying the adjoint property:

$$\langle Au, v\rangle = \langle u, A^\top v\rangle \quad \text{for all } u, v \in \mathbb{R}^n.$$

So the amount of $Au$ in the direction of $v$ is the same as the amount of $u$ in the direction of $A^\top v$.

### The Adjoint and the General Spectral Theorem

Since the transpose is intimately related to the standard inner product on $\mathbb{R}^n$, it is natural to wonder if non-standard inner products on $\mathbb{R}^n$ give rise to non-standard versions of the transpose. This idea proves to be true.

Definition (adjoint). Let $\langle\cdot,\cdot\rangle_\bullet$ be any inner product on $\mathbb{R}^n$ and let $A$ be an $n\times n$ matrix. The adjoint of $A$ with respect to the inner product $\langle\cdot,\cdot\rangle_\bullet$ is the (unique) matrix $A^*$ such that

$$\langle Au, v\rangle_\bullet = \langle u, A^* v\rangle_\bullet \quad \text{for all } u, v \in \mathbb{R}^n.$$

The spectral theorem naturally extends to the more abstract setting of adjoints with respect to non-standard inner products:

Theorem (general spectral theorem). Let $A$ be an $n\times n$ matrix which is self-adjoint with respect to an inner product $\langle\cdot,\cdot\rangle_\bullet$. Then $A$ has $n$ real eigenvalues $\lambda_1,\ldots,\lambda_n$ and eigenvectors $v_1,\ldots,v_n$ which are orthonormal:

$$\langle v_i, v_j\rangle_\bullet = \begin{cases} 1, & i = j, \\ 0, & i \ne j. \end{cases}$$

The general spectral theorem offers us a path to understand our observation from above that the eigenvalues of $P$ are always real. Namely, we ask:

Is there an inner product under which $P$ is self-adjoint?

Fortunately, the answer is yes. Define the $\pi$-inner product

$$\langle f, g\rangle_\pi = \sum_{i=1}^n \pi(i)\, f(i)\, g(i).$$

This inner product is very natural. If $f$ and $g$ are mean-zero ($\mathbb{E}_\pi[f] = \mathbb{E}_\pi[g] = 0$), it reports the covariance of $f(x)$ and $g(x)$ where $x$ is drawn from $\pi$.

Let us compute the adjoint of the transition matrix $P$ in the $\pi$-inner product $\langle\cdot,\cdot\rangle_\pi$:

$$\langle Pf, g\rangle_\pi = \sum_{i=1}^n \pi(i)\,(Pf)(i)\,g(i) = \sum_{i=1}^n \sum_{j=1}^n \pi(i)\,P(i,j)\,f(j)\,g(i).$$

Recall that we have assumed our Markov chain is reversible. Thus, the detailed balance condition

$$\pi(i)\,P(i,j) = \pi(j)\,P(j,i) \quad \text{for all } i, j$$

implies that

$$\langle Pf, g\rangle_\pi = \sum_{i=1}^n \sum_{j=1}^n \pi(j)\,P(j,i)\,f(j)\,g(i) = \sum_{j=1}^n \pi(j)\,f(j)\,(Pg)(j) = \langle f, Pg\rangle_\pi.$$

Thus, we conclude that $P$ is self-adjoint in the $\pi$-inner product.
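This self-adjointness is easy to check numerically. The sketch below uses the same invented three-state reversible chain as before and verifies $\langle Pf, g\rangle_\pi = \langle f, Pg\rangle_\pi$ for random $f$ and $g$; equivalently, detailed balance says that $\operatorname{diag}(\pi)\,P$ is a symmetric matrix:

```python
import numpy as np

# Illustrative reversible chain (invented for this sketch).
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
pi = np.array([0.25, 0.50, 0.25])

def ip_pi(f, g):
    """The pi-inner product <f, g>_pi = sum_i pi(i) f(i) g(i)."""
    return np.sum(pi * f * g)

rng = np.random.default_rng(0)
f = rng.standard_normal(3)
g = rng.standard_normal(3)

lhs = ip_pi(P @ f, g)   # <Pf, g>_pi
rhs = ip_pi(f, P @ g)   # <f, Pg>_pi
print(np.isclose(lhs, rhs))                               # True
# Equivalent matrix statement of detailed balance:
print(np.allclose(np.diag(pi) @ P, (np.diag(pi) @ P).T))  # True
```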

We cannot emphasize enough that the reversibility assumption is crucial to ensure that $P$ is self-adjoint. In fact,

Theorem (reversibility and self-adjointness). The transition matrix $P$ of a general Markov chain with stationary distribution $\pi$ is self-adjoint in the $\pi$-inner product if and only if the chain is reversible.

### Spectral Theorem for Markov Chains

Since $P$ is self-adjoint in the $\pi$-inner product, the general spectral theorem implies the following:

Corollary (spectral theorem for reversible Markov chains). The reversible Markov transition matrix $P$ has $n$ real eigenvalues $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n$ associated with eigenvectors $v_1,\ldots,v_n$ which are orthonormal in the $\pi$-inner product:

$$\langle v_i, v_j\rangle_\pi = \begin{cases} 1, & i = j, \\ 0, & i \ne j. \end{cases}$$

Since $P$ is a reversible Markov transition matrix—not just any self-adjoint matrix—the eigenvalues of $P$ satisfy additional properties:

• Bounded. All of the eigenvalues of $P$ lie between $-1$ and $1$.1Here’s an argument. The vectors $\delta_1,\ldots,\delta_n$ span all of $\mathbb{R}^n$, where $\delta_i$ denotes a vector with $1$ in position $i$ and $0$ elsewhere. Then, by the fundamental theorem of Markov chains, $\delta_i P^t$ converges to $\pi$ as $t\to\infty$ for every $i$. In particular, for any vector $v$, $v P^t$ converges as $t\to\infty$. Thus, since $v P^t$ converges for every vector $v$, all of the eigenvalues must be $\le 1$ in magnitude.
• Eigenvalue 1. $\lambda_1 = 1$ is an eigenvalue of $P$ with eigenvector $\mathbf{1}$, where $\mathbf{1}$ is a vector of all ones.

For a primitive chain, we also have the property:

• Contractive. For a primitive chain, all eigenvalues other than $\lambda_1 = 1$ have magnitude strictly less than $1$.2This follows since $\sigma P^t \to \pi$ for every initial distribution $\sigma$, so $\pi$ must be the unique left eigenvector with eigenvalue of modulus $1$.
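These three properties can be checked numerically. For a reversible chain, the similarity transform $D^{1/2} P D^{-1/2}$ with $D = \operatorname{diag}(\pi)$ is symmetric (by detailed balance) and has the same eigenvalues as $P$, so a symmetric eigensolver applies. The three-state chain below is invented for illustration:

```python
import numpy as np

P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
pi = np.array([0.25, 0.50, 0.25])

# S = D^{1/2} P D^{-1/2} is symmetric and similar to P,
# so it shares P's eigenvalues (which are therefore real).
d = np.sqrt(pi)
S = d[:, None] * P / d[None, :]
eigs = np.sort(np.linalg.eigvalsh(S))       # ascending order

print(eigs)                                  # [0.  0.5 1. ]
print(np.all(np.abs(eigs) <= 1 + 1e-12))     # bounded:     True
print(np.isclose(eigs[-1], 1.0))             # eigenvalue 1: True
print(np.all(np.abs(eigs[:-1]) < 1))         # contractive:  True
```

For this chain the eigenvalues are exactly $1$, $1/2$, and $0$; the chain is primitive (irreducible with positive diagonal), so the contractive property applies.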

## Distance Between Probability Distributions Redux

In the previous post, we used the total variation distance to compare probability distributions:

Definition (total variation distance). The total variation distance between probability distributions $\sigma$ and $\pi$ is

$$\|\sigma - \pi\|_{\rm TV} = \max_{A \subseteq \{1,\ldots,n\}} |\sigma(A) - \pi(A)| = \frac{1}{2} \sum_{i=1}^n |\sigma(i) - \pi(i)|.$$

The total variation distance is an “$\ell_1$” way of comparing two probability distributions since it can be computed by adding the absolute differences between the probabilities $\sigma(i)$ and $\pi(i)$ of each outcome $i$. Spectral theory plays more nicely with an “$\ell_2$” way of comparing probability distributions, which we develop now.

### Densities

Given two probability distributions $\sigma$ and $\pi$, the density of $\sigma$ with respect to $\pi$ is the function3Note that we typically associate probability distributions $\sigma$ and $\pi$ with row vectors, whereas the density $d\sigma/d\pi$ is a function which we identify with column vectors. For those interested and familiar with measure theory, here is a good reason why this makes sense. In the continuum setting, probability distributions $\sigma$ and $\pi$ are measures whereas the density remains a function, known as the Radon–Nikodym derivative $d\sigma/d\pi$. This provides a general way of figuring out which objects for finite state space Markov chains are row vectors and which are column vectors: Measures are row vectors whereas functions are column vectors. given by

$$\frac{d\sigma}{d\pi}(i) = \frac{\sigma(i)}{\pi(i)}.$$

The density satisfies the property that

$$\mathbb{E}_{x\sim\sigma}[f(x)] = \sum_{i=1}^n \sigma(i)\, f(i) = \sum_{i=1}^n \pi(i)\, \frac{d\sigma}{d\pi}(i)\, f(i) = \mathbb{E}_{x\sim\pi}\!\left[\frac{d\sigma}{d\pi}(x)\, f(x)\right] \quad \text{for any function } f. \tag{1}$$

To see why we call $d\sigma/d\pi$ a density, it may be helpful to appeal to continuous probability for a moment. If $x$ is a random variable with distribution $\sigma$, the probability density function of $x$ (with respect to the uniform distribution $\pi$) is a function $d\sigma/d\pi$ such that

$$\mathbb{E}_{x\sim\sigma}[f(x)] = \int \frac{d\sigma}{d\pi}(y)\, f(y)\, \mathrm{d}y.$$

The formula for the continuous case is exactly the same as the finite case (1) with sums $\sum$ replaced with integrals $\int$.

### The Chi-Squared Divergence

We are now ready to introduce our “$\ell_2$” way of comparing probability distributions.

Definition ($\chi^2$-divergence). The $\chi^2$-divergence of $\sigma$ with respect to $\pi$ is the variance of the density function:

$$\chi^2(\sigma \,\|\, \pi) = \operatorname{Var}_\pi\!\left[\frac{d\sigma}{d\pi}\right] = \sum_{i=1}^n \frac{(\sigma(i) - \pi(i))^2}{\pi(i)}.$$

To see the last equality is a valid formula for $\chi^2(\sigma \,\|\, \pi)$, note that $\mathbb{E}_\pi[d\sigma/d\pi] = 1$ by (1), so

$$\operatorname{Var}_\pi\!\left[\frac{d\sigma}{d\pi}\right] = \mathbb{E}_\pi\!\left[\left(\frac{d\sigma}{d\pi} - 1\right)^2\right] = \sum_{i=1}^n \pi(i)\left(\frac{\sigma(i)}{\pi(i)} - 1\right)^2 = \sum_{i=1}^n \frac{(\sigma(i) - \pi(i))^2}{\pi(i)}. \tag{2}$$

### Relations Between Distance Measures

The $\chi^2$ divergence always gives an upper bound on the total variation distance. Indeed, first note that we can express the total variation distance as

$$\|\sigma - \pi\|_{\rm TV} = \frac{1}{2} \sum_{i=1}^n |\sigma(i) - \pi(i)| = \frac{1}{2}\, \mathbb{E}_\pi\!\left|\frac{d\sigma}{d\pi} - 1\right|.$$

Thus, using Lyapunov’s inequality $\mathbb{E}_\pi|Z| \le \big(\mathbb{E}_\pi[Z^2]\big)^{1/2}$, we conclude

$$\|\sigma - \pi\|_{\rm TV} \le \frac{1}{2} \sqrt{\chi^2(\sigma \,\|\, \pi)}. \tag{3}$$
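Both formulas for the $\chi^2$-divergence in (2) and the bound (3) are easy to check numerically; the distributions below are invented for illustration:

```python
import numpy as np

pi = np.array([0.25, 0.50, 0.25])       # reference distribution
sigma = np.array([0.50, 0.30, 0.20])    # an arbitrary distribution
rho = sigma / pi                        # the density d(sigma)/d(pi)

# The two formulas in (2) agree:
chi2_var = np.sum(pi * (rho - 1.0) ** 2)     # Var_pi[d(sigma)/d(pi)]
chi2_sum = np.sum((sigma - pi) ** 2 / pi)    # explicit chi^2 sum
print(np.isclose(chi2_var, chi2_sum))        # True (both equal 0.34)

# The total variation bound (3): TV <= (1/2) sqrt(chi^2)
tv = 0.5 * np.sum(np.abs(sigma - pi))        # 0.25
print(tv <= 0.5 * np.sqrt(chi2_sum))         # True
```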

## Markov Chain Convergence by Eigenvalues

Now, we prove a quantitative version of the fundamental theorem of Markov chains (for reversible processes) using spectral theory and eigenvalues:4Note that we used the fundamental theorem of Markov chains in above footnotes to prove the “bounded” and “contractive” properties, so, at present, this purported proof of the fundamental theorem would be circular. Fortunately, we can establish these two claims independently of the fundamental theorem, say, by appealing to the Perron–Frobenius theorem. Thus, this argument does give a genuine non-circular proof of the fundamental theorem.

Theorem (Markov chain convergence by eigenvalues). Let $\sigma_0, \sigma_1, \sigma_2, \ldots$ denote the distributions of the Markov chain at times $0, 1, 2, \ldots$. Then

$$\chi^2(\sigma_t \,\|\, \pi) \le \lambda_\ast^{2t}\, \chi^2(\sigma_0 \,\|\, \pi) \quad \text{for } t = 0, 1, 2, \ldots, \quad \text{where } \lambda_\ast = \max\{\lambda_2,\, |\lambda_n|\}.$$

This raises a natural question: How large can the initial divergence between $\sigma_0$ and $\pi$ be? This is answered by the next result.

Proposition (Initial distance to stationarity). For any state $i$,

$$\chi^2(\delta_i \,\|\, \pi) = \frac{1 - \pi(i)}{\pi(i)} \le \frac{1}{\pi(i)}. \tag{4}$$

For any initial distribution $\sigma_0$, we have

$$\chi^2(\sigma_0 \,\|\, \pi) \le \frac{1}{\pi_{\min}}, \quad \text{where } \pi_{\min} = \min_{1 \le i \le n} \pi(i). \tag{5}$$

The equation (4) is a direct computation:

$$\chi^2(\delta_i \,\|\, \pi) = \sum_{j=1}^n \frac{(\delta_i(j) - \pi(j))^2}{\pi(j)} = \frac{(1 - \pi(i))^2}{\pi(i)} + \sum_{j \ne i} \pi(j) = \frac{(1 - \pi(i))^2}{\pi(i)} + (1 - \pi(i)) = \frac{1 - \pi(i)}{\pi(i)}.$$

For (5), observe that $\chi^2(\sigma_0 \,\|\, \pi)$ is a convex function of the initial distribution $\sigma_0$. Therefore, its maximum value over the convex set of all probability distributions is attained at an extreme point $\sigma_0 = \delta_i$ for some $i$. Consequently, (5) follows directly from (4).

Using the previous two results and equation (3), we immediately obtain the following:

Corollary (Mixing time). When initialized from a distribution $\sigma_0$, the chain mixes to $\varepsilon$-total variation distance to stationarity,

$$\|\sigma_t - \pi\|_{\rm TV} \le \varepsilon,$$

after

$$t \ge \frac{\log\!\big(1 / (2\varepsilon\sqrt{\pi_{\min}})\big)}{\log(1/\lambda_\ast)} \quad \text{steps}. \tag{6}$$

Indeed,

$$\|\sigma_t - \pi\|_{\rm TV} \le \frac{1}{2}\sqrt{\chi^2(\sigma_t \,\|\, \pi)} \le \frac{\lambda_\ast^t}{2}\sqrt{\chi^2(\sigma_0 \,\|\, \pi)} \le \frac{\lambda_\ast^t}{2\sqrt{\pi_{\min}}}.$$

Equating the right-hand side with $\varepsilon$ and rearranging gives (6).
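The mixing-time bound (6) can be sanity-checked numerically. For the invented three-state chain used in the earlier sketches, $\lambda_\ast = 1/2$ and $\pi_{\min} = 1/4$, so (6) predicts that about $\log\!\big(1/(2\varepsilon\sqrt{\pi_{\min}})\big)/\log 2$ steps suffice:

```python
import numpy as np

P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
pi = np.array([0.25, 0.50, 0.25])
lam_star, pi_min, eps = 0.5, 0.25, 1e-3

# Number of steps predicted by (6).
t_star = int(np.ceil(np.log(1 / (2 * eps * np.sqrt(pi_min)))
                     / np.log(1 / lam_star)))

sigma = np.array([1.0, 0.0, 0.0])   # start at a point mass
for _ in range(t_star):
    sigma = sigma @ P               # distribution recurrence

tv = 0.5 * np.sum(np.abs(sigma - pi))
print(t_star, tv <= eps)            # 10 True
```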

### Proof of Markov Chain Convergence Theorem

We conclude with a proof of the Markov chain convergence result.

### Recurrence for Densities

Our first task is to derive a recurrence for the densities $d\sigma_t/d\pi$. To do this, we use the recurrence for the probability distributions:

$$\sigma_{t+1}(j) = \sum_{i=1}^n \sigma_t(i)\, P(i,j).$$

Thus,

$$\frac{d\sigma_{t+1}}{d\pi}(j) = \frac{\sigma_{t+1}(j)}{\pi(j)} = \sum_{i=1}^n \frac{\sigma_t(i)\, P(i,j)}{\pi(j)}.$$

We now invoke the detailed balance condition $\pi(i)\,P(i,j) = \pi(j)\,P(j,i)$, which implies

$$\frac{d\sigma_{t+1}}{d\pi}(j) = \sum_{i=1}^n P(j,i)\, \frac{\sigma_t(i)}{\pi(i)} = \left(P\, \frac{d\sigma_t}{d\pi}\right)(j).$$

The product $P \cdot (d\sigma_t/d\pi)$ is an ordinary matrix–vector product. Thus, we have shown

$$\frac{d\sigma_{t+1}}{d\pi} = P\, \frac{d\sigma_t}{d\pi}. \tag{7}$$
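The recurrence (7) can be verified directly for a reversible chain; the chain and the time-$t$ distribution below are invented for illustration:

```python
import numpy as np

P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
pi = np.array([0.25, 0.50, 0.25])

sigma_t = np.array([0.5, 0.3, 0.2])   # distribution at time t
sigma_next = sigma_t @ P              # distribution recurrence (row vector)

lhs = sigma_next / pi                 # d(sigma_{t+1}) / d(pi)
rhs = P @ (sigma_t / pi)              # P applied to d(sigma_t)/d(pi), as in (7)
print(np.allclose(lhs, rhs))          # True
```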

### Spectral Decomposition

Now that we have a recurrence for the densities $d\sigma_t/d\pi$, we can understand how the densities change in time. Expand the initial density in eigenvectors $v_1 = \mathbf{1}, v_2, \ldots, v_n$ of $P$:

$$\frac{d\sigma_0}{d\pi} = \sum_{i=1}^n c_i\, v_i, \quad \text{where } c_i = \left\langle \frac{d\sigma_0}{d\pi},\, v_i \right\rangle_\pi.$$

As we verified above in (1), we have $\mathbb{E}_\pi[d\sigma_0/d\pi] = 1$. Thus, we conclude $c_1 = \langle d\sigma_0/d\pi, \mathbf{1}\rangle_\pi = 1$. Since the $v_i$ are eigenvectors of $P$ with eigenvalues $\lambda_i$, the recurrence (7) implies

$$\frac{d\sigma_t}{d\pi} = P^t\, \frac{d\sigma_0}{d\pi} = \mathbf{1} + \sum_{i=2}^n \lambda_i^t\, c_i\, v_i.$$

### Conclusion

Finally, we compute

$$\chi^2(\sigma_t \,\|\, \pi) = \operatorname{Var}_\pi\!\left[\frac{d\sigma_t}{d\pi}\right] = \sum_{i=2}^n \lambda_i^{2t}\, c_i^2 \le \lambda_\ast^{2t} \sum_{i=2}^n c_i^2 = \lambda_\ast^{2t}\, \chi^2(\sigma_0 \,\|\, \pi).$$

The theorem is proven.
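As a final numerical check of the theorem, the sketch below iterates the invented three-state chain from the earlier examples (for which $\lambda_\ast = 1/2$) and confirms the decay bound $\chi^2(\sigma_t \,\|\, \pi) \le \lambda_\ast^{2t}\, \chi^2(\sigma_0 \,\|\, \pi)$ at each step:

```python
import numpy as np

P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
pi = np.array([0.25, 0.50, 0.25])
lam_star = 0.5                      # largest eigenvalue magnitude below 1

def chi2(sigma):
    """chi^2(sigma || pi), via the explicit sum from (2)."""
    return np.sum((sigma - pi) ** 2 / pi)

sigma = np.array([1.0, 0.0, 0.0])   # start from a point mass
c0 = chi2(sigma)                    # chi^2(sigma_0 || pi) = 3.0, matching (4)
ok = True
for t in range(1, 9):
    sigma = sigma @ P
    ok = ok and chi2(sigma) <= lam_star ** (2 * t) * c0 + 1e-12
print(c0, ok)                       # 3.0 True
```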