Vandermonde Matrices are Merely Exponentially Ill-Conditioned

I am excited to share that my paper Does block size matter in randomized block Krylov low-rank approximation? has recently been released on arXiv. In that paper, we study the randomized block Krylov iteration (RBKI) algorithm for low-rank approximation. Existing results show that RBKI is efficient at producing rank-k approximations with a large block size of b = k or a small block size of b = 1, but these analyses give poor guarantees for intermediate block sizes 1\ll b \ll k. Yet these intermediate block sizes are often the most efficient in practice. In our paper, we close this theoretical gap, showing RBKI is efficient for any block size 1\le b \le k. Check out the paper for details!

In our paper, the core technical challenge is understanding the condition number of a random block Krylov matrix of the form

    \[K = \begin{bmatrix} G & AG & \cdots & A^{t-1}G \end{bmatrix},\]

where A is an n\times n real symmetric positive semidefinite matrix and G is an n\times (n/b) random Gaussian matrix. Proving this result was not easy, and our proof required several ingredients. In this post, I want to talk about just one of them: Gautschi’s bound on the conditioning of Vandermonde matrices. (Check out Gautschi’s original paper here.)

Evaluating a Polynomial and Vandermonde Matrices

Let us begin with one of the humblest but most important characters in mathematics, the univariate polynomial

    \[p_a(u) = a_1 + a_2 u+ \cdots + a_t u^{t-1}.\]

Here, we have set the degree to be t-1 and have written the polynomial p_a(\cdot) parametrically in terms of its vector of coefficients a = (a_1,\ldots,a_t). The polynomials of degree at most t-1 form a linear space of dimension t, and the monomials 1,u,\ldots,u^{t-1} provide a basis for this space. In this post, we will permit the coefficients a \in \complex^t to be complex numbers.

Given a polynomial, we often wish to evaluate it at a set of inputs. Specifically, let \lambda_1,\ldots,\lambda_s \in \complex be s (distinct) input locations. If we evaluate p_a at each location, we obtain a list of (output) values f = (p_a(\lambda_1),\ldots,p_a(\lambda_s)), each of which is given by the formula

    \[p_a(\lambda_i) = \sum_{j=1}^t \lambda_i^{j-1} a_j.\]

Observe that the outputs f are a nonlinear function of the locations \lambda but a linear function of the coefficients a. We may call the mapping a \mapsto f the coefficients-to-values map.

Every linear transformation between finite-dimensional vector spaces can be realized as a matrix–vector product, and the matrix for the coefficients-to-values map is called a Vandermonde matrix V. It is given by the formula

    \[V= \begin{bmatrix} 1 & \lambda_1 & \lambda_1^2 & \cdots & \lambda_1^{t-1} \\ 1 & \lambda_2 & \lambda_2^2 & \cdots & \lambda_2^{t-1} \\ 1 & \lambda_3 & \lambda_3^2 & \cdots & \lambda_3^{t-1} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & \lambda_s & \lambda_s^2 & \cdots & \lambda_s^{t-1} \end{bmatrix} \in \complex^{s\times t}.\]

The Vandermonde matrix defines the coefficients-to-values map in the sense that f = Va.
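
To make this concrete, here is a minimal MATLAB sketch (the locations and coefficients are arbitrary examples of mine) that builds V and checks that V*a reproduces the values of the polynomial:

% Build a Vandermonde matrix and check that V*a evaluates the polynomial p_a.
% (Illustrative sketch; locations and coefficients are arbitrary examples.)
t = 5;
lambda = linspace(0, 1, t)';           % example locations lambda_1, ..., lambda_t
a = randn(t, 1);                       % coefficients a_1, ..., a_t
V = lambda .^ (0:t-1);                 % V(i,j) = lambda_i^(j-1)
f = V * a;                             % values p_a(lambda_i)
f_check = polyval(flip(a), lambda);    % polyval expects descending-order coefficients
max(abs(f - f_check))                  % should be at machine-precision level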

Interpolating by a Polynomial and Inverting a Vandermonde Matrix

Going forward, let us set s = t so the number of locations \lambda = (\lambda_1,\ldots,\lambda_t) equals the number of coefficients a = (a_1,\ldots,a_t). The Vandermonde matrix maps the vector of coefficients a to the vector of values f = (f_1,\ldots,f_t) = (p_a(\lambda_1),\ldots,p_a(\lambda_t)). Its inverse V^{-1} maps a set of values f to a set of coefficients a defining a polynomial p_a which interpolates the values f:

    \[p_a(\lambda_i) = f_i \quad \text{for } i =1,\ldots,t .\]

More concisely, multiplying by the inverse of a Vandermonde matrix solves the problem of polynomial interpolation.

To solve the polynomial interpolation problem with Vandermonde matrices, we can do the following. Given values f, we first solve the linear system of equations Va = f, obtaining a vector of coefficients a = V^{-1}f. Then, define the interpolating polynomial

(1)   \[q(u) = a_1 + a_2 u + \cdots + a_t u^{t-1} \quad \text{with } a = V^{-1} f. \]

The polynomial q interpolates the values f at the locations \lambda_i: q(\lambda_i) = p_a(\lambda_i) = f_i.

Lagrange Interpolation

Inverting the Vandermonde matrix is one way to solve the polynomial interpolation problem, but the problem can also be solved directly. To do so, first notice that we can construct a special polynomial (u - \lambda_2)(u - \lambda_3)\cdots(u-\lambda_t) that is zero at the locations \lambda_2,\ldots,\lambda_t but nonzero at the first location \lambda_1. (Remember that we have assumed that \lambda_1,\ldots,\lambda_t are distinct.) Further, by rescaling this polynomial to

    \[\ell_1(u) = \frac{(u - \lambda_2)(u - \lambda_3)\cdots(u-\lambda_t)}{(\lambda_1 - \lambda_2)(\lambda_1 - \lambda_3)\cdots(\lambda_1-\lambda_t)},\]

we obtain a polynomial whose value at \lambda_1 is 1. Likewise, for each i, we can define a similar polynomial

(2)   \[\ell_i(u) = \frac{(u - \lambda_1)\cdots(u-\lambda_{i-1}) (u-\lambda_{i+1})\cdots(u-\lambda_t)}{(\lambda_i - \lambda_1)\cdots(\lambda_i-\lambda_{i-1}) (\lambda_i-\lambda_{i+1})\cdots(\lambda_i-\lambda_t)}, \]

which is 1 at \lambda_i and zero at \lambda_j for j\ne i. Using the Kronecker delta symbol, we may write

    \[\ell_i(\lambda_j) = \delta_{ij} = \begin{cases} 1, & i = j, \\ 0, & i\ne j. \end{cases}\]

The polynomials \ell_i are called the Lagrange polynomials of the locations \lambda_1,\ldots,\lambda_t. Below is an interactive illustration of the second Lagrange polynomial \ell_2 associated with the points \lambda_i = i (with t = 5).

With the Lagrange polynomials in hand, the polynomial interpolation problem is easy. To obtain a polynomial whose values are f, simply multiply each Lagrange polynomial \ell_i by the value f_i and sum up, obtaining

    \[q(u) = \sum_{i=1}^t f_i \ell_i(u).\]

The polynomial q interpolates the values f. Indeed,

(3)   \[q(\lambda_j) = \sum_{i=1}^t f_i \ell_i(\lambda_j) = \sum_{i=1}^t f_i \delta_{ij} = f_j. \]

The computed interpolating polynomial is shown in the interactive display below.
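
For readers who prefer to experiment in code, here is a minimal MATLAB sketch (with arbitrary example locations and values) that computes the interpolant both ways, by solving the Vandermonde system as in (1) and by the Lagrange formula (3), and confirms that the two agree:

% Compare the Vandermonde route (1) and the Lagrange route (3) to the
% interpolating polynomial. (Illustrative sketch.)
t = 5;
lambda = (1:t)';                        % locations lambda_i = i
f = randn(t, 1);                        % values to interpolate
V = lambda .^ (0:t-1);
a = V \ f;                              % coefficients from the Vandermonde system
u = linspace(0, t+1, 200)';             % grid on which to compare the interpolants
q_vander = (u .^ (0:t-1)) * a;          % monomial-basis interpolant (1)
q_lagrange = zeros(size(u));
for i = 1:t
    others = lambda([1:i-1, i+1:t]);
    ell_i = prod(u - others', 2) / prod(lambda(i) - others);  % Lagrange polynomial (2)
    q_lagrange = q_lagrange + f(i) * ell_i;                   % Lagrange formula (3)
end
max(abs(q_vander - q_lagrange))         % should be small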

From Lagrange to Vandermonde via Elementary Symmetric Polynomials

We now have two ways of solving the polynomial interpolation problem, the Vandermonde way (1) and the Lagrange way (3). Ultimately, the difference between these formulas is one of basis: The Vandermonde formula (1) expresses the interpolating polynomial q as a linear combination of the monomials 1,u,\ldots,u^{t-1}, and the Lagrange formula (3) expresses q as a linear combination of the Lagrange polynomials \ell_1,\ldots,\ell_t. To convert between these formulas, we just need to express the Lagrange polynomial basis in the monomial basis.

To do so, let us examine the Lagrange polynomials more closely. Consider first the case t = 4 and the fourth unnormalized Lagrange polynomial

    \[(u - \lambda_1) (u - \lambda_2) (u - \lambda_3).\]

Expanding this polynomial in the monomials u^i, we obtain the expression

    \[(u - \lambda_1) (u - \lambda_2) (u - \lambda_3) = u^3 - (\lambda_1 + \lambda_2 + \lambda_3) u^2 + (\lambda_1\lambda_2 + \lambda_1\lambda_3 + \lambda_2\lambda_3)u - \lambda_1\lambda_2\lambda_3.\]

Looking at the coefficients of this polynomial in u, we recognize some pretty distinctive expressions involving the \lambda_i's:

    \begin{align*}\text{coefficient of $u^2$} &= - (\lambda_1 + \lambda_2 + \lambda_3), \\\text{coefficient of $u$} &= \lambda_1\lambda_2 + \lambda_1\lambda_3 + \lambda_2\lambda_3, \\\text{coefficient of $1$} &=  -\lambda_1\lambda_2\lambda_3.\end{align*}

Indeed, these expressions are special. Up to a plus-or-minus sign, they are called the elementary symmetric polynomials of the locations \lambda_i. Specifically, given numbers \mu_1,\ldots,\mu_s, the kth elementary symmetric polynomial e_k(\mu_1,\ldots,\mu_s) is defined as the sum of all products \mu_{i_1} \mu_{i_2} \cdots \mu_{i_k} of k of the values (taken with distinct indices), i.e.,

    \[e_k(\mu_1,\ldots,\mu_s) = \sum_{1 \le i_1 < i_2 < \cdots < i_k \le s} \mu_{i_1}\mu_{i_2}\cdots \mu_{i_k}.\]

The zeroth elementary symmetric polynomial is 1 by convention.

The elementary symmetric polynomials appear all the time in mathematics. In particular, up to sign, they give the coefficients of the characteristic polynomial of a matrix (as functions of its eigenvalues), and they feature heavily in the theory of determinantal point processes. For our purposes, the key observation will be that the elementary symmetric polynomials appear whenever one expands out an expression like (u + \mu_1) \cdots (u + \mu_k):

Lemma 1 (Expanding a product of linear functions). It holds that

    \[(u + \mu_1) (u + \mu_2) \cdots (u+\mu_k) = \sum_{j=0}^k e_{k-j}(\mu_1,\ldots,\mu_k) u^j.\]
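
Lemma 1 is easy to check numerically. In MATLAB, poly(-mu) returns the coefficients of \prod_i (u+\mu_i) in descending order, which by the lemma should be 1, e_1(\mu),\ldots,e_k(\mu). Here is a quick illustrative check with k = 3 and made-up values:

% Check Lemma 1: the coefficients of (u + mu_1)(u + mu_2)(u + mu_3) are the
% elementary symmetric polynomials of mu. (Illustrative example values.)
mu = [2 3 5];
c = poly(-mu);                           % coefficients of prod(u + mu_i), descending in u
e1 = sum(mu);                            % e_1(mu)
e2 = mu(1)*mu(2) + mu(1)*mu(3) + mu(2)*mu(3);   % e_2(mu)
e3 = prod(mu);                           % e_3(mu)
disp([c; 1 e1 e2 e3])                    % the two rows should match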

Using this fact, we obtain an expression for the Lagrange polynomials in the monomial basis. Let \lambda_{-i} = (\lambda_1,\ldots, \lambda_{i-1},\lambda_{i+1},\ldots,\lambda_t) denote the list of locations without \lambda_i. Then the ith Lagrange polynomial is given by

    \[\ell_i(u) = \frac{\sum_{j=1}^t e_{t-j}(-\lambda_{-i})u^{j-1}}{\prod_{k\ne i} (\lambda_i - \lambda_k)}.\]

Not exactly a beautiful expression, but it will get the job done.

Indeed, we can write the interpolating polynomial as

    \[q(u) = \sum_{i=1}^t f_i\ell_i(u) = \sum_{i=1}^t \sum_{j=1}^t f_i \frac{e_{t-j}(-\lambda_{-i})}{\prod_{k\ne i} (\lambda_i - \lambda_k)} u^{j-1}.\]

To make progress, we interchange the order of summation and regroup:

    \[q(u) = \sum_{j=1}^t \sum_{i=1}^t f_i \frac{e_{t-j}(-\lambda_{-i})}{\prod_{k\ne i} (\lambda_i - \lambda_k)} u^{j-1} = \sum_{j=1}^t \left(\sum_{i=1}^t \frac{e_{t-j}(-\lambda_{-i})}{\prod_{k\ne i} (\lambda_i - \lambda_k)}\cdot f_i \right) u^{j-1}.\]

We see that the coefficients of the interpolating polynomial (in the monomial basis) are

    \[a_j = \sum_{i=1}^t \frac{e_{t-j}(-\lambda_{-i})}{\prod_{k\ne i} (\lambda_i - \lambda_k)}\cdot f_i.\]

But we also know that the coefficients are given by a = V^{-1}f. Therefore, we conclude that the entries of the inverse-Vandermonde matrix V^{-1} are

(4)   \[(V^{-1})_{ji} = \frac{e_{t-j}(-\lambda_{-i})}{\prod_{k\ne i} (\lambda_i - \lambda_k)}. \]
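
Formula (4) can be checked numerically. The following MATLAB sketch (with random, almost surely distinct locations) computes the elementary symmetric polynomials e_{t-j}(-\lambda_{-i}) via poly and compares the result against inv(V):

% Check formula (4) for the entries of the inverse Vandermonde matrix.
% (Illustrative sketch; the random locations are distinct with probability one.)
t = 5;
lambda = randn(t, 1);
V = lambda .^ (0:t-1);
Vinv = zeros(t);
for i = 1:t
    others = lambda([1:i-1, i+1:t]);
    c = poly(others);                    % coefficients of prod_{k~=i} (u - lambda_k)
    denom = prod(lambda(i) - others);    % prod_{k~=i} (lambda_i - lambda_k)
    for j = 1:t
        Vinv(j, i) = c(t-j+1) / denom;   % e_{t-j}(-lambda_{-i}) / denominator
    end
end
max(max(abs(Vinv - inv(V))))             % should be small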

Vandermonde Matrices are Merely Exponentially Ill-Conditioned

Vandermonde matrices are notoriously ill-conditioned, meaning that small changes to the values f can cause large changes to the coefficients a. On its face, it might seem that the problem of polynomial interpolation itself is ill-conditioned, but this is too hasty a conclusion. After all, it is only the mapping from values f to coefficients a in the monomial basis that is ill-conditioned. Fortunately, there are much better, more numerically stable bases for representing a polynomial, such as the Chebyshev polynomials.

But these more stable methods of polynomial interpolation and approximation are not the subject of this post: Here, our task will be to characterize just how ill-conditioned the computation of a = V^{-1}f is. To characterize this ill-conditioning, we will use the condition number of the matrix V. Given a norm \uinorm{\cdot}, the condition number of V is defined to be

    \[\kappa_{\uinorm{\cdot}}(V) = \uinorm{V} \uinorm{\smash{V^{-1}}}.\]

For this post, we will focus on the case where \uinorm{\cdot} is chosen to be the (operator) 1-norm, defined as

    \[\norm{A}_1= \max_{x \ne 0} \frac{\norm{Ax}_1}{\norm{x}_1} \quad \text{where } \norm{x}_1 = \sum_i |x_i|.\]

The 1-norm has a simple characterization: It is the maximum sum of the absolute values of the entries in any column,

(5)   \[\norm{A}_1 = \max_j \sum_i |A_{ij}|. \]

We shall denote the 1-norm condition number \kappa_1(\cdot).

Bounding \norm{V}_1 is straightforward. Indeed, setting M = \max_{1\le i \le t} |\lambda_i| and using (5), we compute

    \[\norm{V}_1= \max_{1\le j \le t} \sum_{i=1}^t |\lambda_i|^{j-1} \le \max_{1\le j \le t} tM^{j-1} = t\max\{1,M^{t-1}\}.\]

An even weaker, but still useful, bound is

    \[\norm{V}_1\le t(1+M)^{t-1}.\]

The harder task is bounding \norm{\smash{V^{-1}}}_1. Fortunately, we have already done most of the hard work needed to bound this quantity. Using our expression (4) for the entries of V^{-1} and using the formula (5) for the 1-norm, we have

    \[\norm{\smash{V^{-1}}}_1= \max_{1\le j \le t} \sum_{i=1}^t |(V^{-1})_{ij}| = \max_{1\le j \le t}\frac{ \sum_{i=1}^t |e_{t-i}(-\lambda_{-j})|}{\prod_{k\ne j} |\lambda_j- \lambda_k|}.\]

To bound this expression, we make use of the following “triangle inequality” for elementary symmetric polynomials

    \[|e_k(\mu_1,\ldots,\mu_s)| \le e_k(|\mu_1|,\ldots,|\mu_s|).\]

Using this bound and defining |\lambda_{-j}| = (|\lambda_1|,\ldots,|\lambda_{j-1}|,|\lambda_{j+1}|,\ldots,|\lambda_t|), we obtain

    \[\norm{\smash{V^{-1}}}_1\le \max_{1\le j \le t}\frac{ \sum_{i=1}^t e_{t-i}(|\lambda_{-j}|)}{\prod_{k\ne j} |\lambda_j- \lambda_k|}.\]

We now use Lemma 1 with u = 1 to obtain the expression

(6)   \[\norm{\smash{V^{-1}}}_1\le \max_{1\le j \le t}\frac{ \prod_{k\ne j} (1 + |\lambda_k|)}{\prod_{k\ne j} |\lambda_j- \lambda_k|}. \]

Equation (6) is Gautschi’s bound on the norm of the inverse of a Vandermonde matrix, the result we were working towards proving.

Often, it is helpful to simplify Gautschi’s bound a bit. Setting M = \max_{1\le i \le t} |\lambda_i| as above, the numerator is bounded by (1+M)^{t-1}. To bound the denominator, let \mathrm{gap} \coloneqq \min_{k < j} |\lambda_j - \lambda_k| be the smallest distance between two locations. Using M and \mathrm{gap}, we can weaken the bound (6) to obtain

    \[\norm{\smash{V^{-1}}}_1\le \left(\frac{1+M}{\mathrm{gap}}\right)^{t-1}.\]

Combining this with our bound on \norm{V}_1 from above, we obtain a bound on the condition number

    \[\kappa_1(V) \le t \left( \frac{(1+M)^2}{\mathrm{gap}} \right)^{t-1}.\]

We record these results:

Theorem 2 (Gautschi’s bound, simplified). Introduce M = \max_{1\le i \le t} |\lambda_i| and \mathrm{gap} \coloneqq \min_{k < j} |\lambda_j - \lambda_k|. Then

    \[\norm{\smash{V^{-1}}}_1\le \left(\frac{1+M}{\mathrm{gap}}\right)^{t-1}\]

and

    \[\kappa_1(V) \le t \left( \frac{(1+M)^2}{\mathrm{gap}} \right)^{t-1}.\]

Gautschi’s bound suggests that Vandermonde matrices can be very ill-conditioned, which is disappointing. But Gautschi’s bound also shows that Vandermonde matrices are merely exponentially ill-conditioned—that is, they are no worse than exponentially ill-conditioned. The fact that Vandermonde matrices are only exponentially ill-conditioned plays a crucial role in our analysis of randomized block Krylov iteration.
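
To get a feel for these bounds, here is an illustrative MATLAB comparison of the actual 1-norm condition number of V with the simplified bound of Theorem 2, using equispaced locations on [0,1] (any other set of distinct locations would work just as well):

% Compare kappa_1(V) with the simplified Gautschi bound of Theorem 2 for
% equispaced locations in [0,1]. (Illustrative sketch.)
for t = 4:4:12
    lambda = linspace(0, 1, t)';
    V = lambda .^ (0:t-1);
    kappa1 = norm(V, 1) * norm(inv(V), 1);         % actual 1-norm condition number
    M = max(abs(lambda));
    gap = min(diff(sort(lambda)));                 % smallest distance between locations
    bound = t * ((1 + M)^2 / gap)^(t-1);           % Theorem 2
    fprintf('t = %2d: kappa_1 = %.2e, bound = %.2e\n', t, kappa1, bound)
end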

Gautschi’s Bound as a Robust Version of the Fundamental Theorem of Algebra

The fundamental theorem of algebra states that a nonzero polynomial of degree t-1 has precisely t-1 roots, counted with multiplicity. Consequently, evaluated at t distinct locations, such a polynomial must be nonzero at at least one of them. But how nonzero must the polynomial be at that one location? How large must it be? On this question, the fundamental theorem of algebra is silent. However, Gautschi’s bound provides an answer.

To answer this question, we ask: What is the minimum possible size \norm{f}_1 of the values f = Va = (p_a(\lambda_1),\ldots,p_a(\lambda_t))? Well, if we set all the coefficients a = (0,\ldots,0) to zero, then f = 0 as well. So to avoid this trivial case, we should enforce a normalization condition on the coefficient vector a, say \norm{a}_1 = 1. With this setting, we are ready to compute. Begin by observing that

    \[\min_{\norm{a}_1 = 1} \norm{Va}_1 = \min_{a\ne 0} \frac{\norm{Va}_1}{\norm{a}_1}.\]

Now, we make the change of variables f = Va (equivalently, a = V^{-1}f), obtaining

    \[\min_{\norm{a}_1 = 1} \norm{Va}_1 = \min_{a\ne 0} \frac{\norm{Va}_1}{\norm{a}_1} = \min_{f\ne 0} \frac{\norm{f}_1}{\norm{\smash{V^{-1}f}}_1}.\]

Now, take the reciprocal of both sides to obtain

    \[\left(\min_{\norm{a}_1 = 1} \norm{Va}_1\right)^{-1} = \max_{f\ne 0} \frac{\norm{\smash{V^{-1}f}}_1}{\norm{f}_1} = \norm{\smash{V^{-1}}}_1.\]

Ergo, we conclude

    \[\min_{\norm{a}_1 = 1} \norm{Va}_1 = \norm{\smash{V^{-1}}}_1^{-1}.\]

Indeed, this relation holds for all operator norms:

Proposition 3 (Minimum stretch). For a vector norm \uinorm{\cdot} and its induced operator norm, it holds for any square invertible matrix A that

    \[\min_{\uinorm{v} = 1} \uinorm{Av} = \min_{v\ne 0} \frac{\uinorm{Av}}{\uinorm{v}} = \uinorm{\smash{A^{-1}}}^{-1}.\]

Using this result, we obtain the lower bound \norm{f}_1\ge \norm{a}_1/\norm{\smash{V^{-1}}}_1 on the values f of a polynomial with coefficients a. Combining with Gautschi’s bound gives the following robust version of the fundamental theorem of algebra:

Theorem 4 (Robust fundamental theorem of algebra). Fix a polynomial p(u) = a_1 + a_2 u + \cdots + a_t u^{t-1} and distinct locations \lambda_1,\ldots,\lambda_t. Define M = \max_{1\le i \le t} |\lambda_i| and \mathrm{gap} \coloneqq \min_{k < j} |\lambda_j - \lambda_k|. Then

    \[|p(\lambda_1)| + \cdots + |p(\lambda_t)| \ge \left(\frac{\mathrm{gap}}{1+M}\right)^{t-1} (|a_1| + \cdots + |a_t|).\]

Thus, at t distinct locations, a degree-(t-1) polynomial must be nonzero at at least one point. In fact, the sum of the absolute values of the polynomial at these t locations is at worst exponentially small in t, relative to the size of its coefficients.

Gaussian Integration by Parts

Gaussian random variables are wonderful, and there are lots of clever tricks for doing computations with them. One particularly nice tool is the Gaussian integration by parts formula, which I learned from my PhD advisor Joel Tropp. Here it is:

Gaussian integration by parts. Let z be a standard Gaussian random variable. Then \expect[zf(z)] = \expect[f'(z)].

This formula makes many basic computations effortless. For instance, to compute the second moment \expect[z^2] of a standard Gaussian random variable z, we apply the formula with f(x) = x to obtain

    \[\expect[z^2] = \expect[z f(z)] = \expect[f'(z)] = \expect[1] = 1.\]

Therefore, the second moment is one. Since z has mean zero, this also means that the variance of a standard Gaussian random variable is one.

The fourth moment is no harder to compute. Using f(x) = x^3, we compute

    \[\expect[z^4] = \expect[zf(z)] = \expect[f'(z)] = 3\expect[z^2] = 3.\]

Easy peasy. We’ve shown the fourth moment of a standard Gaussian variable is three—no fancy integral tricks required.

Iterating this trick, we can compute all the even moments of a standard Gaussian random variable. Indeed,

    \[\expect[z^{2p}] = \expect[z\cdot z^{2p-1}] = (2p-1) \expect[z^{2p-2}] = (2p-1)(2p-3) \expect[z^{2p-4}] = \cdots = (2p-1)(2p-3)\cdots 1.\]

We conclude that the (2p)th moment of a standard Gaussian random variable is (2p-1)!!, where !! indicates the (in)famous double factorial (2p-1)!! = (2p-1)\times(2p-3)\times\cdots \times 3 \times 1.

As a spicier application, let us now compute \expect[|z|]. To do so, we choose f(x) = \operatorname{sign}(x) to be the sign function:

    \[\operatorname{sign}(x) = \begin{cases} 1, & x > 0, \\ 0 & x = 0, \\ -1 & x < 0.\end{cases}\]

This function is not differentiable in a “Calculus 1” sense because it is discontinuous at zero. However, it is differentiable in a “distributional sense” and its derivative is f'(x) = 2\delta(x), where \delta denotes the famous Dirac delta “function”. We then compute

    \[\expect[|z|] = \expect[zf(z)] = \expect[f'(z)] = 2\expect[\delta(z)].\]

To compute \expect[\delta(z)], write the integral out using the probability density function \phi of the standard Gaussian distribution:

    \[\expect[|z|] = 2\expect[\delta(z)] = 2\int_{-\infty}^\infty \phi(x) \delta(x) \, \mathrm{d} x = 2\phi(0).\]

The standard Gaussian distribution has density

    \[\phi(x) = \frac{1}{\sqrt{2\pi}} \exp \left( - \frac{x^2}{2} \right),\]

so we conclude

    \[\expect[|z|] = 2 \cdot \frac{1}{\sqrt{2 \pi}} = \sqrt{\frac{2}{\pi}}.\]

An unpleasant computation involving integrating against the Gaussian density has again been made trivial by using the Gaussian integration by parts formula.
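
If you would like to sanity-check these computations, a quick Monte Carlo experiment (an illustrative sketch) confirms the moments derived above:

% Monte Carlo check of E[z^2] = 1, E[z^4] = 3, and E[|z|] = sqrt(2/pi).
z = randn(1e6, 1);
fprintf('E[z^2] ~ %.4f (exact 1)\n', mean(z.^2))
fprintf('E[z^4] ~ %.4f (exact 3)\n', mean(z.^4))
fprintf('E[|z|] ~ %.4f (exact %.4f)\n', mean(abs(z)), sqrt(2/pi))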

Application: Power Method from a Random Start

As an application of the Gaussian integration by parts formula, we can analyze the famous power method for eigenvalue computations with a (Gaussian) random initialization. This discussion is adapted from the tutorial of Kireeva and Tropp (2024).

Setup

Before we can get to the cool application of the Gaussian integration by parts formula, we need to set up the problem and do a bit of algebra. Let A be an n\times n matrix, which we’ll assume for simplicity to be symmetric and positive semidefinite. Let \lambda_1 > \lambda_2 \ge \lambda_3 \ge \cdots \ge \lambda_n \ge 0 denote the eigenvalues of A. We assume the largest eigenvalue \lambda_1 is strictly larger than the next eigenvalue \lambda_2.

The power method computes the largest eigenvalue of A by repeating the iteration x \gets Ax / \norm{Ax}. After many iterations, x approaches a dominant eigenvector of A and x^\top A x approaches the largest eigenvalue \lambda_1. Letting x^{(0)} denote the initial vector, the tth power iterate is x^{(t)} = A^t x^{(0)} / \norm{A^t x^{(0)}} and the tth eigenvalue estimate is

    \[\mu^{(t)} = \left(x^{(t)}\right)^\top Ax^{(t)}= \frac{\left(x^{(0)}\right)^\top A^{2t+1} x^{(0)}}{\left(x^{(0)}\right)^\top A^{2t} x^{(0)}}.\]

It is common to initialize the power method with a vector x^{(0)} with (independent) standard Gaussian random coordinates. In this case, the components z_1,\ldots,z_n of x^{(0)} in an eigenvector basis of A are also independent standard Gaussians, owing to the rotational invariance of the (standard multivariate) Gaussian distribution. Then the tth eigenvalue estimate is

    \[\mu^{(t)} = \frac{\sum_{i=1}^n \lambda_i^{2t+1} z_i^2}{\sum_{i=1}^n \lambda_i^{2t} z_i^2},\]

and the error of \mu^{(t)} as an approximation to the dominant eigenvalue \lambda_1 is

    \[\frac{\lambda_1 - \mu^{(t)}}{\lambda_1} = \frac{\sum_{i=1}^n (\lambda_1 - \lambda_i)/\lambda_1 \cdot \lambda_i^{2t} z_i^2}{\sum_{i=1}^n \lambda_i^{2t} z_i^2}.\]

Analysis

Having set everything up, we can now use the Gaussian integration by parts formula to make quick work of the analysis. To begin, observe that the quantity

    \[\frac{\lambda_1 - \lambda_i}{\lambda_1}\]

is zero for i = 1 and is at most one for i > 1. Therefore, the error is bounded as

    \[\frac{\lambda_1 - \mu^{(t)}}{\lambda_1} \le \frac{\sum_{i=2}^n \lambda_i^{2t} z_i^2}{\lambda_1^{2t} z_1^2 + \sum_{i=2}^n \lambda_i^{2t} z_i^2} = \frac{c^2}{\lambda_1^{2t} z_1^2 + c^2} \quad \text{for } c^2 = \sum_{i=2}^n \lambda_i^{2t} z_i^2.\]

We have introduced a parameter c to consolidate the terms in this expression not depending on z_1. Since z_1,\ldots,z_n are independent, z_1 and c are independent as well.

Now, let us bound the expected value of the error. First, we take an expectation \expect_{z_1} with respect to only the randomness in the first Gaussian variable z_1. Here, we use Gaussian integration by parts in the reverse direction. Introduce the function

    \[f'(x) = \frac{c^2}{\lambda_1^{2t}x^2 + c^2}.\]

This function is the derivative of

    \[f(x) = \frac{c}{\lambda_1^t} \arctan \left( \frac{\lambda_1^t}{c} \cdot x \right).\]

Thus, by the Gaussian integration by parts formula, we have

    \[\expect_{z_1}\left[\frac{\lambda_1 - \mu^{(t)}}{\lambda_1}\right] \le \expect_{z_1} \left[z_1 \cdot \frac{c}{\lambda_1^t} \arctan \left( \frac{\lambda_1^t}{c} \cdot z_1 \right) \right] = \expect_{z_1} \left[|z_1| \cdot \frac{c}{\lambda_1^t} \left|\arctan \left( \frac{\lambda_1^t}{c} \cdot z_1 \right)\right| \right].\]

In the last equality, we observed that the bracketed quantity is nonnegative, so we are free to introduce absolute value signs. The arctangent function is always at most \pi/2 in absolute value, so we can bound

    \[\expect_{z_1}\left[\frac{\lambda_1 - \mu^{(t)}}{\lambda_1}\right] \le \frac{\pi}{2} \cdot \frac{c}{\lambda_1^t} \cdot \expect_{z_1} \left[|z_1|\right] \le \sqrt{\frac{\pi}{2}} \cdot \frac{c}{\lambda_1^t}.\]

Here, we used our result from above that \expect[|z|] = \sqrt{2/\pi} for a standard Gaussian variable z.

We’re in the home stretch! We can bound c as

    \[c = \sqrt{\sum_{i=2}^n \lambda_i^{2t} z_i^2} \le \lambda_2^{t} \sqrt{\sum_{i=2}^n z_i^2} = \lambda_2^t \norm{(z_2,\ldots,z_n)}.\]

We see that c is at most \lambda_2^t times the length of a vector of n-1 standard Gaussian entries. As we’ve seen before on this blog, the expected length of a Gaussian vector with n-1 entries is at most \sqrt{n-1}. Thus, \expect[c] \le \lambda_2^t \sqrt{n-1}. We conclude that the expected error for power iteration is

    \[\expect\left[\frac{\lambda_1 - \mu^{(t)}}{\lambda_1}\right] = \expect_c\left[\expect_{z_1} \left[ \frac{\lambda_1 - \mu^{(t)}}{\lambda_1}  \right]\right] \le \sqrt{\frac{\pi}{2}} \cdot \expect\left[ \frac{c}{\lambda_1^t} \right] \le \sqrt{\frac{\pi}{2}} \cdot \left(\frac{\lambda_2}{\lambda_1}\right)^t \cdot \sqrt{n-1} .\]

We see that the expected error of power iteration converges geometrically to zero, decreasing at least as fast as (\lambda_2/\lambda_1)^t.
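
Here is a minimal MATLAB sketch (the test matrix and iteration count are arbitrary choices of mine) that runs power iteration from a Gaussian random start and compares the error with the bound \sqrt{\pi/2}\cdot\sqrt{n-1}\cdot(\lambda_2/\lambda_1)^t. Keep in mind that the bound controls the expected error, so a single run may occasionally poke above it:

% Power method from a Gaussian random start versus the expected-error bound.
% (Illustrative sketch with a synthetic symmetric psd test matrix.)
n = 100; t_max = 30;
lambda = sort(rand(n, 1), 'descend');
lambda(1) = 1.5 * lambda(2);                      % enforce a spectral gap
Q = orth(randn(n));
A = Q * diag(lambda) * Q';                        % symmetric psd test matrix
x = randn(n, 1);                                  % Gaussian random initialization
err = zeros(t_max, 1);
for t = 1:t_max
    x = A * x; x = x / norm(x);                   % power iteration
    mu = x' * A * x;                              % eigenvalue estimate mu^{(t)}
    err(t) = (lambda(1) - mu) / lambda(1);        % relative error
end
bound = sqrt(pi/2) * sqrt(n-1) * (lambda(2)/lambda(1)).^(1:t_max)';
semilogy(1:t_max, err, 1:t_max, bound)
legend('error', 'bound'), xlabel('iteration t')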

The first analyses of power iteration from a random start were done by Kuczyński and Woźniakowski (1992) and require pages of detailed computations involving integrals. This simplified argument, due to Tropp (2020), makes the analysis effortless by comparison.

Randomized Kaczmarz: How Should You Sample?

The randomized Kaczmarz method is an iterative algorithm for solving systems of linear equations:

    \[\text{Find $x$ such that } Ax = b. \]

Throughout this post, the matrix A will have dimensions n\times d. Beginning from an initial iterate x_0 = 0, randomized Kaczmarz works as follows. For t = 0,1,2,\ldots:

  • Sample a random row index i_t with probability \prob \{ i_t = j \} = p_j.
  • Update to enforce that the equation a_{i_t}^\top x = b_{i_t} holds exactly:

        \[x_{t+1} \coloneqq x_t + \frac{b_{i_t} - a_{i_t}^\top x_t}{\norm{a_{i_t}}^2} a_{i_t}.\]

    Throughout this post, a_j^\top denotes the jth row of A. A minimal code sketch of this iteration is given below.
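
Here is that sketch (illustrative only, not the post's own code; the function name is mine). The selection probabilities p are passed in as a vector, so the same function covers both of the sampling rules discussed next:

% Randomized Kaczmarz with user-supplied selection probabilities p.
% (Illustrative sketch.)
function x = randomized_kaczmarz(A, b, p, niter)
[~, d] = size(A);
x = zeros(d, 1);
cum = cumsum(p(:)); cum = cum / cum(end);       % CDF for sampling row indices
for t = 1:niter
    i = find(cum >= rand(), 1);                 % sample row i with probability p_i
    ai = A(i, :)';
    x = x + (b(i) - ai'*x) / (ai'*ai) * ai;     % enforce a_i'*x = b_i exactly
end
end

The two sampling rules discussed next correspond to p_j \propto \norm{a_j}^2 and p_j = 1/n, respectively.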

What selection probabilities p_j should we use? The answer to this question may depend on whether the system Ax = b is consistent, i.e., whether it possesses a solution x. For this post, we assume the system is consistent; see this previous post for a discussion of the inconsistent case.

The classical selection probabilities for randomized Kaczmarz were proposed by Strohmer and Vershynin in their seminal paper:

(1)   \[p_j = \frac{\norm{a_j}^2}{\norm{A}_{\rm F}^2} \quad \text{for } j = 1,2,\ldots,n. \]

Computing these selection probabilities requires a full pass over the matrix, which can be expensive for the largest problems. (This initial pass can sometimes be avoided by using rejection sampling, though.) A computationally appealing alternative is to implement randomized Kaczmarz with uniform selection probabilities

(2)   \[p_j = \frac{1}{n} \quad \text{for } j = 1,2,\ldots,n. \]

Ignoring computational cost, which sampling rule leads to faster convergence: (1) or (2)?

Surprisingly, to me at least, the simpler strategy (2) often works better than (1). Here is a simple example. Define a matrix A \in \real^{20\times 20} with entries A_{ij} = \min(i,j)^2, and choose the right-hand side b\in\real^{20} with standard Gaussian random entries. The convergence of standard RK with sampling rule (1) and uniform RK with sampling rule (2) is shown in the plot below. After a million iterations, the difference in final accuracy is dramatic: the final relative error was 0.00012 for uniform RK and 0.67 for standard RK!

Error for randomized Kaczmarz with both squared row norm sampling ("standard") and uniformly random rows on matrix with entries min(i,j)^2. Uniform randomized Kaczmarz achieves significantly smaller final error
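
For those who want to try this at home, here is a self-contained MATLAB sketch in the spirit of the experiment just described (the iteration count is mine, and the exact numbers you get will differ from run to run):

% Compare squared-row-norm sampling (1) and uniform sampling (2) on the
% matrix A_ij = min(i,j)^2. (Illustrative sketch of the experiment above.)
n = 20;
[I, J] = meshgrid(1:n);
A = min(I, J).^2;
b = randn(n, 1);
xstar = A \ b;                                  % reference solution
p_std  = sum(A.^2, 2) / norm(A, 'fro')^2;       % sampling rule (1)
p_unif = ones(n, 1) / n;                        % sampling rule (2)
for probs = {p_std, p_unif}
    p = probs{1};
    cum = cumsum(p); cum = cum / cum(end);
    x = zeros(n, 1);
    for t = 1:1e6
        i = find(cum >= rand(), 1);
        ai = A(i, :)';
        x = x + (b(i) - ai'*x) / (ai'*ai) * ai;
    end
    fprintf('final relative error: %.2e\n', norm(x - xstar) / norm(xstar))
end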

In fairness, uniform RK does not always outperform standard RK. If we change the matrix entries to A_{ij} = \min(i,j), then the performance of both methods is similar, with both methods ending with a relative error of about 0.07.

Error for randomized Kaczmarz with both squared row norm sampling ("standard") and uniformly random rows on matrix with entries min(i,j). Uniform randomized Kaczmarz and standard randomized Kaczmarz achieve comparable final errors

Another experiment, presented in section 4.1 of Strohmer and Vershynin’s original paper, provides an example where standard RK converges a bit more than twice as fast as uniform RK (called “simple RK” in their paper). Still, taken all together, these experiments demonstrate that standard RK (sampling probabilities (1)) is often not dramatically better than uniform RK (sampling probabilities (2)), and uniform RK can be much better than standard RK.

Randomized Kaczmarz Error Bounds

Why does uniform RK often outperform standard RK? To answer this question, let’s look at the error bounds for the RK method.

The classical analysis of standard RK shows the method is geometrically convergent

(3)   \[\expect\left[ \norm{x_t - x_\star}^2 \right] \le (1 - \kappa_{\rm dem}(A)^{-2})^t \norm{x_\star}^2. \]

Here,

(4)   \[\kappa_{\rm dem}(A) = \frac{\norm{A}_{\rm F}}{\sigma_{\rm min}(A)} = \sqrt{\sum_i \left(\frac{\sigma_i(A)}{\sigma_{\rm min}(A)}\right)^2} \]

is known as the Demmel condition number and \sigma_i(A) are the singular values of A. Recall that we have assumed the system Ax = b is consistent, possessing a solution x_\star. If there are multiple solutions, we let x_\star denote the solution of minimum norm.

What about uniform RK? Let D_A = \diag ( \norm{a_i}^{-1} : i=1,\ldots,n ) denote the diagonal matrix containing the inverse row norms, and introduce the row-equilibrated matrix D_A A, which is obtained from A by rescaling each of its rows to have unit norm.

Uniform RK can then be related to standard RK run on the row-equilibrated matrix:

Fact (uniform sampling and row equilibration): Uniform RK on the system Ax = b produces the same sequence of (random) iterates \hat{x}_t as standard RK applied to the row-equilibrated system (D_A A)x = D_A b.

Therefore, by (3), the iterates \hat{x}_t of uniform RK satisfy the bound

(5)   \[\expect\left[ \norm{\hat{x}_t - x_\star}^2 \right] \le (1 - \kappa_{\rm dem}(D_A A)^{-2})^t \norm{x_\star}^2.\]

Thus, at least using the error bounds (3) and (5), whether standard or uniform RK is better depends on which matrix has a smaller Demmel condition number: A or D_A A.
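
As an illustrative check, the following MATLAB snippet computes both Demmel condition numbers for the badly row-scaled matrix A_{ij} = \min(i,j)^2 from the experiment above:

% Demmel condition numbers of A and of the row-equilibrated matrix D_A*A
% for A_ij = min(i,j)^2. (Illustrative sketch.)
n = 20;
[I, J] = meshgrid(1:n);
A = min(I, J).^2;
kdem = @(B) norm(B, 'fro') / min(svd(B));       % Demmel condition number (4)
DA = diag(1 ./ sqrt(sum(A.^2, 2)));             % diagonal matrix of inverse row norms
fprintf('kappa_dem(A)     = %.2e\n', kdem(A))
fprintf('kappa_dem(D_A*A) = %.2e\n', kdem(DA * A))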

Row Equilibration and the Condition Number

Does row equilibration increase or decrease the condition number of a matrix? What is the optimal way of scaling the rows of a matrix to minimize its condition number? These are classical questions in numerical linear algebra, and they were addressed in a classical 1969 paper of van der Sluis. These results were then summarized and generalized in Higham’s delightful monograph Accuracy and Stability of Numerical Algorithms. Here, we present answers to these questions using a variant of van der Sluis’ argument.

First, let’s introduce some more concepts and notation. Define the spectral norm condition number

    \[\kappa(A) \coloneqq \frac{\sigma_{\rm max}(A)}{\sigma_{\rm min}(A)}\]

The spectral norm and Demmel condition numbers are always comparable: \kappa(A) \le \kappa_{\rm dem}(A)\le \sqrt{\min(n,d)}\cdot \kappa(A). Also, let \mathrm{Diag} denote the set of all (nonsingular) diagonal matrices.

Our first result shows us that row equilibration never hurts the Demmel condition number by much. In fact, the row-equilibrated matrix produces a nearly optimal Demmel condition number when compared to any row scaling:

Theorem 1 (Row equilibration is a nearly optimal row scaling). Let A\in\real^{n\times d} be wide (n\le d) and full-rank, and let D_AA denote the row scaling of A with unit row norms. Then

    \[\kappa_{\rm dem}(D_AA) \le \sqrt{n}\cdot \min_{D \in \mathrm{Diag}} \kappa (DA) \le \sqrt{n}\cdot \min_{D \in \mathrm{Diag}} \kappa_{\rm dem} (DA).\]

By scaling the rows of a square or wide matrix to have unit norm, we bring the Demmel condition number to within a \sqrt{n} factor of the optimal row scaling. In fact, we even bring the Demmel condition number to within a \sqrt{n} factor of the optimal spectral norm condition number for any row scaling.

Since the convergence rate for randomized Kaczmarz is \kappa_{\rm dem}(A)^{-2}, this result shows that implementing randomized Kaczmarz with uniform sampling yields a convergence rate that is within a factor of n of the optimal convergence rate using any possible sampling distribution.

This result shows us that row equilibration can’t hurt the Demmel condition number by much. But can it help? The following proposition shows that it can help a lot for some problems.

Proposition 2 (Row equilibration can help a lot). Let A\in\real^{n\times d} be wide (n\le d) and full-rank, and let \gamma denote the maximum ratio between two row norms:

    \[\gamma \coloneqq \frac{ \max_i \norm{a_i}}{\min_i \norm{a_i}}.\]

Then the Demmel condition number of the original matrix A satisfies

    \[\kappa_{\rm dem}(A) \le \gamma \cdot \kappa_{\rm dem}(D_A A).\]

Moreover, for every \gamma\ge 1, there exists a matrix A_\gamma where this bound is nearly attained:

    \[\kappa_{\rm dem}(A_\gamma) \ge \sqrt{1-\frac{1}{n}} \cdot \gamma \cdot \kappa_{\rm dem}(D_{A_\gamma}A_\gamma).\]

Taken together, Theorem 1 and Proposition 2 show that row equilibration often improves the Demmel condition number and never increases it by more than a \sqrt{n} factor. Consequently, uniform RK often converges faster than standard RK for square (or short wide) linear systems, and it never converges much slower.

Proof of Theorem 1

We follow Higham’s approach. Each of the n rows of D_AA has unit norm, so

(7)   \[\norm{D_AA}_{\rm F} = \sqrt{n}.\]

The minimum singular value of D_A A can be written in terms of the Moore–Penrose pseudoinverse (D_A A)^\dagger = A^\dagger D_A^{-1} as follows

    \[\frac{1}{\sigma_{\rm min}(D_A A)} = \norm{A^\dagger D_A^{-1}}.\]

Here, \norm{\cdot} denotes the spectral norm. Then for any nonsingular diagonal matrix D, we have

(8)   \[\frac{1}{\sigma_{\rm min}(D_A A)} = \norm{A^\dagger D^{-1} (DD_A^{-1})} \le \norm{A^\dagger D^{-1}} \norm{DD_A^{-1}} = \frac{\norm{DD_A^{-1}}}{\sigma_{\rm min}(DA)}. \]

Since the matrix DD_A^{-1} is diagonal, its spectral norm is

    \[\norm{DD_A^{-1}} = \max \left\{ \frac{|D_{ii}|}{|(D_A)_{ii}|} : 1\le i \le n \right\}.\]

The diagonal entries of D_A are \norm{a_i}^{-1}, so

    \[\norm{DD_A^{-1}} = \max \left\{ |D_{ii}|\norm{a_i} : 1\le i\le n \right\}\]

is the maximum row norm of the scaled matrix DA. The maximum row norm is always at most the largest singular value of DA, so \norm{DD_A^{-1}} \le \sigma_{\rm max}(DA). Therefore, combining this result, (7), and (8), we obtain

    \[\kappa_{\rm dem}(D_AA) \le \sqrt{n} \cdot \frac{\sigma_{\rm max}(DA)}{\sigma_{\rm min}(DA)} = \sqrt{n}\cdot \kappa (DA).\]

Since this bound holds for every D \in \mathrm{Diag}, we are free to minimize over D, leading to the first inequality in the theorem:

    \[\kappa_{\rm dem}(D_AA) \le \sqrt{n}\cdot \min_{D \in \mathrm{Diag}} \kappa (DA).\]

Since the spectral norm condition number is smaller than the Demmel condition number, we obtain the second bound in the theorem.

Proof of Proposition 2

Write A = D_A^{-1}(D_AA). Using the Moore–Penrose pseudoinverse again, write

(10)   \[\kappa_{\rm dem}(A) = \norm{D_A^{-1}(D_AA)}_{\rm F} \norm{(D_A A)^\dagger D_A}.\]

The Frobenius norm and spectral norm satisfy a (mixed) submultiplicative property

    \[\norm{BC}_{\rm F} \le \norm{B}\norm{C}_{\rm F}, \quad \norm{BC} \le\norm{B}\norm{C}.\]

Applying this result to (10), we obtain

    \[\kappa_{\rm dem}(A) \le \norm{D_A^{-1}}\norm{D_AA}_{\rm F} \norm{(D_A A)^\dagger} \norm{D_A}.\]

We recognize \gamma = \norm{D_A^{-1}}\norm{D_A} and \kappa_{\rm dem}(D_A A) = \norm{D_AA}_{\rm F} \norm{(D_A A)^\dagger}. We conclude

    \[\kappa_{\rm dem}(A) \le \gamma \cdot \kappa_{\rm dem}(D_A A).\]

To show this bound is nearly attained, introduce A_\gamma = \diag(\gamma,\gamma,\ldots,\gamma,1). Then D_{A_\gamma} A_\gamma = I with \kappa_{\rm dem}(D_{A_\gamma}A_{\gamma}) = \sqrt{n} and

    \[\kappa_{\rm dem}(A_\gamma) = \frac{\norm{A_{\gamma}}_{\rm F}}{\sigma_{\rm min}(A_\gamma)} = \frac{\sqrt{(n-1)\gamma^2+1}}{1} \ge \sqrt{n} \cdot \sqrt{1-\frac{1}{n}} \cdot \gamma.\]

Therefore,

    \[\kappa_{\rm dem}(A_\gamma) \ge \sqrt{1-\frac{1}{n}} \cdot \gamma \cdot \kappa_{\rm dem}(D_{A_\gamma}A_\gamma).\]

Practical Guidance

What does this theory mean for practice? Ultimately, single-row randomized Kaczmarz is often not the best algorithm for ordinary square (or short–wide) linear systems anyway—block Kaczmarz or (preconditioned) Krylov methods have been faster in my experience. But, supposing that we have locked in (single-row) randomized Kaczmarz as our algorithm, how should we implement it?

This question is hard to answer, because there are examples where standard RK converges faster than uniform RK and examples where the reverse is true. Theorem 1 suggests uniform RK can require up to n\times as many iterations as standard RK on a worst-case example, which can be a big difference for large problems. But, particularly for badly row-scaled problems, Proposition 2 shows that uniform RK can dramatically outcompete standard RK. Ultimately, I would give two answers.

First, if the matrix A has already been carefully designed to be well-conditioned and computing the row norms is not computationally burdensome, then standard RK may be worth the effort. Despite the theory suggesting standard RK can do quite badly, it took a bit of effort to construct a simple example of a “bad” matrix where uniform RK significantly outcompeted standard RK. (On most examples I constructed, the rates of convergence of the two methods were similar.)

Second, particularly for the largest systems where you only want to make a small number of total passes over the matrix, expending a full pass over the matrix to compute the row norms is a significant expense. And, for poorly row-scaled matrices, sampling using the squared row norms can hurt the convergence rate. Based on these observations, given a matrix of unknown row scaling and conditioning or given a small budget of passes over the matrix, I would use the uniform RK method over the standard RK method.

Finally, let me again emphasize that the theoretical results Theorem 1 and Proposition 2 only apply to square or wide matrices A. Uniform RK also appears to work for consistent systems with a tall matrix, but I am unaware of a theoretical result comparing the Demmel condition numbers of D_AA and A that applies to tall matrices. And for inconsistent systems of equations, it’s a whole different story.

Edit: After initial publication of this post, Mark Schmidt shared that the observation that uniform RK can outperform standard RK was made nearly ten years ago in section 4.2 of the following paper. They support this observation with a different mathematical justification.

The Schur Product Theorem

The Schur product theorem states that the entrywise product A\circ M of two positive semidefinite matrices is also positive semidefinite. This post will present every proof I know for this theorem, and I intend to edit it to add additional proofs if I learn of them. (Please reach out if you know another!) My goal in this post is to be short and sweet, so I will assume familiarity with many properties of positive semidefinite matrices.

For this post, a matrix A\in\real^{n\times n} is positive semidefinite (psd, for short) if it is symmetric and satisfies x^\top Ax\ge 0 for all vectors x\in\real^n. All matrices in this post are real, though the proofs we’ll consider also extend to complex matrices. The entrywise product will be denoted \circ and is defined as (A\circ M)_{ij} = A_{ij}M_{ij}. The entrywise product is also known as the Hadamard product or Schur product.

It is also true that the entrywise product of two positive definite matrices is positive definite. The interested reader may enjoy checking which of the proofs below also yield this result.

Proof 1: Trace formula

We start by computing x^\top (A\circ M)x:

    \[x^\top (A\circ M)x = \sum_{i,j=1}^n x_i (A\circ M)_{ij} x_j = \sum_{i,j=1}^n x_i A_{ij} M_{ij} x_j.\]

Now, we may rearrange the sum, use symmetry of M, and repackage it as a trace

    \[x^\top (A\circ M)x = \sum_{i,j=1}^n x_i A_{ij} x_j M_{ji} = \tr(\operatorname{diag}(x) A \operatorname{diag}(x) M).\]

This is the trace formula for quadratic forms in the Schur product.

Recall that a matrix A is psd if and only if it is a Gram matrix (able to be expressed as A = B^\top B). Thus, we may write A = B^\top B and M = C^\top C. Substituting these expressions in the trace formula and invoking the cyclic property of the trace, we get

    \[x^\top (A\circ M)x = \tr(\operatorname{diag}(x) B^\top B \operatorname{diag}(x) C^\top C) = \tr(C\operatorname{diag}(x) B^\top B \operatorname{diag}(x) C^\top).\]

The matrix on the right-hand side has the expression

    \[C\operatorname{diag}(x) B^\top B \operatorname{diag}(x) C^\top = G^\top G \quad \text{for } G = B \operatorname{diag}(x) C^\top.\]

Therefore, it is psd, and so its trace is nonnegative:

    \[x^\top (A\circ M)x = \tr(G^\top G) \ge 0.\]

We have shown x^\top (A\circ M)x\ge 0 for every vector x, so A\circ M is psd.
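
Here is a quick numerical illustration of this argument (an illustrative sketch with random psd matrices): it checks the trace formula and that the Schur product has nonnegative quadratic forms.

% Verify the trace formula x'*(A.*M)*x = trace(diag(x)*A*diag(x)*M) and that
% the Schur product of two random psd matrices is psd. (Illustrative sketch.)
n = 5;
B = randn(n); A = B' * B;                       % random psd matrix A
C = randn(n); M = C' * C;                       % random psd matrix M
x = randn(n, 1);
lhs = x' * (A .* M) * x;
rhs = trace(diag(x) * A * diag(x) * M);
fprintf('|lhs - rhs| = %.2e, min eig of A.*M = %.2e\n', abs(lhs - rhs), min(eig(A .* M)))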

Proof 2: Gram matrix

Since A and M are psd, they may be written as A = B^\top B and M = C^\top C. Letting b_i^\top and c_i^\top denote the ith rows of B and C, we have

    \[A = \sum_i b_ib_i^\top \quad \text{and} \quad M = \sum_j c_jc_j^\top.\]

Computing the Schur product and distributing, we have

    \[A\circ M = \sum_{i,j} (b_ib_i^\top \circ c_jc_j^\top).\]

The Schur product of rank-one matrices b_ib_i^\top and c_jc_j^\top is, by direct computation, (b_i\circ c_j)(b_i\circ c_j)^\top. Thus,

    \[A\circ M = \sum_{i,j} (b_i\circ c_j)(b_i\circ c_j)^\top\]

is a sum of (rank-one) psd matrices and is thus psd.

Proof 3: Covariances

Let x and y be independent random vectors with zero mean and covariance matrices A and M. The vector x\circ y is seen to have zero mean as well. Thus, the ij entry of the covariance matrix \Cov(x\circ y) of x\circ y is

    \[\expect[x_iy_ix_jy_j] = \expect[x_ix_j] \expect[y_iy_j] = A_{ij} M_{ij} = (A\circ M)_{ij}.\]

The second equality uses the independence of x and y, and the third equality uses the fact that A and M are the covariance matrices of x and y. Thus, the covariance matrix of x\circ y is A\circ M. All covariance matrices are psd, so A\circ M is psd as well. (When I first saw this proof, I found it almost magical. Upon closer inspection, however, proof 3 is seen to be essentially a variant of proof 2 where the sums A=\sum_i b_ib_i^\top have been replaced by expectations A = \expect [xx^\top].)

Proof 4: Kronecker product

The Kronecker product A\otimes M of two psd matrices is psd. The entrywise product A\circ M is the principal submatrix of A\otimes M with rows and columns indexed by i + n(i-1) for i = 1,\ldots,n:

    \[A\circ M = \left( (A\otimes M)_{(i+n(i-1))(j+n(j-1))} \right)_{i,j = 1,\ldots,n}.\]

All principal submatrices of a psd matrix are psd, so A\circ M is psd.
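
Here is an illustrative MATLAB check that A\circ M sits inside the Kronecker product as the principal submatrix with rows and columns i + n(i-1):

% Check that A.*M is the principal submatrix of kron(A,M) indexed by i + n*(i-1).
% (Illustrative sketch with random psd matrices.)
n = 4;
B = randn(n); A = B' * B;
C = randn(n); M = C' * C;
K = kron(A, M);
idx = (1:n) + n * ((1:n) - 1);                  % indices i + n(i-1)
max(max(abs(K(idx, idx) - A .* M)))             % should be zero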

My Favorite Proof of the Cauchy–Schwarz Inequality

The Cauchy–Schwarz inequality has many proofs. Here is my favorite, taken from Chapter 3 of The Schur complement and its applications; the book is edited by Fuzhen Zhang, and this chapter was contributed by him as well. Let x,y \in \real^n be vectors, assemble the matrix B = \onebytwo{x}{y}, and form the Gram matrix

    \[A = B^\top B = \onebytwo{x}{y}^\top \onebytwo{x}{y} = \twobytwo{\norm{x}^2}{y^\top x}{x^\top y}{\norm{y}^2}.\]

Since A is a Gram matrix, it is positive semidefinite. Therefore, its determinant is nonnegative:

    \[\det(A) = \norm{x}^2\norm{y}^2 - |x^\top y|^2 \ge 0.\]

Rearrange to obtain the Cauchy–Schwarz inequality

    \[|x^\top y| \le \norm{x}\norm{y}.\]

Equality occurs if and only if A is a rank-one matrix, which occurs if and only if x and y are scalar multiples of one another.

I like this proof because it is perhaps the simplest example of the (block) matrix technique for proving inequalities. Using this technique, one proves inequalities about scalars (or matrices) by embedding them in a clever way into a (larger) matrix. Here is another example of the matrix technique, adapted from the proof of Theorem 12.9 of these lecture notes by Joel Tropp. Jensen’s inequality is a far-reaching and very useful inequality in probability theory. Here is one special case of the inequality.

Proposition (Jensen’s inequality for the inverse): Let x_1,\ldots,x_n be strictly positive numbers. Then the inverse of their average is no bigger than the average of their inverses:

    \[\left( \frac{1}{n} \sum_{i=1}^n x_i \right)^{-1} \le \frac{1}{n} \sum_{i=1}^n \frac{1}{x_i}.\]

To prove this result, embed each x_i into a 2\times 2 positive semidefinite matrix \twobytwo{x_i}{1}{1}{1/x_i}. Taking the average of all such matrices, we observe that

    \[A = \twobytwo{\frac{1}{n} \sum_{i=1}^n x_i}{1}{1}{\frac{1}{n} \sum_{i=1}^n \frac{1}{x_i}}\]

is positive semidefinite as well. Thus, its determinant is nonnegative:

    \[\det(A) = \left(\frac{1}{n} \sum_{i=1}^n x_i\right) \left(\frac{1}{n} \sum_{i=1}^n \frac{1}{x_i}\right) - 1 \ge 0.\]

Rearrange to obtain

    \[\left( \frac{1}{n} \sum_{i=1}^n x_i \right)^{-1} \le \frac{1}{n} \sum_{i=1}^n \frac{1}{x_i}.\]

Remarkably, we have proven a purely scalar inequality by appeals to matrix theory.

The matrix technique for proving inequalities is very powerful. Check out Chapter 3 of The Schur complement and its applications for many more examples.



Rejection Sampling

I am delighted to share that my paper Embrace rejection: Kernel matrix approximation with randomly pivoted Cholesky has been released on arXiv, joint with Joel Tropp and Rob Webber.

On the occasion of this release, I want to talk about rejection sampling, one of the most powerful ideas in probabilistic computing. My goal is to provide an accessible and general introduction to this technique, with a more “applied math” rather than statistical perspective. I will conclude with some discussion of how we use it in our paper.

Motivating Application: Randomized Kaczmarz

Consider the task of solving a linear system of equations Ax = b, where A \in \real^{n\times d} is a matrix and b \in \real^n is a vector. For simplicity, we assume that this system is consistent (i.e., there exists a solution x_\star such that Ax_\star = b). The randomized Kaczmarz algorithm is a mathematically elegant algorithm for solving this problem, due to Strohmer and Vershynin. Beginning with initial solution x \gets 0, randomized Kaczmarz repeatedly performs the following steps:

  1. Sample. Generate a random row index I according to the probability distribution \prob \{I = i\} = \norm{a_i}^2 / \norm{A}_{\rm F}^2. Here, and going forward, a_i^\top denotes the ith row of A and \norm{\cdot}_{\rm F} is the Frobenius norm.
  2. Update. Set x \gets x + (b_I - a_I^\top x)\cdot a_I / \norm{a_I}^2.

Convergence theory of the randomized Kaczmarz iteration is very well-studied; see, e.g., section 4.1 of this survey for an introduction.

For this post, we will consider a more basic question: How can we perform the sampling in step 1? If n and d are not very large, this is an easy problem: Simply sweep through all the rows a_i and compute their squared norms \norm{a_i}^2. Then, sampling can be done using any algorithm for sampling from a weighted list of items. But what if n is very large? Suppose even that it is computationally expensive to read through the matrix a single time to compute the row norms. What can we do now?

Here is one solution using rejection sampling, due to Needell, Ward, and Srebro. Suppose that we know a bound on the row norms of A:

    \[\norm{a_i}^2 \le B \quad \text{for each } i = 1,\ldots,n.\]

For instance, if the entries of A take values in the range [-1,1], then this bound holds with B = d. To sample a row i with probability \norm{a_i}^2/\norm{A}_{\rm F}^2, we perform the following rejection sampling procedure:

  1. Propose. Draw a random row index 1\le J\le n uniformly at random (i.e., J is equally likely to be any row index between 1 and n.)
  2. Accept/reject. Make a random choice: With probability p_J = \norm{a_J}^2/B, accept and set I\gets J. With the remaining probability 1-p_J, reject and return to step 1.

Why does this procedure work? As an intuitive explanation, notice that each row i is equally likely to be proposed and has probability proportional to \norm{a_i}^2 of being accepted. Therefore, under this procedure, each row i has probability proportional to \norm{a_i}^2 of being selected, just as desired for randomized Kaczmarz. If this intuitive justification is unsatisfying, we will give a formal derivation of correctness below (in a more general context).

I find rejection sampling to be super neat. In this case, it allows us to use simple ingredients (uniform row sampling and an upper bound on the row norms) to accomplish a more complicated task (sampling according to the squared row norms). We can sample from an interesting probability distribution (the squared row norm distribution) by thinning out samples from a simple distribution. This is a powerful idea that has many applications. (And, I believe, many more applications are waiting to be discovered.)
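
To make the procedure concrete, here is a minimal MATLAB sketch of this rejection sampling step (illustrative; the function name is mine, and the bound B is an input you would supply):

% Sample a row index with probability proportional to its squared norm by
% rejection sampling, assuming norm(A(i,:))^2 <= B for every row i.
% (Illustrative sketch; only the proposed rows are ever read.)
function i = sample_row_by_rejection(A, B)
n = size(A, 1);
while true
    j = randi(n);                               % propose a uniformly random row
    if rand() < norm(A(j, :))^2 / B             % accept with probability ||a_j||^2 / B
        i = j;
        return
    end
end
end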

Rejection Sampling in General

Let’s now discuss rejection sampling in a bit more generality. Say I have a list of n things each of which has a weight w_i\ge 0. Our goal is to choose a random index I with probability proportional to w_i, i.e., \prob \{ I = i\} = w_i / \sum_{j=1}^n w_j. We will call w the target distribution. (Note that we do not require the weights w_i to be normalized to satisfy \sum_{i=1}^n w_i =1.)

Suppose that sampling from the target distribution is challenging, but we can sample from a proposal distribution \rho, i.e., we can efficiently generate random J such that \prob \{J = i\} = \rho_i / \sum_{j=1}^n \rho_j. Further, suppose that the proposal distribution dominates the target distribution in the sense that

    \[w_i \le \rho_i \quad \text{for each } i=1,\ldots,n.\]

Under this general setting, we can sample I from the target distribution using rejection sampling, similar to above:

  1. Propose. Draw a random row index J from the proposal distribution.
  2. Accept/reject. Make a random choice: With probability p_J = w_J/\rho_J, accept and set I\gets J. With the remaining probability 1-p_J, reject and return to step 1.

We recognize the squared row-norm sampling above as an example of this general strategy with w_i = \norm{a_i}^2 and \rho_i = B for all i.

Let us now convince ourselves more carefully that rejection sampling works. On any given execution of the rejection sampling loop, the probability of proposing index i is \rho_i / \sum_{j=1}^n \rho_j, after which the probability of accepting i is w_i / \rho_i. Therefore, the probability of outputting i at any given loop is

    \[\prob \{\text{$i$ accepted this loop}\} = \frac{\rho_i}{\sum_{j=1}^n \rho_j} \cdot \frac{w_i}{\rho_i} = \frac{w_i}{\sum_{j=1}^n \rho_j}.\]

The probability that some index is accepted on that loop is thus

    \[\prob \{\text{any accepted this loop}\} = \sum_{i=1}^n \prob \{\text{$i$ accepted this loop}\} = \frac{\sum_{i=1}^n w_i}{\sum_{i=1}^n \rho_i}.\]

The rejection sampling loop accepts with fixed positive probability each execution, so it eventually accepts with 100% probability. Thus, the probability of accepting i conditional on some index being accepted is

    \[\prob \{\text{$i$ accepted this loop} \mid \text{any accepted this loop}\} = \frac{\prob \{\text{$i$ accepted this loop}\}}{\prob \{\text{any accepted this loop}\}} = \frac{w_i}{\sum_{j=1}^n w_j}.\]

Therefore, rejection sampling outputs a sample I from the target distribution w as promised.

As this analysis shows, the probability of accepting on any given execution of the rejection sampling loop is \sum_{i=1}^n w_i / \sum_{i=1}^n \rho_i, the ratio of the total mass of the target w_i to the total mass of the proposal \rho_i. Thus, rejection sampling will have a high acceptance rate if w_i \approx \rho_i and a low acceptance rate if w_i \ll \rho_i. The conclusion for practice is that rejection sampling is only computationally efficient if one has access to a good proposal distribution \rho_i that is both easy to sample from and close to the target, w_i\approx \rho_i.

Here, we have presented rejection sampling as a strategy for sampling from a discrete set of items i=1,\ldots,n, but this is not all that rejection sampling can do. Indeed, one can also use rejection sampling to sample a real-valued parameter x \in \real or a multivariate parameter x \in \real^n from a given (unnormalized) probability density function (or, if one likes, an unnormalized probability measure).

Before moving on, let us make one final observation. A very convenient thing about rejection sampling is that we only need the unnormalized probability weights w_i and \rho_i to implement the procedure, even though the total weights \sum_{j=1}^n w_j and \sum_{j=1}^n \rho_j are necessary to define the sampling probabilities \prob \{I = i\} = w_i / \sum_{j=1}^n w_j and \prob \{J = i\} = \rho_i / \sum_{j=1}^n \rho_j. This fact is crucial for many applications of rejection sampling. For instance, in the randomized Kaczmarz example, rejection sampling would be much less useful if we needed the total weight \sum_{i=1}^n \norm{a_i}^2 = \norm{A}_{\rm F}^2, as computing \norm{A}_{\rm F}^2 requires a full pass over the data.
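
To make the general recipe concrete, here is an illustrative MATLAB sketch of a generic rejection sampler (the names are mine, not from any library). The target weights are supplied as a function handle w_fun, so they are only evaluated for proposed indices, and rho is a vector of unnormalized proposal weights dominating the target:

% Generic rejection sampling from unnormalized target weights w_fun(i), using
% unnormalized proposal weights rho with w_fun(i) <= rho(i) for all i.
% (Illustrative sketch.)
function i = rejection_sample(w_fun, rho)
cum = cumsum(rho(:)); cum = cum / cum(end);     % CDF of the proposal distribution
while true
    j = find(cum >= rand(), 1);                 % propose j with probability rho_j / sum(rho)
    if rand() < w_fun(j) / rho(j)               % accept with probability w_j / rho_j
        i = j;
        return
    end
end
end

For instance, the squared row norm sampler from the randomized Kaczmarz example above is rejection_sample(@(i) norm(A(i,:))^2, B*ones(size(A,1),1)).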

Application: Randomly pivoted Cholesky

Another application of rejection sampling is to accelerate the randomly pivoted Cholesky algorithm, the subject of my recent paper. Here, I’ll provide an overview of the main idea with a focus on the role of rejection sampling in the procedure. See the paper for more details. Warning: Even as simplified here, this section presents a significantly more involved application of rejection sampling than we saw previously with randomized Kaczmarz!

Randomly pivoted Cholesky (RPCholesky) is a simple, but powerful algorithm for computing a low-rank approximation to a symmetric positive semidefinite matrix A, and it can be used to accelerate kernel machine learning algorithms.

Conceptually, RPCholesky works as follows. Let A^{(0)} \coloneqq A denote the initial matrix and \hat{A}^{(0)} \coloneqq 0 denote a trivial initial approximation. For j=0,1,2,\ldots,k-1, RPCholesky performs the following steps:

  1. Random sampling. Draw a random index I_{j+1} with probability \prob \{ I_{j+1} = i\} = A^{(j)}_{ii} / \sum_{i=1}^n A^{(j)}_{ii}.
  2. Update approximation. Update \hat{A}^{(j+1)} \gets \hat{A}^{(j)} + a_i^{(j)}(a_i^{(j)})^\top / A_{ii}^{(j)}, where i = I_{j+1} and a_i^{(j)} denotes the ith column of A^{(j)}.
  3. Update matrix. Update A^{(j+1)} \gets A^{(j)} - a_i^{(j)}(a_i^{(j)})^\top / A_{ii}^{(j)}.

The result of this procedure is a rank-k approximation \hat{A}^{(k)}. With an optimized implementation, RPCholesky requires only O(k^2n) operations and evaluates just (k+1)n entries of the input matrix A.

The standard RPCholesky algorithm is already fast, but there is room for improvement. The standard RPCholesky method processes the matrix A one column a_i at a time, but modern computers are faster at processing blocks of columns. Using the power of rejection sampling, we can develop faster versions of RPCholesky that take advantage of blocked computations. These blocked implementations are important: While we could handle a million-by-million matrix using the standard RPCholesky method, blocked implementations can be up to 40\times faster. In this post, I’ll describe the simplest way of using rejection sampling to implement RPCholesky.

To set the stage, let’s first establish some notation and make a few observations. Let S_j = \{I_1,\ldots,I_j\} denote the first j selected random indices. With a little matrix algebra, one can show that the approximation \hat{A}^{(j)} produced by j steps of RPCholesky obeys the formula below. (For those interested, observe that \hat{A}^{(j)} is an example of a Nyström approximation. The connection between Nyström approximation and Cholesky decomposition is explained in this previous post of mine.)

(1)   \[\hat{A}^{(j)} = A(:,S_j) A(S_j,S_j)^{-1}A(S_j,:). \]

Here, we are using MATLAB notation for indexing the matrix A. The residual matrix A^{(j)} is given by

    \[A^{(j)} = A - \hat{A}^{(j)} = A - A(:,S_j) A(S_j,S_j)^{-1}A(S_j,:).\]

Importantly for us, observe that we can evaluate the ith diagonal entry of A^{(j)} using the formula

(2)   \[A^{(j)}_{ii} = A_{ii} - A(i,S_j) A(S_j,S_j)^{-1} A(S_j,i).\]

Compared to operating on the full input matrix A, evaluating an entry A^{(j)}_{ii} is cheap, just requiring some arithmetic involving matrices and vectors of size j.

Here, we see a situation similar to randomized Kaczmarz. At each step of RPCholesky, we must sample a random index I_{j+1} from a target distribution w_i = A_{ii}^{(j)}, but it is expensive to evaluate all of the w_i's. This is a natural setting for rejection sampling. As proposal distribution, we may use the initial diagonal of A, \rho_i = A_{ii}. (One may verify that, as required, w_i = A_{ii}^{(j)} \le A_{ii} = \rho_i for all i.) This leads to a rejection sampling implementation for RPCholesky:

  1. Sample indices. For j=0,1,\ldots,k-1, sample random indices I_{j+1} from the target distribution w_i = A^{(j)}_{ii} using rejection sampling with proposal distribution \rho_i = A_{ii}. Entries A_{ii}^{(j)} are evaluated on an as-needed basis using (2).
  2. Build the approximation. Form the approximation \hat{A}^{(k)} all at once using (1).

By using rejection sampling, we have gone from an algorithm which handles the matrix one column at a time to a new algorithm which processes columns of the matrix A all at once through the formula (1). In the right situations, this new algorithm can be significantly faster than the original RPCholesky method. (In its essence, this rejection sampling algorithm for RPCholesky was first proposed in this paper of mine, which uses rejection sampling to apply RPCholesky to infinite-dimensional positive definite kernel operators. The ability to handle infinite-dimensional sampling problems is another great strength of the rejection sampling approach.)
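
Here is a minimal MATLAB sketch of this rejection sampling implementation (the function name and interface are my own; the accelerated method in our paper is more refined, processing proposals in blocks and handling numerical edge cases):

function [F,S] = rpcholesky_rejection(A,k)
   rho = diag(A);                               % proposal weights rho_i = A_ii
   S = [];
   while numel(S) < k
      i = find(cumsum(rho)/sum(rho) > rand, 1); % propose an index from rho
      w = A(i,i) - A(i,S)*(A(S,S)\A(S,i));      % target weight w_i = A^(j)_ii via (2)
      if rand < w/rho(i)                        % accept with probability w_i/rho_i
         S(end+1) = i;
      end
   end
   F = A(:,S)/chol(A(S,S));  % Ahat = F*F' = A(:,S)*A(S,S)^(-1)*A(S,:), as in (1)
end

Note that a proposed index already in S has target weight w_i = 0 and is therefore always rejected, so no pivot is selected twice.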

We have now seen two extreme cases: the standard RPCholesky algorithm that processes the columns of A one at a time and a rejection sampling implementation that operates on the columns of A all at once. In practice, the fastest algorithm sits between these extremes, using rejection sampling to sample column indices in moderate-size blocks (say, between 100 and 1000 columns at a time). This “goldilocks” algorithm is the accelerated RPCholesky method that we introduce in our paper. Check it out for details!

Conclusion: Rejection Sampling, an Algorithm Design Tool

We’ve now seen two applications, randomized Kaczmarz and randomly pivoted Cholesky, where rejection sampling can be used to speed up an algorithm. In both cases, we wanted to sample from a distribution over n row or column indices, but it was expensive to perform a census to compute the probability weight of each item individually. Rejection sampling offers a different approach, where we generate samples from a cheap-to-sample proposal distribution and only evaluate the probability weights of the proposed items. In the right situations, this rejection sampling approach can lead to significant computational speedups.

Rejection sampling has been used by statistics-oriented folks for decades, but the power of this technique seems less well-known to researchers in applied math, scientific computation, and related areas. I think there are a lot more exciting ways that we can use rejection sampling to design better, faster randomized algorithms.

Neat Randomized Algorithms: RandDiag for Rapidly Diagonalizing Normal Matrices

Consider two complex-valued square matrices A\in\complex^{n\times n} and B\in\complex^{n\times n}. The first matrix A is Hermitian, meaning it equals its conjugate transpose, A = A^*. The other matrix B is non-Hermitian, B \ne B^*. Let’s see how long it takes to compute their eigenvalue decompositions in MATLAB:

>> A = randn(1e3) + 1i*randn(1e3); A = (A+A')/2;
>> tic; [V_A,D_A] = eig(A); toc % Hermitian matrix
Elapsed time is 0.415145 seconds.
>> B = randn(1e3) + 1i*randn(1e3);
>> tic; [V_B,D_B] = eig(B); toc % non-Hermitian matrix
Elapsed time is 1.668246 seconds.

We see that it takes 4\times longer to compute the eigenvalues of the non-Hermitian matrix B as compared to the Hermitian matrix A. Moreover, the matrix V_A of eigenvectors for a Hermitian matrix A = V_AD_AV_A^{-1} is a unitary matrix, V_A^*V_A = V_AV_A^* = I.

There is another class of matrices with nice eigenvalue decompositions: the normal matrices. A square, complex-valued matrix C is normal if C^*C = CC^*. The matrix V_C of eigenvectors for a normal matrix C = V_C D_C V_C^{-1} is also unitary, V_C^*V_C = V_CV_C^* = I. An important class of normal matrices is the unitary matrices themselves. A unitary matrix U is always normal since it satisfies U^*U = UU^* = I.

Let’s see how long it takes MATLAB to compute the eigenvalue decomposition of a unitary (and thus normal) matrix:

>> U = V_A;                     % unitary, and thus normal, matrix
>> tic; [V_U,D_U] = eig(U); toc % normal matrix
Elapsed time is 2.201017 seconds.

Even longer than it took to compute an eigenvalue decomposition of the non-normal matrix B! Can we make the normal eigenvalue decomposition closer to the speed of the Hermitian eigenvalue decomposition?

Here is the start of an idea. Every square matrix C has a Cartesian decomposition:

    \[C = H + iS, \quad H = \frac{C+C^*}{2}, \quad S = \frac{C-C^*}{2i}.\]

We have written C as a combination of its Hermitian part H and i times its skew-Hermitian part S. Both H and S are Hermitian matrices. The Cartesian decomposition of a square matrix is analogous to the decomposition of a complex number into its real and imaginary parts.
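
Here is a tiny MATLAB check of these identities (my own illustration):

C = randn(4) + 1i*randn(4);        % arbitrary square matrix
H = (C+C')/2; S = (C-C')/(2i);     % Hermitian and skew-Hermitian parts
norm(C - (H + 1i*S))               % ~ 0, so C = H + iS
norm(H-H') + norm(S-S')            % ~ 0, so both parts are Hermitian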

For a normal matrix C, the Hermitian and skew-Hermitian parts commute, HS = SH. We know from matrix theory that commuting Hermitian matrices are simultaneously diagonalizable, i.e., there exists Q such that H = QD_HQ^* and S = QD_SQ^* for diagonal matrices D_H and D_S. Thus, given access to such Q, C has eigenvalue decomposition

    \[C = Q(D_H+iD_S)Q^*.\]

Here’s a first attempt to turn this insight into an algorithm. First, compute the Hermitian part H of C, diagonalize H = QD_HQ^*, and then see if Q diagonalizes C. Let’s test this out on a 2\times 2 example:

>> C = orth(randn(2) + 1i*randn(2)); % unitary matrix
>> H = (C+C')/2;                     % Hermitian part
>> [Q,~] = eig(H);
>> Q'*C*Q                            % check to see if diagonal
ans =
  -0.9933 + 0.1152i  -0.0000 + 0.0000i
   0.0000 + 0.0000i  -0.3175 - 0.9483i

Yay! We’ve succeeded at diagonalizing the matrix C using only a Hermitian eigenvalue decomposition. But we should be careful about declaring victory too early. Here’s a bad example:

>> C = [1 1i;1i 1]; % normal matrix
>> H = (C+C')/2;
>> [Q,~] = eig(H);
>> Q'*C*Q           % oh no! not diagonal
ans =
   1.0000 + 0.0000i   0.0000 + 1.0000i
   0.0000 + 1.0000i   1.0000 + 0.0000i

What’s going on here? The issue is that the Hermitian part H = I for this matrix has a repeated eigenvalue. Thus, H has multiple different valid matrices of eigenvectors. (In this specific case, every unitary matrix Q diagonalizes H.) By looking at H alone, we don’t know which matrix Q of eigenvectors to pick so that it also diagonalizes S.

He and Kressner developed a beautifully simple randomized algorithm called RandDiag to circumvent this failure mode. The idea is straightforward:

  1. Form a random linear combination M = \gamma_1 H + \gamma_2 S of the Hermitian and skew-Hermitian parts of C, with standard normal random coefficients \gamma_1 and \gamma_2.
  2. Compute Q that diagonalizes M.

That’s it!

To get a sense of why He and Kressner’s algorithm works, suppose that H has some repeated eigenvalues and S has all distinct eigenvalues. Given this setup, it seems likely that a random linear combination of S and H will also have all distinct eigenvalues. (It would take very special circumstances for a random linear combination to yield two eigenvalues that are exactly the same!) Indeed, this intuition is a fact: with 100% probability, a matrix Q diagonalizing a Gaussian random linear combination of simultaneously diagonalizable matrices H and S also diagonalizes H and S individually.

MATLAB code for RandDiag is as follows:

function Q = rand_diag(C)
   H = (C+C')/2; S = (C-C')/2i;  % Hermitian and skew-Hermitian parts
   M = randn*H + randn*S;        % random linear combination
   [Q,~] = eig(M);               % Q diagonalizes M, and hence H, S, and C
end

When applied to our hard 2\times 2 example from before, RandDiag succeeds at giving a matrix that diagonalizes C:

>> Q = rand_diag(C);
>> Q'*C*Q
ans =
   1.0000 - 1.0000i  -0.0000 + 0.0000i
  -0.0000 - 0.0000i   1.0000 + 1.0000i

For computing the matrix of eigenvectors for a 1000\times 1000 unitary matrix, RandDiag takes 0.4 seconds, just as fast as the Hermitian eigendecomposition did.

>> tic; V_U = rand_diag(U); toc
Elapsed time is 0.437309 seconds.

He and Kressner’s algorithm is delightful. Ultimately, it uses randomness in only a small way. For most coefficients a_1,a_2 \in \real, a matrix Q diagonalizing a_1 H + a_2 S will also diagonalize C = H+iS. But, for any specific choice of a_1,a_2, there is a possibility of failure. To avoid this possibility, we can just pick a_1 and a_2 at random. It’s really as simple as that.

References: RandDiag was proposed in A simple, randomized algorithm for diagonalizing normal matrices by He and Kressner (2024), building on their earlier work in Randomized Joint Diagonalization of Symmetric Matrices (2022), which considers the general case of using random linear combinations to (approximately) simultaneously diagonalize (nearly) commuting matrices. RandDiag is an example of a linear algebraic algorithm that uses randomness to put the input into “general position”; see Randomized matrix computations: Themes and variations by Kireeva and Tropp (2024) for a discussion of this and other ways of using randomness to design matrix algorithms.

Neat Randomized Algorithms: Randomized Cholesky QR

As a research area, randomized numerical linear algebra (RNLA) is as hot as ever. To celebrate the exciting work in this space, I’m starting a new series on my blog where I celebrate cool recent algorithms in the area. At some future point, I might talk about my own work in this series, but for now I’m hoping to use this series to highlight some of the awesome work being done by my colleagues.


Given a tall matrix A \in \real^{m\times n} with m\ge n, its (economy-size) QR factorization is a decomposition of the form A = QR, where Q \in \real^{m\times n} is a matrix with orthonormal columns and R \in \real^{n\times n} is upper triangular. QR factorizations are used to solve least-squares problems and as a computational procedure to orthonormalize the columns of a matrix.

Here’s an example in MATLAB, where we use QR factorization to orthonormalize the columns of a 10^6\times 10^2 test matrix. It takes about 2.5 seconds to run.

>> A = randn(1e6, 1e2) * randn(1e2) * randn(1e2); % test matrix
>> tic; [Q,R] = qr(A,"econ"); toc
Elapsed time is 2.647317 seconds.

The classical algorithm for computing a QR factorization uses Householder reflectors and is exceptionally numerically stable. Since Q has orthonormal columns, Q^\top Q = I is the identity matrix. Indeed, this relation holds up to a tiny error for the Q computed by Householder QR:

>> norm(Q'*Q - eye(1e2)) % || Q^T Q - I ||
ans =
   7.0396e-14

The relative error \|A - QR\|/\|A\| is also small:

>> norm(A - Q*R) / norm(A)
ans =
   4.8981e-14

Here is an alternate procedure for computing a QR factorization, known as Cholesky QR:

function [Q,R] = cholesky_qr(A)
   R = chol(A'*A);
   Q = A / R;       % Q = A * R^{-1}
end

This algorithm works by forming A^\top A, computing its (upper triangular) Cholesky decomposition A^\top A = R^\top R, and setting Q = AR^{-1}. Cholesky QR is very fast, about 5\times faster than Householder QR for this example:

>> tic; [Q,R] = cholesky_qr(A); toc
Elapsed time is 0.536694 seconds.

Unfortunately, Cholesky QR is much less accurate and numerically stable than Householder QR. Here, for instance, is the value of \|Q^\top Q - I\|, about ten million times larger than for Householder QR:

>> norm(Q'*Q - eye(1e2))
ans =
   7.5929e-07

What’s going on? As we’ve discussed before on this blog, forming A^\top A is typically problematic in linear algebraic computations. The “badness” of a matrix A is measured by its condition number, defined to be the ratio of its largest and smallest singular values \kappa(A) = \sigma_{\rm max}(A)/\sigma_{\rm min}(A). The condition number of A^\top A is the square of the condition number of A, \kappa(A^\top A) = \kappa(A)^2, which is at the root of Cholesky QR’s loss of accuracy. Thus, Cholesky QR is only appropriate for matrices that are well-conditioned, having a small condition number \kappa(A) \approx 1, say \kappa(A) \le 10.
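
To see this squaring effect numerically, here is a quick check (the test matrix is my own choice for illustration):

A = randn(1000,100)*diag(logspace(0,4,100)); % ill-conditioned test matrix
cond(A)      % roughly 1e4
cond(A'*A)   % roughly 1e8, the square of cond(A)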

The idea of randomized Cholesky QR is to use randomness to precondition A, producing a matrix R_1 such that B = AR_1^{-1} is well-conditioned. Then, since B is well-conditioned, we can apply ordinary Cholesky QR to it without issue. Here are the steps:

  1. Draw a sketching matrix S of size 2n\times m; see these posts of mine for an introduction to sketching.
  2. Form the sketch SA. This step compresses the very tall m\times n matrix A to the much shorter 2n\times n matrix SA.
  3. Compute a QR factorization SA = Q_1R_1 using Householder QR. Since the matrix SA is small, this factorization will be quick to compute.
  4. Form the preconditioned matrix B = AR_1^{-1}.
  5. Apply Cholesky QR to B to compute B = QR_2.
  6. Set R = R_2R_1. Observe that A = BR_1 = QR_2R_1 = QR, as desired.

MATLAB code for randomized Cholesky QR is provided below. (Code for the sparsesign subroutine can be found here.)

function [Q,R] = rand_cholesky_qr(A)
   S = sparsesign(2*size(A,2),size(A,1),8); % sparse sign embedding
   R1 = qr(S*A,"econ"); % sketch and (Householder) QR factorize
   B = A / R1; % B = A * R_1^{-1}
   [Q,R2] = cholesky_qr(B);
   R = R2*R1;
end

Randomized Cholesky QR is still faster than ordinary Householder QR, about 3\times faster in our experiment:

>> tic; [Q,R] = rand_cholesky_qr(A); toc
Elapsed time is 0.920787 seconds.

Randomized Cholesky QR greatly improves on ordinary Cholesky QR in terms of accuracy and numerical stability. In fact, the size of \|Q^\top Q - I\| is even smaller than for Householder QR!

>> norm(Q'*Q - eye(1e2))
ans =
   1.0926e-14

The relative error \|A - QR\|/\|A\| is small, too! Even smaller than for Householder QR in fact:

>> norm(A - Q*R) / norm(A)
ans =
   4.0007e-16

Like many great ideas, randomized Cholesky QR was developed independently by a number of research groups. A version of this algorithm was first introduced in 2021 by Fan, Guo, and Lin. Similar algorithms were investigated in 2022 and 2023 by Balabanov, Higgins et al., and Melnichenko et al. Check out Melnichenko et al.‘s paper in particular, which shows very impressive results for using randomized Cholesky QR to compute column pivoted QR factorizations.

References: Primary references are A Novel Randomized XR-Based Preconditioned CholeskyQR Algorithm by Fan, Guo, and Lin (2021); Randomized Cholesky QR factorizations by Balabanov (2022); Analysis of Randomized Householder-Cholesky QR Factorization with Multisketching by Higgins et al. (2023); CholeskyQR with Randomization and Pivoting for Tall Matrices (CQRRPT) by Melnichenko et al. (2023). The idea of using sketching to precondition tall matrices originates in the paper A fast randomized algorithm for overdetermined linear least-squares regression by Rokhlin and Tygert (2008).

Don’t Use Gaussians in Stochastic Trace Estimation

Suppose we are interested in estimating the trace \tr(A) = \sum_{i=1}^n A_{ii} of an n\times n matrix A that can only be accessed through matrix–vector products Ax_1,\ldots,Ax_m. The classical method for this purpose is the Girard–Hutchinson estimator

    \[\hat{\tr} = \frac{1}{m} \left( x_1^\top Ax_1 + \cdots + x_m^\top Ax_m \right),\]

where the vectors x_1,\ldots,x_m are independent, identically distributed (iid) random vectors satisfying the isotropy condition

    \[\expect[x_ix_i^\top] = I.\]

Examples of vectors satisfying this condition include standard Gaussian random vectors, random sign vectors (with independent uniform \pm 1 entries), and vectors drawn uniformly at random from the sphere of radius \sqrt{n}.
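
As a concrete reference point, here is a minimal MATLAB sketch of the Girard–Hutchinson estimator using random sign test vectors (the function name and interface are my own choices; any isotropic distribution could be substituted):

function tr_hat = girard_hutchinson(A,m)
   n = size(A,1);
   X = sign(randn(n,m));             % m independent random sign test vectors
   tr_hat = sum(sum(X.*(A*X)))/m;    % average of the quadratic forms x_i'*A*x_i
end

For instance, girard_hutchinson(A,100) averages one hundred quadratic forms to estimate \tr(A).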

Stochastic trace estimation has a number of applications: log-determinant computations in machine learning, partition function calculations in statistical physics, generalized cross validation for smoothing splines, and triangle counting in large networks. Several improvements to the basic Girard–Hutchinson estimator have been developed recently. I am partial to XTrace, an improved trace estimator that I developed with my collaborators.

This post is addressed at the question:

Which distribution should be used for the test vectors x_i for stochastic trace estimation?

Since the Girard–Hutchinson estimator is unbiased \expect[\hat{\tr}] = \tr(A), the variance of \hat{\tr} is equal to the mean-square error. Thus, the lowest variance trace estimate is the most accurate. In my previous post on trace estimation, I discussed formulas for the variance \Var(\hat{\tr}) of the Girard–Hutchinson estimator with different choices of test vectors. In that post, I stated the formulas for different choices of test vectors (Gaussian, random signs, sphere) and showed how those formulas could be proven.

In this post, I will take the opportunity to editorialize on which distribution to pick. The thesis of this post is as follows:

The sphere distribution is essentially always preferable to the Gaussian distribution for trace estimation.

To explain why, let’s focus on the case when A is real and symmetric. (The same principles hold in the general case, but the variance formulas are more delicate to state; see my previous post for the formulas.) Let \lambda_1,\ldots,\lambda_n be the eigenvalues of A and define the eigenvalue mean

    \[\overline{\lambda} = \frac{\lambda_1 + \cdots + \lambda_n}{n}.\]

Then the variance of the Girard–Hutchinson estimator with Gaussian vectors x_i is

    \[\Var(\hat{\tr}_{\rm Gaussian}) = \frac{1}{m} \cdot 2 \sum_{i=1}^n \lambda_i^2.\]

For vectors x_i drawn from the sphere, we have

    \[\Var(\hat{\tr}_{\rm sphere}) = \frac{1}{m} \cdot \frac{n}{n+2} \cdot 2\sum_{i=1}^n (\lambda_i - \overline{\lambda})^2.\]

The sphere distribution improves on the Gaussian distribution in two ways. First, \Var(\hat{\tr}_{\rm sphere}) is smaller than \Var(\hat{\tr}_{\rm Gaussian}) by a factor of n/(n+2) < 1. This improvement is quite minor. Second, and more importantly, \Var(\hat{\tr}_{\rm Gaussian}) is proportional to the sum of A’s squared eigenvalues, whereas \Var(\hat{\tr}_{\rm sphere}) is proportional to the sum of A’s squared eigenvalues after they have been shifted to be mean-zero!

The difference between Gaussian and sphere test vectors can be large. To see this, consider a 1000\times 1000 matrix A with eigenvalues uniformly distributed between 0.9 and 1.1 and a (Haar orthogonal) random matrix of eigenvectors. For simplicity, since the variance of every Girard–Hutchinson estimate is proportional to 1/m, we take m=1. The table below shows the variance of the Girard–Hutchinson estimator for different distributions of the test vector. We see that the sphere distribution leads to a trace estimate whose variance is 300\times smaller than with the Gaussian distribution. For this example, the sphere and random sign distributions perform similarly.

Distribution   | Variance (divided by \tr(A)^2)
Gaussian       | 2.0\times 10^{-3}
Sphere         | 6.7\times 10^{-6}
Random signs   | 6.7\times 10^{-6}
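
For the curious, here is a short MATLAB computation (my own illustration) that reproduces the variances in the table from the formulas above, taking m=1; the random sign row uses the off-diagonal formula stated in the next section.

n = 1000;
lambda = 0.9 + 0.2*rand(n,1);          % eigenvalues uniform between 0.9 and 1.1
[Q,~] = qr(randn(n));                  % (approximately Haar) random orthogonal eigenvectors
A = Q*diag(lambda)*Q';                 % test matrix
tr2 = trace(A)^2;
var_gaussian = 2*sum(lambda.^2)/tr2                            % roughly 2.0e-3
var_sphere = (n/(n+2))*2*sum((lambda-mean(lambda)).^2)/tr2     % roughly 6.7e-6
var_signs = 2*(norm(A,"fro")^2 - sum(diag(A).^2))/tr2          % roughly 6.7e-6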

Which Distribution Should You Use: Signs vs. Sphere

The main point of this post is to argue against using the Gaussian distribution. But which distribution should you use: Random signs? The sphere distribution? The answer, for most applications, is one of those two, but exactly which depends on the properties of the matrix A.

The variance of the Girard–Hutchinson estimator with random sign test vectors is

    \[\Var(\hat{\tr}_{\rm signs}) = \frac{1}{m} \cdot 2 \sum_{i\ne j} A_{ij}^2.\]

Thus, \Var(\hat{\tr}_{\rm signs}) depends on the size of the off-diagonal entries of A; \Var(\hat{\tr}_{\rm signs}) does not depend on the diagonal of A at all! For matrices with small off-diagonal entries (such as diagonally dominant matrices), the random signs distribution is often the best.

However, for other problems, the sphere distribution is preferable to random signs. The sphere distribution is rotation-invariant, so \Var(\hat{\tr}_{\rm sphere}) is independent of the eigenvectors of the (symmetric) matrix A, depending only on A’s eigenvalues. By contrast, the variance of the Girard–Hutchinson estimator with the random signs distribution can depend significantly on the eigenvectors of the matrix A. For a given set of eigenvalues and the worst-case choice of eigenvectors, \Var(\hat{\tr}_{\rm sphere}) will always be smaller than \Var(\hat{\tr}_{\rm signs}). In fact, the sphere distribution is the minimum-variance distribution for Girard–Hutchinson trace estimation on a matrix with fixed eigenvalues and worst-case eigenvectors; see this section of my previous post for details.

In my experience, random signs and the sphere distribution are both perfectly adequate for trace estimation and either is a sensible default if you’re developing software. The Gaussian distribution on the other hand… don’t use it unless you have a good reason to.

How Good Can Stochastic Trace Estimates Be?

I am excited to share that our paper XTrace: Making the most of every sample in stochastic trace estimation has been published in the SIAM Journal on Matrix Analysis and Applications. (See also our paper on arXiv.)

Spurred by this exciting news, I wanted to take the opportunity to share one of my favorite results in randomized numerical linear algebra: a “speed limit” result of Meyer, Musco, Musco, and Woodruff that establishes a fundamental limitation on how accurate any trace estimation algorithm can be.

Let’s back up. Given an unknown square matrix A, the trace of A is defined to be the sum of its diagonal entries:

    \[\tr(A) \coloneqq \sum_{i=1}^n A_{ii}.\]

The catch? We assume that we can only access the matrix A through matrix–vector products (affectionately known as “matvecs”): Given any vector x, we have access to Ax. Our goal is to form an estimate \hat{\tr} that is as accurate as possible while using as few matvecs as we can get away with.

To simplify things, let’s assume the matrix A is symmetric and positive (semi)definite. The classical algorithm for trace estimation is due to Girard and Hutchinson, producing a probabilistic estimate \hat{\tr} with a small average (relative) error:

    \[\expect\left[\frac{|\hat{\tr}-\tr(A)|}{\tr(A)}\right] \le \varepsilon \quad \text{using } m= \frac{\rm const}{\varepsilon^2} \text{ matvecs}.\]

If one wants high accuracy, this algorithm is expensive. To achieve just a 1% error (\varepsilon=0.01) requires roughly m=10,\!000 matvecs!

This state of affairs was greatly improved by Meyer, Musco, Musco, and Woodruff. Building upon previous work, they proposed the Hutch++ algorithm and proved it outputs an estimate \hat{\tr} satisfying the following bound:

(1)   \[\expect\left[\frac{|\hat{\tr}-\tr(A)|}{\tr(A)}\right] \le \varepsilon \quad \text{using } m= \frac{\rm const}{\varepsilon} \text{ matvecs}.\]

Now, we only require roughly m=100 matvecs to achieve 1% error! Our algorithm, XTrace, satisfies the same error guarantee (1) as Hutch++. On certain problems, XTrace can be quite a bit more accurate than Hutch++.

The MMMW Trace Estimation “Speed Limit”

Given the dramatic improvement of Hutch++ and XTrace over Girard–Hutchinson, it is natural to hope: Is there an algorithm that does even better than Hutch++ and XTrace? For instance, is there an algorithm satisfying an even slightly better error bound of the form

    \[\expect\left[\frac{|\hat{\tr}-\tr(A)|}{\tr(A)}\right] \le \varepsilon \quad \text{using } m= \frac{\rm const}{\varepsilon^{0.999}} \text{ matvecs}?\]

Unfortunately not. Hutch++ and XTrace are essentially as good as it gets.

Let’s add some fine print. Consider an algorithm for the trace estimation problem. Whenever the algorithm wants, it can present a vector x_i and receive back Ax_i. The algorithm is allowed to be adaptive: It can use the matvecs Ax_1,\ldots,Ax_s it has already collected to decide which vector x_{s+1} to present next. We measure the cost of the algorithm in terms of the number of matvecs alone, and the algorithm knows nothing about the psd matrix A other than what it learns from matvecs.

One final stipulation:

Simple entries assumption. We assume that the entries of the vectors x_i presented by the algorithm are real numbers between -1 and 1 with up to b digits after the decimal place.

To get a feel for this simple entries assumption, suppose we set b=2. Then (-0.92,0.17) would be an allowed input vector, but (0.232,-0.125) would not be (too many digits after the decimal place). Similarly, (18.3,2.4) would not be valid because its entries exceed 1. The simple entries assumption is reasonable, as digital computers typically store numbers to a fixed number of digits of accuracy. (Computers usually use floating point numbers, which essentially represent numbers in scientific notation like 1.3278123 \times 10^{27}; for this analysis of trace estimation, we use fixed point numbers like 0.23218, with no powers of ten allowed.)

With all these stipulations, we are ready to state the “speed limit” for trace estimation proved by Meyer, Musco, Musco, and Woodruff:

Informal theorem (Meyer, Musco, Musco, Woodruff). Under the assumptions above, there is no trace estimation algorithm producing an estimate \hat{\tr} satisfying

    \[\expect\left[\frac{|\hat{\tr}-\tr(A)|}{\tr(A)}\right] \le \varepsilon \quad \text{using } m= \frac{\rm const}{\varepsilon^{0.999}} \text{ matvecs}.\]

We will see a slightly sharper version of the theorem below, but this statement captures the essence of the result.

Communication Complexity

To prove the MMMW theorem, we have to take a journey to the beautiful subject of communication complexity. The story is this. Alice and Bob are interested in solving a computational problem together. Alice has her input x and Bob has his input y, and they are interested in computing a function f(x,y) of both their inputs.

Unfortunately for the two of them, Alice and Bob are separated by a great distance, and can only communicate by sending single bits (0 or 1) of information over a slow network connection. Every bit of communication is costly. The field of communication complexity is dedicated to determining how efficiently Alice and Bob are able to solve problems of this form.

The Gap-Hamming problem is one example of a problem studied in communication complexity. As inputs, Alice and Bob receive vectors x,y \in \{\pm 1\}^n with +1 and -1 entries from a third party Eve. Eve promises Alice and Bob that their vectors x and y satisfy one of two conditions:

(2)   \[\text{Case 0: } x^\top y \ge\sqrt{n} \quad \text{or} \quad \text{Case 1: } x^\top y \le -\sqrt{n}. \]

Alice and Bob must work together, sending as few bits of communication as possible, to determine which case they are in.

There’s one simple solution to this problem: First, Bob sends his whole input vector y to Alice. Each entry of y takes one of the two values \pm 1 and can therefore be communicated in a single bit. Having received y, Alice computes x^\top y, determines whether they are in case 0 or case 1, and sends Bob a single bit to communicate the answer. This procedure requires n+1 bits of communication.

Can Alice and Bob still solve this problem with many fewer than n bits of communication, say \sqrt{n} bits? Unfortunately not. The following theorem of Chakrabarti and Regev shows that roughly n bits of communication are needed to solve this problem:

Theorem (Chakrabarti–Regev). Any algorithm which solves the Gap-Hamming problem and succeeds with at least 2/3 probability for every pair of inputs x and y (satisfying one of the conditions (2)) must use \Omega(n) bits of communication.

Here, \Omega(n) is big-Omega notation, closely related to big-O notation \order(n) and big-Theta notation \Theta(n). For the less familiar, it can be helpful to interpret \Omega(n), \order(n), and \Theta(n) as all standing for “proportional to n”. In plain language, the theorem of Chakrabarti and Regev states that there is no algorithm for the Gap-Hamming problem that is much more effective than the basic algorithm in which Bob sends his whole input to Alice, in the sense that every algorithm requires at least \Omega(n) bits of communication.

Reducing Gap-Hamming to Trace Estimation

This whole state of affairs is very sad for Alice and Bob, but what does it have to do with trace estimation? Remarkably, we can use hardness of the Gap-Hamming problem to show there’s no algorithm that fundamentally improves on Hutch++ and XTrace. The argument goes something like this:

  1. If there were a trace estimation algorithm fundamentally better than Hutch++ and XTrace, we could use it to solve Gap-Hamming in fewer than \order(n) bits of communication.
  2. But no algorithm can solve Gap-Hamming in fewer than \order(n) bits of communication.
  3. Therefore, no trace estimation algorithm is fundamentally better than Hutch++ and XTrace.

Step 2 is the work of Chakrabarti and Regev, and step 3 follows logically from 1 and 2. Therefore, we are left to complete step 1 of the argument.

Protocol

Assume we have access to a really good trace estimation algorithm. We will use it to solve the Gap-Hamming problem. For simplicity, assume n is a perfect square. The basic idea is this:

  • Have Alice and Bob reshape their inputs x,y \in \{\pm 1\}^n into matrices X,Y\in\{\pm 1\}^{\sqrt{n}\times \sqrt{n}}, and consider (but do not form!) the positive semidefinite matrix

        \[A = (X+Y)^\top (X+Y).\]

  • Observe that

        \[\tr(A) = \tr(X^\top X) + 2\tr(X^\top Y) + \tr(Y^\top Y) = 2n + 2(x^\top y).\]

    Thus, the two cases in (2) can be equivalently written in terms of \tr(A):

    (2′)   \[\text{Case 0: } \tr(A)\ge 2n + 2\sqrt{n} \quad \text{or} \quad \text{Case 1: } \tr(A) \le 2n-2\sqrt{n}. \]

  • By working together, Alice and Bob can implement a trace estimation algorithm. Alice will be in charge of running the algorithm, but Alice and Bob must work together to compute matvecs. (Details below!)
  • Using the output of the trace estimation algorithm, Alice determines whether they are in case 0 or case 1 (i.e., whether \tr(A) \ge 2n + 2\sqrt{n} or \tr(A) \le 2n - 2\sqrt{n}) and sends the result to Bob.

To complete this procedure, we just need to show how Alice and Bob can implement the matvec procedure using minimal communication. Suppose Alice and Bob want to compute Az for some vector z with entries between -1 and 1 with up to b decimal digits. First, convert z to a vector w\coloneqq 10^b z whose entries are integers between -10^b and 10^b. Since Az = 10^{-b}Aw, interconverting between Az and Aw is trivial. Alice and Bob’s procedure for computing Aw is as follows:

  • Alice sends Bob w.
  • Having received w, Bob forms Yw and sends it to Alice.
  • Having received Yw, Alice computes v\coloneqq Xw+Yw and sends it to Bob.
  • Having received v, Bob computes Y^\top v and sends it to Alice.
  • Alice forms Aw = X^\top v + Y^\top v.

Because X and Y are \sqrt{n}\times \sqrt{n} and have \pm 1 entries, all vectors computed in this procedure have length \sqrt{n} and integer entries between -4n\cdot 10^b and 4n\cdot 10^b. We conclude that the communication cost for one matvec is T\coloneqq\Theta((b+\log n)\sqrt{n}) bits.
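
As a quick sanity check, here is a tiny MATLAB snippet (my own illustration, with small n and b) verifying that Alice’s final combination in this protocol really equals Aw:

n = 16; b = 2;                           % n must be a perfect square
X = sign(randn(sqrt(n))); Y = sign(randn(sqrt(n)));  % reshaped +/-1 inputs
w = randi([-10^b, 10^b], sqrt(n), 1);    % integer entries with |w_i| <= 10^b
v = X*w + Y*w;                           % computed jointly by Alice and Bob
Aw = X'*v + Y'*v;                        % Alice's final result
norm(Aw - (X+Y)'*(X+Y)*w)                % exactly 0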

Analysis

Consider an algorithm we’ll call BestTraceAlgorithm. Given any accuracy parameter \varepsilon > 0, BestTraceAlgorithm requires at most m = m(\varepsilon) matvecs and, for any positive semidefinite input matrix A of any size, produces an estimate \hat{\tr} satisfying

(3)   \[\expect\left[\frac{|\hat{\tr}-\tr(A)|}{\tr(A)}\right] \le \varepsilon.\]

We assume that BestTraceAlgorithm is the best possible algorithm in the sense that no algorithm can achieve (3) on all (positive semidefinite) inputs with m' < m matvecs.

To solve the Gap-Hamming problem, Alice and Bob just need enough accuracy in their trace estimation to distinguish between cases 0 and 1. In particular, if

    \[\left| \frac{\hat{\tr} - \tr(A)}{\tr(A)} \right| \le \frac{1}{\sqrt{n}},\]

then Alice and Bob can distinguish between cases 0 and 1 in (2′).

Suppose that Alice and Bob apply trace estimation to solve the Gap-Hamming problem, using m matvecs in total. The total communication is m\cdot T = \order(m(b+\log n)\sqrt{n}) bits. Chakrabarti and Regev showed that cn bits of communication (for some c>0) are required to solve the Gap-Hamming problem with 2/3 success probability. Thus, if m\cdot T < cn, then Alice and Bob fail to solve the Gap-Hamming problem with probability at least 1/3. Thus,

    \[\text{If } m < \frac{cn}{T} = \Theta\left( \frac{\sqrt{n}}{b+\log n} \right), \quad \text{then } \left| \frac{\hat{\tr} - \tr(A)}{\tr(A)} \right| > \frac{1}{\sqrt{n}} \text{ with probability at least } \frac{1}{3}.\]

The contrapositive of this statement is as follows:

    \[\text{If }\left| \frac{\hat{\tr} - \tr(A)}{\tr(A)} \right| \le \frac{1}{\sqrt{n}}\text{ with probability at least } \frac{2}{3}, \quad \text{then } m \ge \Theta\left( \frac{\sqrt{n}}{b+\log n} \right).\]


Say Alice and Bob run BestTraceAlgorithm with parameter \varepsilon = \tfrac{1}{3\sqrt{n}}. Then, by (3) and Markov’s inequality,

    \[\left| \frac{\hat{\tr} - \tr(A)}{\tr(A)} \right| \le \frac{1}{\sqrt{n}} \quad \text{with probability at least }\frac{2}{3}.\]

Therefore, BestTraceAlgorithm requires at least

    \[m \ge \Theta\left( \frac{\sqrt{n}}{b+\log n} \right) \text{ matvecs}.\]

Using the fact that we’ve set \varepsilon = \tfrac{1}{3\sqrt{n}} (so that \sqrt{n} = \tfrac{1}{3\varepsilon} and \log n = \Theta(\log(1/\varepsilon))), we conclude that any trace estimation algorithm, even BestTraceAlgorithm, requires

    \[m \ge \Theta \left( \frac{1}{\varepsilon (b+\log(1/\varepsilon))} \right) \text{ matvecs}.\]

In particular, no trace estimation algorithm can achieve mean relative error \varepsilon using even \order(1/\varepsilon^{0.999}) matvecs. This proves the MMMW theorem.