Which Sketch Should I Use?

This is the second of a sequence of two posts on sketching, which I’m doing on the occasion of my new paper on the numerical stability of the iterative sketching method. For more on what sketching is and how it can be used to solve computational problems, I encourage you to check out the first post.

The goals of this post are more narrow. I seek to answer the question:

Which sketching matrix should I use?

To cut to the chase, my answer to this question is:

Sparse sign embeddings are a sensible default for sketching.

There are certainly cases when sparse sign embeddings are not the best type of sketch to use, but I hope to convince you that they’re a good sketching matrix to use for most purposes.

Experiments

Let’s start things off with some numerical experiments.1Code for all numerical experiments can be found on the blogpost branch of the Github for my recent paper. We’ll compare three types of sketching matrices: Gaussian embeddings, a subsampled randomized trigonometric transform (SRTT), and sparse sign embeddings. See the last post for descriptions of these sketching matrices. I’ll discuss a few additional types of sketching matrices that require more discussion at the end of this post.

Recall that a sketching matrix S \in \real^{d\times n} seeks to compress a high-dimensional matrix A \in \real^{n\times k} or vector b\in\real^n to a lower-dimensional sketched matrix SA or vector Sb. The quality of a sketching matrix for a matrix A is measured by its distortion \varepsilon, defined to be the smallest number \varepsilon > 0 such that

    \[(1-\varepsilon) \norm{x} \le \norm{Sx} \le (1+\varepsilon) \norm{x} \quad \text{for every } x \in \operatorname{col}(A).\]

Here, \operatorname{col}(A) denotes the column space of the matrix A.

Timing

We begin with a timing test. We will measure three different times for each embedding:

  1. Construction. The time required to generate the sketching matrix S.
  2. Vector apply. The time to apply the sketch to a single vector.
  3. Matrix apply. The time to apply the sketch to an n\times 200 matrix.

We will test with input dimension n = 10^6 (one million) and output dimension d = 400. For the SRTT, we use the discrete cosine transform as our trigonometric transform. For the sparse sign embedding, we use a sparsity parameter \zeta = 8.

Here are the results (timings averaged over 20 trials):

Our conclusions are as follows:

  • Sparse sign embeddings are definitively the fastest to apply, being 3–20× faster than the SRTT and 74–100× faster than Gaussian embeddings.
  • Sparse sign embeddings are modestly slower to construct than the SRTT, but much faster to construct than Gaussian embeddings.

Overall, the conclusion is that sparse sign embeddings are the fastest sketching matrices by a wide margin: For an “end-to-end” workflow involving generating the sketching matrix S \in \real^{400\times 10^6} and applying it to a matrix A\in\real^{10^6\times 200}, sparse sign embeddings are 14× faster than SRTTs and 73× faster than Gaussian embeddings.2More timings are reported in Table 1 of this paper, which I credit for inspiring my enthusiasm for the sparse sign embedding.

Distortion

Runtime is only one measure of the quality of a sketching matrix; we must also care about the distortion \varepsilon. Fortunately, for practical purposes, Gaussian embeddings, SRTTs, and sparse sign embeddings all tend to have similar distortions. Therefore, we are free to use sparse sign embeddings, as they are typically the fastest.

Consider the following test. We generate a sparse random test matrix A of size n\times k for n = 10^5 and k = 50 using the MATLAB sprand function; we set the sparsity level to 1%. We then compare the distortion of Gaussian embeddings, SRTTs, and sparse sign embeddings across a range of sketching dimensions d between 100 and 10,000. We report the distortion averaged over 100 trials. The theoretically predicted value \varepsilon = \sqrt{k/d} (equivalently, d=k/\varepsilon^2) is shown as a dashed line.
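
For readers who want to run this kind of comparison themselves, the distortion can be computed from the extreme singular values of SQ, where Q is an orthonormal basis for \operatorname{col}(A). Here is a small MATLAB helper along these lines (my own sketch, not code from the paper’s repository):

function epsilon = distortion(S, A)
% Distortion of a sketching matrix S for col(A), read off from the extreme
% singular values of S*Q, where Q is an orthonormal basis of col(A)
[Q, ~] = qr(full(A), 0);                        % orthonormal basis for col(A)
sigma = svd(S*Q);                               % singular values of the sketched basis
epsilon = max(max(sigma) - 1, 1 - min(sigma));  % smallest valid distortion
end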

I find these results remarkable. All three embeddings exhibit essentially the same distortion parameter predicted by the Gaussian theory.

It would be premature to declare success having tested only one type of test matrix A. Consider the following four test matrices:

  • Sparse: The test matrix from above.
  • Dense: A\in\real^{10^6\times 50} is taken to be a matrix with independent standard Gaussian random values.
  • Khatri–Rao: A\in\real^{50^3\times 50} is taken to be the Khatri–Rao product of three Haar random orthogonal matrices.
  • Identity: A = \twobyone{I}{0} \in\real^{10^6\times 50} is taken to be the 50\times 50 identity matrix stacked onto a (10^6-50)\times 50 matrix of zeros.

The performance of sparse sign embeddings (again with sparsity parameter \zeta = 8) is shown below:

We see that for the first three test matrices, the performance closely follows the expected value \varepsilon = \sqrt{k/d}. However, for the last test matrix “Identity”, we see the distortion begins to slightly exceed this predicted distortion for d/k\ge 20.

To improve sparse sign embeddings for higher values of d/k, we can increase the value of the sparsity parameter \zeta. We recommend

    \[\zeta = \max \left( 8 , \left\lceil 2\sqrt{\frac{d}{k}} \right\rceil \right).\]

With this higher sparsity level, the distortion closely tracks \varepsilon = \sqrt{k/d} for all four test matrices:

Conclusion

Implemented appropriately (see below), sparse sign embeddings can be faster than other sketching matrices by a wide margin. The parameter choice \zeta = 8 is enough to ensure the distortion closely tracks \varepsilon = \sqrt{k/d} for most test matrices. For the toughest test matrices, a slightly larger sparsity parameter \zeta = \max(8, \lceil 2\sqrt{d/k}\rceil) can be necessary to achieve the optimal distortion.

While these tests are far from comprehensive, they are consistent with the uniformly positive results for sparse sign embeddings reported in the literature. We believe that this evidence supports the argument that sparse sign embeddings are a sensible default sketching matrix for most purposes.

Sparse Sign Embeddings: Theory and Practice

Given the highly appealing performance characteristics of sparse sign embeddings, it is worth saying a few more words about these embeddings and how they perform in both theory and practice.

Recall that a sparse sign embedding is a random matrix of the form

    \[S = \frac{1}{\sqrt{\zeta}} \begin{bmatrix} s_1 & \cdots & s_n \end{bmatrix}.\]

Each column s_i is independently and randomly generated to contain exactly \zeta nonzero entries in uniformly random positions. The value of each nonzero entry of s_i is chosen to be either +1 or -1 with 50/50 odds.

Parameter Choices

The goal of sketching is to reduce vectors of length n to a smaller dimension d. For linear algebra applications, we typically want to preserve all vectors in the column space of a matrix A up to distortion \varepsilon > 0:

    \[(1-\varepsilon) \norm{x} \le \norm{Sx} \le (1+\varepsilon) \norm{x} \quad \text{for all }x \in \operatorname{col}(A).\]

To use sparse sign embeddings, we must choose the parameters appropriately:

Given a dimension k and a target distortion \varepsilon, how do we pick d and \zeta?

Based on the experiments above (and other testing reported in the literature), we recommend the following parameter choices in practice:

    \[d = \frac{k}{\varepsilon^2} \quad \text{and} \quad \zeta = \max\left(8,\frac{2}{\varepsilon}\right).\]
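
In code, these recommendations amount to just a couple of lines. Here is a small MATLAB helper (a hypothetical convenience function, not part of any library) that picks d and \zeta from k and \varepsilon:

function [d, zeta] = sparse_sign_params(k, epsilon)
% Practical parameter choices for a sparse sign embedding:
% sketching dimension d = k/epsilon^2 and sparsity zeta = max(8, 2/epsilon)
d = ceil(k / epsilon^2);
zeta = max(8, ceil(2 / epsilon));
end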

The parameter choice \zeta = 8 is advocated by Tropp, Yurtsever, Udell, and Cevher; they mention experimenting with parameter values as small as \zeta = 2. The value \zeta = 1 has demonstrated deficiencies and should almost always be avoided (see below). The scaling d \approx k/\varepsilon^2 is derived from the analysis of Gaussian embeddings. As Martinsson and Tropp argue, the analysis of Gaussian embeddings tends to be reasonably descriptive of other well-designed random embeddings.

The best-known theoretical analysis, due to Cohen, suggests more cautious parameter setting for sparse sign embeddings:

    \[d = \mathcal{O} \left( \frac{k \log k}{\varepsilon^2} \right) \quad \text{and} \quad \zeta = \mathcal{O}\left( \frac{\log k}{\varepsilon} \right).\]

The main difference between Cohen’s analysis and the parameter recommendations above is the presence of the \log k factor and the lack of explicit constants in the O-notation.

Implementation

For good performance, it is imperative to store S using either a purpose-built data structure or a sparse matrix format (such as a MATLAB sparse matrix or scipy sparse array).

If a sparse matrix library is unavailable, then either pursue a dedicated implementation or use a different type of embedding; sparse sign embeddings are just as slow as Gaussian embeddings if they are stored in an ordinary non-sparse matrix format!

Even with a sparse matrix format, it can require care to generate and populate the random entries of the matrix S. Here, for instance, is a simple function for generating a sparse sign matrix in MATLAB:

function S = sparsesign_slow(d,n,zeta)
cols = kron((1:n)',ones(zeta,1)); % zeta nonzeros per column
vals = 2*randi(2,n*zeta,1) - 3; % uniform random +/-1 values
rows = zeros(n*zeta,1);
for i = 1:n
   rows((i-1)*zeta+1:i*zeta) = randsample(d,zeta);
end
S = sparse(rows, cols, vals / sqrt(zeta), d, n);
end

Here, we specify the rows, columns, and values of the nonzero entries before assembling them into a sparse matrix using the MATLAB sparse command. Since there are exactly \zeta nonzeros per column, the column indices are easy to generate. The values are uniformly \pm 1/\sqrt{\zeta} and can also be generated using a single line. The real challenge to generating sparse sign embeddings in MATLAB is the row indices, since each batch of \zeta row indices must be chosen uniformly at random between 1 and d without replacement. This is accomplished in the above code by a for loop, generating row indices \zeta at a time using the slow randsample function.

As its name suggests, the sparsesign_slow function is very slow. Generating a 200\times 10^7 sparse sign embedding with sparsity \zeta = 8 requires 53 seconds!

Fortunately, we can do (much) better. By rewriting the code in C and directly generating the sparse matrix in the CSC format MATLAB uses, generating this same 200 by 10 million sparse sign embedding takes just 0.4 seconds, a speedup of 130× over the slow MATLAB code. A C implementation of the sparse sign embedding that can be used in MATLAB using the MEX interface can be found in this file in the Github repo for my recent paper.

Other Sketching Matrices

Let’s leave off the discussion by mentioning other types of sketching matrices not considered in the empirical comparison above.

Coordinate Sampling

Another family of sketching matrices that we haven’t talked about are coordinate sampling sketches. A coordinate sampling sketch consists of indices 1\le i_1,\ldots,i_d\le n and weights w_1,\ldots,w_d \in \real. To apply S to a vector, we pick out the entries at positions i_1,\ldots,i_d and reweight them using the weights:

    \[b \in \real^n \longmapsto Sb = (w_1 b_{i_1},\ldots,w_db_{i_d}) \in \real^d.\]

Coordinate sampling is very appealing: To apply S to a matrix or vector requires no matrix multiplications or trigonometric transforms, just picking out some entries or rows and rescaling them.
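
In MATLAB, applying a coordinate sampling sketch is nothing more than indexing and scaling. A two-line illustration, assuming the indices and weights are stored in column vectors idx and w:

Sb = w .* b(idx);        % sketch of a vector b: entries i_1,...,i_d, reweighted
SA = w .* A(idx, :);     % sketch of a matrix A: rows i_1,...,i_d, reweighted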

In order for coordinate sampling to be effective, we need to pick the right indices. Below, we compare two coordinate sampling sketching approaches, uniform sampling and leverage score sampling (both with replacement), to the sparse sign embedding with the suggested parameter setting \zeta = \max(8,\lceil 2\sqrt{d/k}\rceil) for the hard “Identity” test matrix used above.

We see right away that the uniform sampling fails dramatically on this problem. That’s to be expected. All but 50 of the rows of A are zero, so picking rows uniformly at random will give nonsense with very high probability. Uniform sampling can work well for matrices A which are “incoherent”, with all rows of A being of “similar importance”.

Conclusion (Uniform sampling). Uniform sampling is a risky method; it works excellently for some problems, but fails spectacularly for others. Use only with caution!

The ridge leverage score sampling method is more interesting. Unlike all the other sketches we’ve discussed in this post, ridge leverage score sampling is data-dependent. First, it computes a leverage score \ell_i for each row of A and then samples rows with probabilities proportional to these scores. For high enough values of d, ridge leverage score sampling performs slightly (but only slightly) worse than the characteristic \varepsilon = \sqrt{k/d} scaling we expect for an oblivious subspace embedding.
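
To make the procedure concrete, here is a rough MATLAB sketch of leverage score sampling with replacement (an illustration of the idea using one common weighting convention, not the exact method behind the plot); the QR factorization is the \mathcal{O}(nk^2) step mentioned below:

[Q, ~] = qr(full(A), 0);                   % orthonormal basis for col(A)
ell = sum(Q.^2, 2);                        % leverage scores, one per row of A
p = ell / sum(ell);                        % sampling probabilities
idx = randsample(length(p), d, true, p);   % d row indices, sampled with replacement
w = 1 ./ sqrt(d * p(idx));                 % reweighting factors
SA = w .* A(idx, :);                       % sketched matrix SA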

Ultimately, leverage score sampling has two disadvantages when compared with oblivious sketching matrices:

  • Higher distortion, higher variance. The distortion of a leverage score sketch is higher on average, and more variable, than that of an oblivious sketch, which achieves very consistent performance.
  • Computing the leverage scores. In order to implement this sketch, the leverage scores \ell_i have to first be computed or estimated. This is a nontrivial algorithmic problem; the most direct way of computing the leverage scores requires a QR decomposition at \mathcal{O}(nk^2) cost, much higher than other types of sketches.

There are settings when coordinate sampling methods, such as leverage scores, are well-justified:

  • Structured matrices. For some matrices A, the leverage scores might be very cheap to compute or approximate. In such cases, coordinate sampling can be faster than oblivious sketching.
  • “Active learning”. For some problems, each entry of the vector b or row of the matrix A may be expensive to generate. In this case, coordinate sampling has the distinct advantage that computing Sb or SA only requires generating the entries of b or rows of A for the d randomly selected indices i_1,\ldots,i_d.

Ultimately, oblivious sketching and coordinate sampling both have their place as tools in the computational toolkit. For the reasons described above, I believe that oblivious sketching should usually be preferred to coordinate sampling in the absence of a special reason to prefer the latter.

Tensor Random Embeddings

There are a number of sketching matrices with tensor structure; see here for a survey. These types of sketching matrices are very well-suited to tensor computations. If tensor structure is present in your application, I would put these types of sketches at the top of my list for consideration.

CountSketch

The CountSketch sketching matrix is the \zeta = 1 case of the sparse sign embedding. CountSketch has serious deficiencies, and should only be used in practice with extreme care.

Consider the “Identity” test matrix from above but with parameter k = 200, and compare the distortion of CountSketch to the sparse sign embedding with parameters \zeta=2,4,8:

We see that the distortion of the CountSketch remains persistently high at 100% until the sketching dimension d exceeds 4300, more than 20× higher than k.

CountSketch is bad because it requires d to be proportional to k^2/\varepsilon^2 in order to achieve distortion \varepsilon. For all of the other sketching matrices we’ve considered, we’ve only required d to be proportional to k/\varepsilon^2 (or perhaps (k\log k)/\varepsilon^2). This difference between d\propto k^2 for CountSketch and d\propto k for other sketching matrices is at the root of CountSketch’s woefully bad performance on some inputs.3Here, the symbol \propto is an informal symbol meaning “proportional to”.

The fact that CountSketch requires d\propto k^2 is simple to show. It’s basically a variant on the famous birthday problem. We include a discussion at the end of this post.4In fact, any oblivious sketching matrix with only 1 nonzero entry per column must have d\gtrsim k^2. This is Theorem 16 in the following paper.

There are ways of fixing the CountSketch. For instance, we can use a composite sketch S = S_2 \cdot S_1, where S_1 is a CountSketch of size k^2/\varepsilon^2 \times n and S_2 is a Gaussian sketching matrix of size k/\varepsilon^2 \times k^2/\varepsilon^2.5This construction is from this paper. For most applications, however, salvaging CountSketch doesn’t seem worth it; sparse sign embeddings with even \zeta = 2 nonzeros per column are already way more effective and reliable than a plain CountSketch.
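
As a purely hypothetical illustration of this composite construction, here is how the two stages might be generated in MATLAB, reusing the sparsesign_slow function from above (with \zeta = 1, it produces a CountSketch); the dimensions simply follow the recipe in the text:

k = 50; epsilon = 0.5; n = 1e5;        % illustrative problem parameters
d1 = ceil(k^2 / epsilon^2);            % CountSketch output dimension
d2 = ceil(k / epsilon^2);              % Gaussian output dimension
S1 = sparsesign_slow(d1, n, 1);        % CountSketch stage (zeta = 1)
S2 = randn(d2, d1) / sqrt(d2);         % Gaussian stage
% To sketch a matrix A, apply the two stages in sequence: S2*(S1*A)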

Conclusion

By now, sketching is quite a big field, with dozens of different proposed constructions for sketching matrices. So which should you use? For most use cases, sparse sign embeddings are a good choice; they are fast to construct and apply and offer uniformly good distortion across a range of matrices.

Bonus: CountSketch and the Birthday Problem
The point of this bonus section is to prove the following (informal) theorem:

Let A be the “Identity” test matrix above. If S\in\real^{d\times n} is a CountSketch matrix with output dimension d\ll k^2, then the distortion of S for \operatorname{col}(A) is \varepsilon\ge 1 with high probability.

Let’s see why. By the structure of the matrix A, SA has the form

    \[SA = \begin{bmatrix} s_1 & \cdots & s_k \end{bmatrix}\]

where each vector s_i\in\real^d has a single \pm1 in a uniformly random location j_i.

Suppose that the indices j_1,\ldots,j_k are not all different from each other, say j_i = j_{i'}. Set x = e_i - e_{i'} or x = e_i + e_{i'}, with the sign chosen so that the nonzero entries of s_i and s_{i'} cancel; here, e_i is the standard basis vector with 1 in position i and zeros elsewhere. Then, (SA)x = 0 but \norm{x} = \sqrt{2}. Thus, for the distortion relation

    \[(1-\varepsilon) \norm{x} =(1-\varepsilon)\sqrt{2} \le 0 = \norm{(SA)x}\]

to hold, \varepsilon \ge 1. Thus,

    \[\prob \{ \varepsilon \ge 1 \} \ge \prob \{ j_1,\ldots,j_k \text{ are not distinct} \}.\]

For a moment, let’s put aside our analysis of the CountSketch, and turn our attention to a famous puzzle, known as the birthday problem:

How many people have to be in a room before there’s at least a 50% chance that two people share the same birthday?

The counterintuitive or “paradoxical” answer: 23. This is much smaller than many people’s intuition suggests, since 23 is so much smaller than the 365 possible birthdays.

The reason for this surprising result is that, in a room of 23 people, there are {23 \choose 2} = 23\cdot 22/2=253 pairs of people. Each pair of people has a 1/365 chance of sharing a birthday, so the expected number of shared birthdays in a room of 23 people is 253/365 \approx 0.69. Since there are 0.69 shared birthdays on average in a room of 23 people, it is perhaps less surprising that 23 is the critical number at which the chance of two people sharing a birthday exceeds 50%.
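
For the skeptical, the standard birthday-problem calculation is easy to check numerically (a quick MATLAB computation of my own, not part of the original discussion):

p_distinct = prod((365 - (0:22)) / 365);   % probability all 23 birthdays differ
p_shared = 1 - p_distinct                  % approximately 0.5073, just over 50%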

Hopefully, the similarity between the birthday problem and CountSketch is becoming clear. Each pair of indices j_i and j_{i'} in CountSketch have a 1/d chance of being the same. There are {k\choose 2} \approx k^2/2 pairs of indices, so the expected number of equal indices j_i = j_{i'} is \approx k^2/2d. Thus, we should anticipate d \gtrapprox k^2 is required to ensure that j_1,\ldots,j_k are distinct with high probability.

Let’s calculate things out a bit more precisely. First, realize that

    \[\prob \{ j_1,\ldots,j_k \text{ are not distinct} \} = 1 - \prob \{ j_1,\ldots,j_k \text{ are distinct} \}.\]

To compute the probability that j_1,\ldots,j_k are distinct, imagine introducing each j_i one at a time. Assuming that j_1,\ldots,j_{i-1} are all distinct, the probability j_1,\ldots,j_i are distinct is just the probability that j_i does not take any of the i-1 values j_1,\ldots,j_{i-1}. This probability is

    \[\prob\{ j_1,\ldots,j_i \text{ are distinct} \mid j_1,\ldots,j_{i-1} \text{ are distinct}\} = 1 - \frac{i-1}{d}.\]

Thus, by the chain rule for probability,

    \[\prob \{ j_1,\ldots,j_k \text{ are distinct} \} = \prod_{i=1}^k \left(1 - \frac{i-1}{d} \right).\]

To bound this quantity, use the numeric inequality 1-x\le \exp(-x) for every x \in \real, obtaining

    \[\mathbb{P} \{ j_1,\ldots,j_k \text{ are distinct} \} \le \prod_{i=0}^{k-1} \exp\left(-\frac{i}{d}\right) = \exp \left( -\frac{1}{d}\sum_{i=0}^{k-1} i \right) = \exp\left(-\frac{k(k-1)}{2d}\right).\]

Thus, we conclude that

    \[\prob \{ \varepsilon \ge 1 \} \ge 1-\prob \{ j_1,\ldots,j_k \text{ are distinct} \}\ge 1-\exp\left(-\frac{k(k-1)}{2d}\right).\]

Solving this inequality, we conclude that

    \[\prob\{\varepsilon \ge 1\} \ge \frac{1}{2} \quad \text{if} \quad d \le \frac{k(k-1)}{2\ln 2}.\]
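
For those who want the algebra in that last step spelled out:

    \[1-\exp\left(-\frac{k(k-1)}{2d}\right) \ge \frac{1}{2} \iff \exp\left(-\frac{k(k-1)}{2d}\right) \le \frac{1}{2} \iff \frac{k(k-1)}{2d} \ge \ln 2 \iff d \le \frac{k(k-1)}{2\ln 2}.\]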

This is a quantitative version of our informal theorem from earlier.

Does Sketching Work?

I’m excited to share that my paper, Fast and forward stable randomized algorithms for linear least-squares problems has been released as a preprint on arXiv.

With the release of this paper, now seemed like a great time to discuss a topic I’ve been wanting to write about for a while: sketching. Over the past two decades, sketching has become a widely used algorithmic tool in matrix computations. Despite this long history, questions still seem to be lingering about whether sketching really works.

In this post, I want to take a critical look at the question “does sketching work”? Answering this question requires answering two basic questions:

  1. What is sketching?
  2. What would it mean for sketching to work?

I think a large part of the disagreement over the efficacy of sketching boils down to different answers to these questions. By considering different possible answers to these questions, I hope to provide a balanced perspective on the utility of sketching as an algorithmic primitive for solving linear algebra problems.

Sketching

In matrix computations, sketching is really a synonym for (linear) dimensionality reduction. Suppose we are solving a problem involving one or more high-dimensional vectors b \in \real^n or perhaps a tall matrix A\in \real^{n\times k}. A sketching matrix is a d\times n matrix S \in \real^{d\times n} where d \ll n. When multiplied into a high-dimensional vector b or tall matrix A, the sketching matrix S produces compressed or “sketched” versions Sb and SA that are much smaller than the original vector b and matrix A.

Let \mathsf{E}=\{x_1,\ldots,x_p\} be a collection of vectors. For S to be a “good” sketching matrix for \mathsf{E}, we require that S preserves the lengths of every vector in \mathsf{E} up to a distortion parameter \varepsilon>0:

(1)   \[(1-\varepsilon) \norm{x}\le\norm{Sx}\le(1+\varepsilon)\norm{x} \]

for every x in \mathsf{E}.

For linear algebra problems, we often want to sketch a matrix A. In this case, the appropriate set \mathsf{E} that we want our sketch to be “good” for is the column space of the matrix A, defined to be

    \[\operatorname{col}(A) \coloneqq \{ Ax : x \in \real^k \}.\]

Remarkably, there exist many sketching matrices that achieve distortion \varepsilon for \mathsf{E}=\operatorname{col}(A) with an output dimension of roughly d \approx k / \varepsilon^2. In particular, the sketching dimension d is proportional to the number of columns k of A. This is pretty neat! We can design a single sketching matrix S which preserves the lengths of all infinitely-many vectors Ax in the column space of A.
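
As a concrete illustration (with numbers of my own choosing): to preserve the column space of a matrix with k = 50 columns up to distortion \varepsilon = 0.1, a sketching dimension of roughly

    \[d \approx \frac{k}{\varepsilon^2} = \frac{50}{(0.1)^2} = 5000\]

suffices, regardless of whether A has ten thousand rows or ten billion.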

Sketching Matrices

There are many types of sketching matrices, each with different benefits and drawbacks. Many sketching matrices are based on randomized constructions in the sense that entries of S are chosen to be random numbers. Broadly, sketching matrices can be classified into two types:

  • Data-dependent sketches. The sketching matrix S is constructed to work for a specific set of input vectors \mathsf{E}.
  • Oblivious sketches. The sketching matrix S is designed to work for an arbitrary set of input vectors \mathsf{E} of a given size (i.e., \mathsf{E} has p elements) or dimension (\mathsf{E} is a k-dimensional linear subspace).

We will only discuss oblivious sketching for this post. We will look at three types of sketching matrices: Gaussian embeddings, subsampled randomized trigonometric transforms, and sparse sign embeddings.

The details of how these sketching matrices are built, along with their strengths and weaknesses, can be a little bit technical. All three constructions are independent of the rest of this article and can be skipped on a first reading. The main point is that good sketching matrices exist and are fast to apply: Reducing b\in\real^n to Sb\in\real^{d} requires roughly \mathcal{O}(n\log n) operations, rather than the \mathcal{O}(dn) operations we would expect for multiplying a d\times n matrix by a vector of length n.1Here, \mathcal{O}(\cdot) is big O notation.

Gaussian Embeddings

The simplest type of sketching matrix S\in\real^{d\times n} is obtained by (independently) setting every entry of S to be a Gaussian random number with mean zero and variance 1/d. Such a sketching matrix is called a Gaussian embedding.2Here, embedding is a synonym for sketching matrix.
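
In MATLAB, generating a Gaussian embedding really is a one-liner (the dimensions below are purely illustrative):

n = 1e4; d = 400;             % illustrative dimensions
S = randn(d, n) / sqrt(d);    % iid Gaussian entries with mean zero and variance 1/d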

Benefits. Gaussian embeddings are simple to code up, requiring only a standard matrix product to apply to a vector Sb or matrix SA. Gaussian embeddings admit a clean theoretical analysis, and their mathematical properties are well-understood.

Drawbacks. Computing Sb for a Gaussian embedding costs \mathcal{O}(dn) operations, significantly slower than the other sketching matrices we will consider below. Additionally, generating and storing a Gaussian embedding can be computationally expensive.

Subsampled Randomized Trigonometric Transforms

The subsampled randomized trigonometric transform (SRTT) sketching matrix takes a more complicated form. The sketching matrix is defined to be a scaled product of three matrices

    \[S = \sqrt{\frac{n}{d}} \cdot R \cdot F \cdot D.\]

These matrices have the following definitions:

  • D\in\real^{n\times n} is a diagonal matrix whose entries are each a random \pm 1 (chosen independently with equal probability).
  • F\in\real^{n\times n} is a fast trigonometric transform such as a fast discrete cosine transform.3One can also use the ordinary fast Fourier transform, but this results in a complex-valued sketch.
  • R\in\real^{d\times n} is a selection matrix. To generate R, let i_1,\ldots,i_d be a random subset of \{1,\ldots,n\}, selected without replacement. R is defined to be a matrix for which Rb = (b_{i_1},\ldots,b_{i_d}) for every vector b.

To store S on a computer, it is sufficient to store the n diagonal entries of D and the d selected coordinates i_1,\ldots,i_d defining R. Multiplication of S against a vector b should be carried out by applying the matrices D, F, and R in sequence, such as in the following MATLAB code:

% Generate randomness for S
signs = 2*randi(2,n,1)-3; % diagonal entries of D
idx = randsample(n,d); % indices i_1,...,i_d defining R

% Multiply S against b
c = signs .* b; % multiply by D
c = dct(c); % multiply by F
c = c(idx); % multiply by R
c = sqrt(n/d) * c; % scale

Benefits. S can be applied to a vector b in \mathcal{O}(n \log n) operations, a significant improvement over the \mathcal{O}(dn) cost of a Gaussian embedding. The SRTT has the lowest memory and random number generation requirements of any of the three sketches we discuss in this post.

Drawbacks. Applying S to a vector requires a good implementation of a fast trigonometric transform. Even with a high-quality trig transform, SRTTs can be significantly slower than sparse sign embeddings (defined below).4For an example, see Figure 2 in this paper. SRTTs are hard to parallelize.5Block SRTTs are more parallelizable, however. In theory, the sketching dimension should be chosen to be d \approx (k\log k)/\varepsilon^2, larger than for a Gaussian sketch.

Sparse Sign Embeddings

A sparse sign embedding takes the form

    \[S = \frac{1}{\sqrt{\zeta}} \begin{bmatrix} s_1 & s_2 & \cdots & s_n \end{bmatrix}.\]

Here, each column s_i is an independently generated random vector with exactly \zeta nonzero entries with random \pm 1 values in uniformly random positions. The result is a d\times n matrix with only \zeta \cdot n nonzero entries. The parameter \zeta is often set to a small constant like 8 in practice.6This recommendation comes from the following paper, and I’ve used this parameter setting successfully in my own work.

Benefits. Using a dedicated sparse matrix library, S can be applied to a vector b very quickly, in either \mathcal{O}(n) or \mathcal{O}(n\log k) operations depending on parameter choices (see below). With a good sparse matrix library, sparse sign embeddings are often the fastest sketching matrix by a wide margin.

Drawbacks. To be fast, sparse sign embeddings require a good sparse matrix library. They require generating and storing roughly \zeta n random numbers, more than SRTTs (roughly n numbers) but far fewer than Gaussian embeddings (dn numbers). In theory, the sketching dimension should be chosen to be d \approx (k\log k)/\varepsilon^2 and the sparsity should be set to \zeta \approx (\log k)/\varepsilon; the theoretically sanctioned sketching dimension (at least according to existing theory) is larger than for a Gaussian sketch. In practice, we can often get away with using d \approx k/\varepsilon^2 and \zeta=8.

Summary

Using either SRTTs or sparse sign embeddings, sketching a vector of length n down to d dimensions requires only \mathcal{O}(n) to \mathcal{O}(n\log n) operations. To apply a sketch to an entire n\times k matrix A thus requires roughly \mathcal{O}(nk) operations. Therefore, sketching offers the promise of speeding up linear algebraic computations involving A, which typically take \mathcal{O}(nk^2) operations.

How Can You Use Sketching?

The simplest way to use sketching is to first apply the sketch to dimensionality-reduce all of your data and then apply a standard algorithm to solve the problem using the reduced data. This approach to using sketching is called sketch-and-solve.

As an example, let’s apply sketch-and-solve to the least-squares problem:

(2)   \[\operatorname*{minimize}_{x\in\real^k} \norm{Ax - b}. \]

We assume this problem is highly overdetermined with A having many more rows n than columns k.

To solve this problem with sketch-and-solve, generate a good sketching matrix S for the set \mathsf{E} = \operatorname{col}(\onebytwo{A}{b}). Applying S to our data A and b, we get a dimensionality-reduced least-squares problem

(3)   \[\operatorname*{minimize}_{\hat{x}\in\real^k} \norm{(SA)\hat{x} - Sb}. \]

The solution \hat{x} is the sketch-and-solve solution to the least-squares problem, which we can use as an approximate solution to the original least-squares problem.

Least-squares is just one example of the sketch-and-solve paradigm. We can also use sketching to accelerate other algorithms. For instance, we could apply sketch-and-solve to clustering. To cluster data points x_1,\ldots,x_p, first apply sketching to obtain Sx_1,\ldots,Sx_p and then apply an out-of-the-box clustering algorithm like k-means to the sketched data points.
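
As a purely hypothetical illustration, here is what this might look like in MATLAB, assuming the data points are stored as the columns of a matrix X and that the kmeans function (Statistics and Machine Learning Toolbox) is available; a Gaussian embedding is used only for simplicity:

d = 50;                                % sketching dimension (illustrative)
S = randn(d, size(X,1)) / sqrt(d);     % Gaussian embedding
SX = S * X;                            % sketched data points S*x_1,...,S*x_p
labels = kmeans(SX', 10);              % cluster the sketched points into 10 groups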

Does Sketching Work?

Most often, when sketching critics say “sketching doesn’t work”, what they mean is “sketch-and-solve doesn’t work”.

To address this question in a more concrete setting, let’s go back to the least-squares problem (2). Let x_\star denote the optimal least-squares solution and let \hat{x} be the sketch-and-solve solution (3). Then, using the distortion condition (1), one can show that

    \[\norm{A\hat{x} - b} \le \frac{1+\varepsilon}{1-\varepsilon} \norm{Ax_\star - b}.\]

If we use a sketching matrix with a distortion of \varepsilon = 1/3, then this bound tells us that

(4)   \[\norm{A\hat{x} - b} \le 2\norm{Ax_\star - b}. \]

Is this a good result or a bad result? Ultimately, it depends. In some applications, the quality of a putative least-squares solution \hat{x} can be assessed from the residual norm \norm{A\hat{x} - b}. For such applications, the bound (4) ensures that \norm{A\hat{x} - b} is at most twice \norm{Ax_\star-b}. Often, this means \hat{x} is a pretty decent approximate solution to the least-squares problem.

For other problems, the appropriate measure of accuracy is the so-called forward error \norm{\hat{x} - x_\star}, measuring how close \hat{x} is to x_\star. For these cases, it is possible that \norm{\hat{x} - x_\star} might be large even though the residuals are comparable, as in (4).

Let’s see an example, using the MATLAB code from my paper:

[A, b, x, r] = random_ls_problem(1e4, 1e2, 1e8, 1e-4); % Random LS problem
S = sparsesign(4e2, 1e4, 8); % Sparse sign embedding
sketch_and_solve = (S*A) \ (S*b); % Sketch-and-solve
direct = A \ b; % MATLAB mldivide

Here, we generate a random least-squares problem of size 10,000 by 100 (with condition number 10^8 and residual norm 10^{-4}). Then, we generate a sparse sign embedding of dimension d = 400 (corresponding to a distortion of roughly \varepsilon \approx 1/2). Then, we compute the sketch-and-solve solution and, as reference, a “direct” solution by MATLAB’s \.

We compare the quality of the sketch-and-solve solution to the direct solution, using both the residual and forward error:

fprintf('Residuals: sketch-and-solve %.2e, direct %.2e, optimal %.2e\n',...
           norm(b-A*sketch_and_solve), norm(b-A*direct), norm(r))
fprintf('Forward errors: sketch-and-solve %.2e, direct %.2e\n',...
           norm(x-sketch_and_solve), norm(x-direct))

Here’s the output:

Residuals: sketch-and-solve 1.13e-04, direct 1.00e-04, optimal 1.00e-04
Forward errors: sketch-and-solve 1.06e+03, direct 8.08e-07

The sketch-and-solve solution has a residual norm of 1.13\times 10^{-4}, close to the direct method’s residual norm of 1.00\times 10^{-4}. However, the forward error of sketch-and-solve is 1\times 10^3, nine orders of magnitude larger than the direct method’s forward error of 8\times 10^{-7}.

Does sketch-and-solve work? Ultimately, it’s a question of what kind of accuracy you need for your application. If a small-enough residual is all that’s needed, then sketch-and-solve is perfectly adequate. If small forward error is needed, sketch-and-solve can be quite bad.

One way sketch-and-solve can be improved is by increasing the sketching dimension d and lowering the distortion \varepsilon. Unfortunately, improving the distortion of the sketch is expensive. Because of the relation d \approx k /\varepsilon^2, to decrease the distortion by a factor of ten requires increasing the sketching dimension d by a factor of one hundred! Thus, sketch-and-solve is really only appropriate when a modest degree of accuracy is sufficient.

Iterative Sketching: Combining Sketching with Iteration

Sketch-and-solve is a fast way to get a low-accuracy solution to a least-squares problem. But it’s not the only way to use sketching for least-squares. One can also use sketching to obtain high-accuracy solutions by combining sketching with an iterative method.

There are many iterative methods for least-squares problems. Iterative methods generate a sequence of approximate solutions x_1,x_2,\ldots that we hope will converge at a rapid rate to the true least-squares solution, x_\star.

To use sketching to solve least-squares problems iteratively, we can use the following observation:

If S is a sketching matrix for \mathsf{E} = \operatorname{col}(A), then (SA)^\top SA \approx A^\top A.

Therefore, if we compute a QR factorization

    \[SA = QR,\]

then

    \[A^\top A \approx (SA)^\top (SA) = R^\top Q^\top Q R = R^\top R.\]

Notice that we used the fact that Q^\top Q = I since Q has orthonormal columns. The conclusion is that R^\top R \approx A^\top A.

Let’s use the approximation R^\top R \approx A^\top A to solve the least-squares problem iteratively. Start off with the normal equations7As I’ve described in a previous post, it’s generally inadvisable to solve least-squares problems using the normal equations. Here, we’re just using the normal equations as a conceptual tool to derive an algorithm for solving the least-squares problem.

(5)   \[(A^\top A)x = A^\top b. \]

We can obtain an approximate solution to the least-squares problem by replacing A^\top A by R^\top R in (5) and solving. The resulting solution is

    \[x_0 = R^{-1} (R^{-\top}(A^\top b)).\]

This solution x_0 will typically not be a good solution to the least-squares problem (2), so we need to iterate. To do so, we’ll try and solve for the error x - x_0. To derive an equation for the error, subtract A^\top A x_0 from both sides of the normal equations (5), yielding

    \[(A^\top A)(x-x_0) = A^\top (b-Ax_0).\]

Now, to solve for the error, substitute R^\top R for A^\top A again and solve for x, obtaining a new approximate solution x_1:

    \[x\approx x_1 \coloneqq x_0 + R^{-1}(R^{-\top}(A^\top(b-Ax_0))).\]

We can now go another step: Derive an equation for the error x-x_1, approximate A^\top A \approx R^\top R, and obtain a new approximate solution x_2. Continuing this process, we obtain an iteration

(6)   \[x_{i+1} = x_i + R^{-1}(R^{-\top}(A^\top(b-Ax_i))).\]

This iteration is known as the iterative sketching method.8The name iterative sketching is for historical reasons. Original versions of the procedure involved taking a fresh sketch S_iA = Q_iR_i at every iteration. Later, it was realized that a single sketch SA suffices, albeit with a slower convergence rate. Typically, only having to sketch and QR factorize once is worth the slower convergence rate. If the distortion is small enough, this method converges at an exponential rate, yielding a high-accuracy least squares solution after a few iterations.
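
To make the recipe concrete, here is a minimal MATLAB sketch of iteration (6), reusing the sparsesign embedding and the least-squares problem (A, b) from the example above; the sketching dimension and the fixed iteration count are illustrative choices, not tuned recommendations:

d = 20 * size(A, 2);                          % sketching dimension (illustrative)
S = sparsesign(d, size(A, 1), 8);             % sparse sign embedding with zeta = 8
[~, R] = qr(S*A, 0);                          % economy QR factorization of SA
x_i = R \ (R' \ (A'*b));                      % initial iterate x_0
for iter = 1:20                               % fixed iteration count (illustrative)
    x_i = x_i + R \ (R' \ (A'*(b - A*x_i)));  % iteration (6)
end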

Let’s apply iterative sketching to the example we considered above. We show the forward error of the sketch-and-solve and direct methods as horizontal dashed purple and red lines. Iterative sketching begins at roughly the forward error of sketch-and-solve, with the error decreasing at an exponential rate until it reaches that of the direct method over the course of fourteen iterations. For this problem, at least, iterative sketching gives high-accuracy solutions to the least-squares problem!

To summarize, we’ve now seen two very different ways of using sketching:

  • Sketch-and-solve. Sketch the data A and b and solve the sketched least-squares problem (3). The resulting solution \hat{x} is cheap to obtain, but may have low accuracy.
  • Iterative sketching. Sketch the matrix A and obtain an approximation R^\top R = (SA)^\top (SA) to A^\top A. Use the approximation R^\top R to produce a sequence of better-and-better least-squares solutions x_i by the iteration (6). If we run for enough iterations q, the accuracy of the iterative sketching solution x_q can be quite high.

By combining sketching with more sophisticated iterative methods such as conjugate gradient and LSQR, we can get an even faster-converging least-squares algorithm, known as sketch-and-precondition. Here’s the same plot from above with sketch-and-precondition added; we see that sketch-and-precondition converges even faster than iterative sketching does!
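
For the curious, here is one way sketch-and-precondition might be implemented with MATLAB’s built-in lsqr solver, using R from the sketched QR factorization as a preconditioner and the sketch-and-solve solution as the initial iterate (a rough sketch under the same assumptions as the snippet above, not the implementation behind the plot):

[~, R] = qr(S*A, 0);                       % preconditioner from the sketched QR
x0 = (S*A) \ (S*b);                        % sketch-and-solve initial iterate
x = lsqr(A, b, 1e-10, 100, R, [], x0);     % LSQR on min ||Ax-b||, preconditioned by R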

“Does sketching work?” Even for a simple problem like least-squares, the answer is complicated:

A direct use of sketching (i.e., sketch-and-solve) leads to a fast, low-accuracy solution to least-squares problems. But sketching can achieve much higher accuracy for least-squares problems by combining sketching with an iterative method (iterative sketching and sketch-and-precondition).

We’ve focused on least-squares problems in this section, but these conclusions could hold more generally. If “sketching doesn’t work” in your application, maybe it would if it was combined with an iterative method.

Just How Accurate Can Sketching Be?

We left our discussion of sketching-plus-iterative-methods in the previous section on a positive note, but there is one last lingering question that remains to be answered. We stated that iterative sketching (and sketch-and-precondition) converge at an exponential rate. But our computers store numbers to only so much precision; in practice, the accuracy of an iterative method has to saturate at some point.

An (iterative) least-squares solver is said to be forward stable if, when run for a sufficient number q of iterations, the final accuracy \norm{x_q - x_\star} is comparable to the accuracy of a standard direct method for the least-squares problem like MATLAB’s \ command or Python’s scipy.linalg.lstsq. Forward stability is not about speed or rate of convergence but about the maximum achievable accuracy.

The stability of sketch-and-precondition was studied in a recent paper by Meier, Nakatsukasa, Townsend, and Webb. They demonstrated that, with the initial iterate x_0 = 0, sketch-and-precondition is not forward stable. The maximum achievable accuracy was worse than standard solvers by orders of magnitude! Maybe sketching doesn’t work after all?

Fortunately, there is good news:

  • The iterative sketching method is provably forward stable. This result is shown in my newly released paper; check it out if you’re interested!
  • If we use the sketch-and-solve method as the initial iterate x_0 = \hat{x} for sketch-and-precondition, then sketch-and-precondition appears to be forward stable in practice. No theoretical analysis supporting this finding is known at present.9For those interested, neither iterative sketching nor sketch-and-precondition are backward stable, which is a stronger stability guarantee than forward stability. Fortunately, forward stability is a perfectly adequate stability guarantee for many—but not all—applications.

These conclusions are pretty nuanced. To see what’s going on, it can be helpful to look at a graph:10For another randomly generated least-squares problem of the same size with condition number 10^{10} and residual 10^{-6}.

The performance of different methods can be summarized as follows: Sketch-and-solve can have very poor forward error. Sketch-and-precondition with the zero initialization x_0 = 0 is better, but still much worse than the direct method. Iterative sketching and sketch-and-precondition with x_0 = \hat{x} fare much better, eventually achieving an accuracy comparable to the direct method.

Put more simply, appropriately implemented, sketching works after all!

Conclusion

Sketching is a computational tool, just like the fast Fourier transform or the randomized SVD. Sketching can be used effectively to solve some problems. But, like any computational tool, sketching is not a silver bullet. Sketching allows you to dimensionality-reduce matrices and vectors, but it distorts them by an appreciable amount. Whether or not this distortion is something you can live with depends on your problem (how much accuracy do you need?) and how you use the sketch (sketch-and-solve or with an iterative method).