Big Ideas in Applied Math: Low-rank Matrices

Let’s start our discussion of low-rank matrices with an application. Suppose that there are 1000 weather stations spread across the world, and we record the temperature during each of the 365 days in a year.1I borrow the idea for the weather example from Candes and Plan. If we were to store each of the temperature measurements individually, we would need to store 365,000 numbers. However, we have reasons to believe that significant compression is possible. Temperatures are correlated across space and time: If it’s hot in Arizona today, it’s likely it was warm in Utah yesterday.

If we are particularly bold, we might conjecture that the weather approximately experiences a sinusoidal variation over the course of the year:

(1)   \begin{equation*} \mbox{temperature at station $i$ on day $j$} \approx a_i + b_i \sin\left( 2\pi \times \frac{j}{365} + \phi \right). \end{equation*}

For a station i, a_i denotes the average temperature of the station and b_i denotes the maximum deviation above or below this average, signed so that stations in the Northern hemisphere are warmer than average during June-August and stations in the Southern hemisphere are colder than average during these months. The phase shift \phi is chosen so the hottest (or coldest) day of the year occurs at the appropriate time. This model is clearly grossly inexact: The weather does not satisfy a simple sinusoidal model. However, we might plausibly expect it to be fairly informative. Further, we have massively compressed our data, only needing to store the 2000 \ll 365,000 numbers a_1,\ldots,a_{1000},b_1,\ldots,b_{1000} rather than our full data set of 365,000 temperature values.

Let us abstract this approximation procedure in a linear algebraic way. Let’s collect our weather data into a matrix W with 1000 rows, one for each station, and 365 columns, one for each day of the year. The entry W_{ij} corresponding to station i and day j is the temperature at station i on day j. The approximation Eq. (1) corresponds to the matrix approximation

(2)   \begin{equation*} W \approx \underbrace{\begin{bmatrix} a_1 + b_1 \sin\left( 2\pi \times \frac{1}{365} + \phi \right) & \cdots & a_1 + b_1 \sin\left( 2\pi \times \frac{365}{365} + \phi \right) \\ \vdots & \ddots & \vdots \\ a_{1000} + b_{1000} \sin\left( 2\pi \times \frac{1}{365} + \phi \right) & \cdots & a_{1000} + b_{1000} \sin\left( 2\pi \times \frac{365}{365} + \phi \right) \end{bmatrix}}_{:=\hat{W}}. \end{equation*}

Let us call the matrix on the right-hand side of Eq. (2) \hat{W} for ease of discussion. When presented in this linear algebraic form, it’s less obvious in what way \hat{W} is simpler than W, but we know from Eq. (1) and our previous discussion that \hat{W} is much more efficient to store than W. This leads us naturally to the following question: Linear algebraically, in what way is \hat{W} simpler than W?

The answer is that the matrix \hat{W} has low rank. The rank of the matrix \hat{W} is 2 whereas W almost certainly possesses the maximum possible rank of 365. This example suggests that low-rank approximation, where we approximate a general matrix by one of much lower rank, could be a powerful tool. But there are many questions about how to use this tool and how widely applicable it is. How can we compress a low-rank matrix? Can we use this compressed matrix in computations? How good of a low-rank approximation can we find? What even is the rank of a matrix?
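
Before turning to those questions, here is a quick numerical check of the rank-2 claim (a sketch in Python with NumPy; the station parameters a_i, b_i and the phase shift \phi are made-up values used only for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    m, n = 1000, 365                     # stations, days
    a = rng.uniform(0, 30, size=m)       # hypothetical average temperatures
    b = rng.uniform(-20, 20, size=m)     # hypothetical seasonal amplitudes
    phi = 1.7                            # hypothetical phase shift

    days = np.arange(1, n + 1)
    s = np.sin(2 * np.pi * days / 365 + phi)

    # W_hat[i, j] = a[i] + b[i] * sin(2*pi*j/365 + phi), i.e., a sum of two outer products
    W_hat = np.outer(a, np.ones(n)) + np.outer(b, s)

    print(np.linalg.matrix_rank(W_hat))  # prints 2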

What is Rank?

Let’s do a quick review of the foundations of linear algebra. At the core of linear algebra is the notion of a linear combination. A linear combination of vectors v_1,\ldots,v_k is a weighted sum of the form \alpha_1 v_1 + \cdots + \alpha_k v_k, where \alpha_1,\ldots,\alpha_k are scalars2In our case, matrices will be comprised of real numbers, making scalars real numbers as well. A collection of vectors v_1,\ldots,v_k is linearly independent if there is no linear combination of them which produces the zero vector, except for the trivial 0-weighted linear combination 0 v_1 + \cdots + 0v_k. If v_1,\ldots,v_k are not linearly independent, then they’re linearly dependent.

The column rank of a matrix B is the size of the largest possible subset of B‘s columns which are linearly independent. So if the column rank of B is r, then there is some sub-collection of r columns of B which are linearly independent. There may be some different sub-collections of r columns from B that are linearly dependent, but every collection of r+1 columns is guaranteed to be linearly dependent. Similarly, the row rank is defined to be the maximum size of any linearly independent collection of rows taken from B. A remarkable and surprising fact is that the column rank and row rank are equal. Because of this, we refer to the column rank and row rank simply as the rank; we denote the rank of a matrix B by \operatorname{rank}(B).

Linear algebra is famous for its multiple equivalent ways of phrasing the same underlying concept, so let’s mention one more way of thinking about the rank. Define the column space of a matrix to consist of the set of all linear combinations of its columns. A basis for the column space is a linearly independent collection of elements of the column space of the largest possible size. Every element of the column space can be written uniquely as a linear combination of the elements in a basis. The size of a basis for the column space is called the dimension of the column space. With these last definitions in place, we note that the rank of B is also equal to the dimension of the column space of B. Likewise, if we define the row space of B to consist of all linear combinations of B‘s rows, then the rank of B is equal to the dimension of B‘s row space.
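
As a tiny numerical illustration (a sketch using NumPy’s matrix_rank; the matrix is made up), the column rank and the row rank of a matrix indeed agree:

    import numpy as np

    # A 4 x 3 matrix whose third column is the sum of the first two,
    # so only two of its columns are linearly independent.
    B = np.array([[1.0, 0.0, 1.0],
                  [0.0, 1.0, 1.0],
                  [2.0, 3.0, 5.0],
                  [1.0, 1.0, 2.0]])

    print(np.linalg.matrix_rank(B))    # column rank: 2
    print(np.linalg.matrix_rank(B.T))  # row rank: also 2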

The upshot is that if a matrix B has a small rank, its many columns (or rows) can be assembled as linear combinations from a much smaller collection of columns (or rows). It is this fact that allows a low-rank matrix to be compressed for algorithmically useful ends.

Rank Factorizations

Suppose we have an m\times n matrix B which is of rank r much smaller than both m and n. As we saw in the introduction, we expect that such a matrix can be compressed to be stored with many fewer than mn entries. How can this be done?

Let’s work backwards and start with the answer to this question and then see why it works. Here’s a fact: a matrix B of rank r can be factored as B = LR^\top, where L is an m\times r matrix and R is an n\times r matrix. In other words, B can be factored as a “thin” matrix L with r columns times a “fat” matrix R^\top with r rows. We use the symbols L and R for these factors to stand for “left” and “right”; we emphasize that L and R are general m\times r and n\times r matrices, not necessarily possessing any additional structure.3Readers familiar with numerical linear algebra may instinctively want to assume that L and R are lower and upper triangular; we do not make this assumption. The fact that we write the second term in this factorization as a transposed matrix “R^\top” is unimportant: We adopt a convention where we write a fat matrix as the transpose of a thin matrix. This notational choice is convenient, as it allows us to easily distinguish between thin and fat matrices in formulas; this choice of notation is far from universal. We call a factorization such as B = LR^\top a rank factorization.4Other terms, such as full rank factorization or rank-revealing factorization, have been used to describe the same concept. A warning is that the term “rank-revealing factorization” can also refer to a factorization which encodes a good low-rank approximation to B rather than a genuine factorization of B.

Rank factorizations are useful as we can compactly store B by storing its factors L and R. This reduces the storage requirements of B to (m+n)r numbers down from mn numbers. For example, if we store a rank factorization of the low-rank approximation \hat{W} from our weather example, we need only store 2,730 numbers rather than 365,000. In addition to compressing B, we shall soon see that one can rapidly perform many calculations from the rank factorization LR^\top = B without ever forming B itself. For these reasons, whenever performing computations with a low-rank matrix, your first step should almost always be to express it using a rank factorization. From there, most computations can be done faster and using less storage.

Having hopefully convinced ourselves of the usefulness of rank factorizations, let us now convince ourselves that every rank-r matrix B does indeed possess a rank factorization B = LR^\top where L and R have r columns. As we recalled in the previous section, since B has rank r, there is a basis of B‘s column space consisting of r vectors \ell_1,\ldots,\ell_r. Collect these r vectors as columns of an m\times r matrix L = \begin{bmatrix} \ell_1 & \cdots & \ell_r\end{bmatrix}. But since the columns of L comprise a basis of the column space of B, every column of B can be written as a linear combination of the columns of L. For example, the jth column b_j of B can be written as a linear combination b_j = R_{j1} \ell_1 + \cdots + R_{jr} \ell_r, where we suggestively use the labels R_{j1},\ldots,R_{jr} for the scalar multiples in our linear combination. Collecting these coefficients into a matrix R with ijth entry R_{ij}, we have constructed a factorization B = LR^\top. (Check this!)

This construction gives us a look at what a rank factorization is doing. The columns of L comprise a basis for the column space of B and the rows of R^\top comprise a basis for the row space of B. Once we fix a “column basis” L, the “row basis” R^\top is comprised of linear combination coefficients telling us how to assemble the columns of B as linear combinations of the columns in L.5It is worth noting here that a slightly more expansive definition of rank factorization has also proved useful. In the more general definition, a rank factorization is a factorization of the form B =LMR^\top where L is m\times r, M is r\times r, and R^\top is r\times n. With this definition, we can pick an arbitrary column basis L and row basis R^\top. Then, there exists a unique nonsingular “middle” matrix M such that B = LMR^\top. Note that this means there exist many different rank factorizations of a matrix since one may pick different column bases L for B.6This non-uniqueness means one should take care to compute a rank factorization which is as “nice” as possible (say, by making sure L and R are as well-conditioned as is possible). If one modifies a rank factorization during the course of an algorithm, one should take care to make sure that the rank factorization remains nice. (As an example of what can go wrong, “unbalancing” between the left and right factors in a rank factorization can lead to convergence problems for optimization problems.)
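
To make these roles concrete, here is a minimal sketch in Python with NumPy (the rank-3 test matrix is made up): once a column basis L is fixed, the coefficients R^\top can be recovered by solving the least-squares problem L X = B.

    import numpy as np

    rng = np.random.default_rng(1)
    m, n, r = 50, 40, 3
    B = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # a rank-3 matrix

    # Assume the first r columns happen to form a column basis
    # (true with probability one for this random example).
    L = B[:, :r]
    Rt, *_ = np.linalg.lstsq(L, B, rcond=None)   # solves L @ Rt = B; Rt is r x n, i.e. R^T
    R = Rt.T

    print(np.allclose(L @ R.T, B))  # True: B = L R^T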

Now that we’ve convinced ourselves that every matrix indeed has a rank factorization, how do we compute them in practice? In fact, pretty much any matrix factorization will work. If you can think of a matrix factorization you’re familiar with (e.g., LU, QR, eigenvalue decomposition, singular value decomposition,…), you can almost certainly use it to compute a rank factorization. In addition, many dedicated methods have been developed for the specific purpose of computing rank factorizations which can have appealing properties which make them great for certain applications.

Let’s focus on one particular example of how a classic matrix factorization, the singular value decomposition, can be used to get a rank factorization. Recall that the singular value decomposition (SVD) of a (real) matrix B is a factorization B = U\Sigma V^\top where U and V are m\times m and n\times n (real) orthogonal matrices and \Sigma is a (possibly rectangular) diagonal matrix with nonnegative, descending diagonal entries \sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_{\min(m,n)}. These diagonal entries are referred to as the singular values of the matrix B. From the definition of rank, we can see that the rank of a matrix B is equal to its number of nonzero singular values. With this observation in hand, a rank factorization of B can be obtained by letting L be the first r columns of U and R^\top be the first r rows of \Sigma V^\top (note that the remaining rows of \Sigma V^\top are zero).
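
Here is a sketch of this SVD-based construction in Python with NumPy (the helper function name, the tolerance used to decide which singular values count as nonzero, and the test matrix are all assumptions for illustration):

    import numpy as np

    def rank_factorization_from_svd(B, tol=1e-10):
        """Return L (m x r) and R (n x r) with B = L @ R.T, built from an SVD of B."""
        U, sigma, Vt = np.linalg.svd(B, full_matrices=False)
        r = int(np.sum(sigma > tol * sigma[0]))   # count the (numerically) nonzero singular values
        L = U[:, :r]                              # first r columns of U
        R = (np.diag(sigma[:r]) @ Vt[:r, :]).T    # R^T = first r rows of Sigma V^T
        return L, R

    rng = np.random.default_rng(2)
    B = rng.standard_normal((300, 5)) @ rng.standard_normal((5, 200))  # a rank-5 matrix
    L, R = rank_factorization_from_svd(B)
    print(L.shape, R.shape)         # (300, 5) (200, 5)
    print(np.allclose(L @ R.T, B))  # True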

Computing with Rank Factorizations

Now that we have a rank factorization in hand, what is it good for? A lot, in fact. We’ve already seen that one can store a low-rank matrix expressed as a rank factorization using only (m+n)r numbers, down from mn numbers by storing all of its entries. Similarly, if we want to compute the matrix-vector product Bx for a vector x of length n, we can compute this product as Bx = L(R^\top x). This reduces the operation count down from 2mn operations to 2(m+n)r operations using the rank factorization. As a general rule of thumb, when we have something expressed as a rank factorization, we can usually expect to reduce our operation count (and storage costs) from something proportional to mn (or worse) down to something proportional to m+n.
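
As a concrete sketch of this speedup (Python with NumPy; the dimensions are made up, and B is formed explicitly only to check the answer):

    import numpy as np

    rng = np.random.default_rng(3)
    m, n, r = 2000, 1500, 10
    L = rng.standard_normal((m, r))
    R = rng.standard_normal((n, r))
    x = rng.standard_normal(n)

    B = L @ R.T                    # formed here only to check the answer; avoid this in practice

    y_slow = B @ x                 # about 2*m*n operations
    y_fast = L @ (R.T @ x)         # about 2*(m+n)*r operations

    print(np.allclose(y_slow, y_fast))  # True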

Let’s try something more complicated. Say we want to compute an SVD B = U\Sigma V^\top of B. In the previous section, we computed a rank factorization of B using an SVD, but suppose now we computed B = LR^\top in some other way. Our goal is to “upgrade” the general rank factorization B = LR^\top into an SVD of B. Computing the SVD of a general matrix B requires \mathcal{O}(mn\min(m,n)) operations (expressed in big O notation). Can we do better? Unfortunately, there’s a big roadblock for us: We need m^2+n^2 operations even to write down the matrices U and V, which already prevents us from achieving an operation count proportional to m+n like we’re hoping for. Fortunately, in most applications, only the first r columns of U and V are important. Thus, we can change our goal to compute a so-called economy SVD of B, which is a factorization B = \hat{U}\hat{\Sigma}\hat{V}^\top, where \hat{U} and \hat{V} are m\times r and n\times r matrices with orthonormal columns and \hat{\Sigma} is an r\times r diagonal matrix listing the nonzero singular values of B in decreasing order.

Let’s see how to upgrade a rank factorization B = LR^\top into an economy SVD B = \hat{U}\hat{\Sigma}\hat{V}^\top. Let’s break our procedure into steps:

  1. Compute (economy7The economy QR factorization of an m\times r thin matrix C is a factorization C=QR where Q is an m\times r matrix with orthonormal columns and R is an r\times r upper triangular matrix. The economy QR factorization is sometimes also called a thin or compact QR factorization, and can be computed in \mathcal{O}(mr^2) operations.) QR factorizations of L and R: L = Q_1T_1 and R = Q_2 T_2. Reader beware: We denote the “R” factors in the QR factorizations of L and R by T_1 and T_2, as we have already used the letter R to denote the second factor in our rank factorization.
  2. Compute the small matrix S = T_1T_2^\top.
  3. Compute an SVD of S=\tilde{U}\hat{\Sigma}\tilde{V}^\top.
  4. Set \hat{U} := Q_1\tilde{U} and \hat{V} := Q_2\tilde{V}.

By following the procedure line-by-line, one can check that indeed the matrices \hat{U} and \hat{V} have orthonormal columns and B = \hat{U}\hat{\Sigma}\hat{V}^\top, so this procedure indeed computes an economy SVD of B. Let’s see why this approach is also faster. Let’s count operations line-by-line:

  1. Economy QR factorizations of an m\times r and an n\times r matrix require \mathcal{O}(mr^2) and \mathcal{O}(nr^2) operations.
  2. The product of two r\times r matrices requires \mathcal{O}(r^3) operations.
  3. The SVD of an r\times r matrix requires \mathcal{O}(r^3) operations.
  4. The products of an m\times r and an n\times r matrix with r\times r matrices require \mathcal{O}(mr^2) and \mathcal{O}(nr^2) operations.

Accounting for all the operations, we see the operation count is \mathcal{O}((m+n)r^2), a significant improvement over the \mathcal{O}(mn\min(m,n)) operations for a general matrix.8We can ignore the term of order \mathcal{O}(r^3) since r \le m,n so r^3 is \mathcal{O}((m+n)r^2).
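
Here is a sketch of the whole procedure in Python with NumPy (the helper function name and the random rank factorization being upgraded are made up for illustration):

    import numpy as np

    def economy_svd_from_rank_factorization(L, R):
        """Given B = L @ R.T with L (m x r) and R (n x r), return U_hat, sigma, V_hat
        with B = U_hat @ np.diag(sigma) @ V_hat.T, using O((m+n) r^2) operations."""
        Q1, T1 = np.linalg.qr(L)               # step 1: economy QR of L
        Q2, T2 = np.linalg.qr(R)               #         and of R
        S = T1 @ T2.T                          # step 2: small r x r matrix
        U_t, sigma, Vt_t = np.linalg.svd(S)    # step 3: SVD of the small matrix
        U_hat = Q1 @ U_t                       # step 4: assemble the economy SVD factors
        V_hat = Q2 @ Vt_t.T
        return U_hat, sigma, V_hat

    rng = np.random.default_rng(4)
    m, n, r = 500, 400, 7
    L = rng.standard_normal((m, r))
    R = rng.standard_normal((n, r))
    U_hat, sigma, V_hat = economy_svd_from_rank_factorization(L, R)
    print(np.allclose(U_hat @ np.diag(sigma) @ V_hat.T, L @ R.T))  # True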

As the previous examples show, many (if not most) things we want to compute from a low-rank matrix B can be dramatically more efficiently computed using its rank factorization. The strategy is simple in principle, but can be subtle to execute: Whatever you do, avoid explicitly computing the product LR^\top at all costs. Instead, compute with the matrices L and R directly, only operating on m\times r, n\times r, and r\times r matrices.

Another important type of computation one can perform with low-rank matrices is the low-rank update, where we have already solved a problem for a matrix A and we want to re-solve it efficiently with the matrix A+B where B has low rank. If B is expressed in a rank factorization, very often we can do this efficiently as well, as we discuss in the following bonus section. As this is somewhat more niche, the uninterested reader should feel free to skip this and continue to the next section.

Low-rank Updates
Suppose we’ve solved a system of linear equations Ax = b by computing an LU factorization of the n\times n matrix A. We now wish to solve the system of linear equations (A+B)y = c, where B is a low-rank matrix expressed as a rank factorization B = LR^\top. Our goal is to do this without recomputing a new LU factorization from scratch. 

One solution uses the Sherman-Morrison-Woodbury formula, which has a nice proof via the Schur complement and block Gaussian elimination that I described here. In our case, the formula yields

(3)   \begin{equation*} (A+B)^{-1} = (I_n-(A^{-1}L)(I_r+R^\top(A^{-1}L))^{-1}R^\top)A^{-1}, \end{equation*}

where I_n and I_r denote the n\times n and r\times r identity matrices. This formula can be easily verified by multiplying with A+B and confirming one indeed recovers the identity matrix. This formula suggests the following approach to solving (A+B)y = c. First, use our already-computed LU factorization for A to compute S:=A^{-1}L. (This involves solving r linear systems of the form As = \ell, one for each column \ell of L, to obtain the corresponding column s of S.) We then compute an LU factorization of the much smaller r\times r matrix I_r+R^\top S. Finally, we use our LU factorization of A once more to compute z = A^{-1}c, from which our solution y = (A+B)^{-1}c is given by

(4)   \begin{equation*} y = (I_n-(A^{-1}L)(I_r+R^\top(A^{-1}L))^{-1}R^\top)A^{-1}c = z - S((I_r+R^\top S)^{-1}(R^\top z)). \end{equation*}

The net result is we solved our rank-r-updated linear system using r+1 solutions of the original linear system with no need to recompute any LU factorizations of n\times n matrices. We’ve reduced the solution of the system (A+B)y=c to an operation count of \mathcal{O}(n^2r) which is dramatically better than the \mathcal{O}(n^3) operation count of recomputing the LU factorization from scratch.
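
Here is a minimal sketch of this procedure in Python, using SciPy’s lu_factor and lu_solve with made-up data:

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    rng = np.random.default_rng(5)
    n, r = 1000, 4
    A = rng.standard_normal((n, n)) + n * np.eye(n)   # a well-conditioned test matrix
    L = rng.standard_normal((n, r))
    R = rng.standard_normal((n, r))
    c = rng.standard_normal(n)

    lu, piv = lu_factor(A)            # assume this LU factorization was already computed

    S = lu_solve((lu, piv), L)        # S = A^{-1} L: r triangular solve pairs
    z = lu_solve((lu, piv), c)        # z = A^{-1} c
    small = np.eye(r) + R.T @ S       # the r x r matrix I_r + R^T A^{-1} L
    y = z - S @ np.linalg.solve(small, R.T @ z)   # Eq. (4)

    print(np.allclose((A + L @ R.T) @ y, c))      # True: y solves (A + B) y = c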

This simple example demonstrates a broader pattern: If a matrix problem took \mathcal{O}(n^3) operations to solve originally, one can usually solve the problem after a rank-r update with only something like \mathcal{O}(n^2r) additional operations.9Sometimes, this goal of \mathcal{O}(n^2r) can be overly optimistic. For symmetric eigenvalue problems, for instance, the operation count may be a bit larger by a (poly)logarithmic factor, say something like \mathcal{O}(n^2r\log n). An operation count like this still represents a dramatic improvement over the operation count \mathcal{O}(n^3) of recomputing from scratch. For instance, not only can we solve rank-r-updated linear systems in \mathcal{O}(n^2r) operations, but we can actually update the LU factorization itself in \mathcal{O}(n^2r) operations. Similar procedures exist to update Cholesky, QR, symmetric eigenvalue, and singular value decompositions in \mathcal{O}(n^2r) operations.

An important caveat is that, as always with linear algebraic computations, it’s important to read the fine print. There are many algorithms for computing low-rank updates to different matrix factorizations with dramatically different accuracy properties. Just because in principle rank-updated versions of these factorizations can be computed doesn’t mean it’s always advisable. With this qualification stated, these ways of updating matrix computations with low-rank updates can be a powerful tool in practice and reinforce the computational benefits of low-rank matrices expressed via rank factorizations.

Low-rank Approximation

As we’ve seen, computing with low-rank matrices expressed as rank factorizations can yield significant computational savings. Unfortunately, many matrices arising in applications are not low-rank. In fact, even if a matrix in an application is low-rank, the small rounding errors we incur in storing it on a computer may destroy the matrix’s low rank, increasing its rank to the maximum possible value of \min(m,n). The solution in this case is straightforward: approximate our high-rank matrix with a low-rank one, which we express in algorithmically useful form as a rank factorization.

Here’s one simple way of constructing low-rank approximations. Start with a matrix B and compute a singular value decomposition of B, B = U\Sigma V^\top. Recall from two sections previous that the rank of the matrix B is equal to its number of nonzero singular values. But what if B‘s singular values aren’t exactly zero, but they’re very small? It seems reasonable to expect that B is nearly low-rank in this case. Indeed, this intuition is true. To approximate B by a low-rank matrix, we can truncate B‘s singular value decomposition by setting B‘s small singular values to zero. If we zero out all but the r largest singular values of B, this procedure results in a rank-r matrix \hat{B} which approximates B. If the singular values that we zeroed out were tiny, then \hat{B} will be very close to B and the low-rank approximation is accurate. This matrix \hat{B} is called an r-truncated singular value decomposition of B, and it is easy to represent it using a rank factorization once we have already computed an SVD of B.
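
Here is a sketch of the r-truncated SVD, represented as a rank factorization, in Python with NumPy (the helper function name and the test matrix, a rank-10 matrix plus small noise, are made up for illustration):

    import numpy as np

    def truncated_svd(B, r):
        """Return L (m x r) and R (n x r) so that B_r = L @ R.T is the r-truncated SVD of B."""
        U, sigma, Vt = np.linalg.svd(B, full_matrices=False)
        L = U[:, :r] * sigma[:r]      # scale the first r left singular vectors
        R = Vt[:r, :].T
        return L, R

    rng = np.random.default_rng(6)
    B = (rng.standard_normal((200, 10)) @ rng.standard_normal((10, 150))
         + 1e-6 * rng.standard_normal((200, 150)))
    L, R = truncated_svd(B, 10)
    print(np.linalg.norm(B - L @ R.T) / np.linalg.norm(B))  # tiny relative error, on the order of the noise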

It is important to remember that low-rank approximations are, just as the name says, approximations. Not every matrix is well-approximated by one of small rank. A matrix may be excellently approximated by a rank-100 matrix and horribly approximated by a rank-90 matrix. If an algorithm uses a low-rank approximation as a building block, then the approximation error (the difference between B and its low-rank approximation \hat{B}) and its propagation through further steps of the algorithm need to be analyzed and controlled along with other sources of error in the procedure.

Despite this caveat, low-rank approximations can be startlingly effective. Many matrices occurring in practice can be approximated to negligible error by a matrix with very modestly-sized rank. We shall return to this surprising ubiquity of approximately low-rank matrices at the end of the article.

We’ve seen one method for computing low-rank approximations, the truncated singular value decomposition. As we shall see in the next section, the truncated singular value decomposition produces excellent low-rank approximations, the best possible in a certain sense, in fact. As we mentioned above, almost every matrix factorization can be used to compute rank factorizations. Can these matrix factorizations also compute high quality low-rank approximations?

Let’s consider a specific example to see the underlying ideas. Say we want to compute a low-rank approximation to a matrix B by a QR factorization. To do this, we want to compute a QR factorization B = QR and then throw away all but the first r columns of Q and the first r rows of R. This will be a good approximation if the rows we discard from R are “small” compared to the rows of R we keep. Unfortunately, this is not always the case. As a worst-case example, if the first r columns of B are zero, then the first r columns of R will be zero, the first r columns of Q carry no information about B, and the low-rank approximation computed this way is worthless.

We need to modify something to give QR factorization a fighting chance for computing good low-rank approximations. The simplest way to do this is by using column pivoting, where we shuffle the columns of B around to bring the columns of largest size “to the front of the line” as we compute the QR factorization. QR factorization with column pivoting produces excellent low-rank approximations in a large number of cases, but it can still give poor-quality approximations for some special examples. For this reason, numerical analysts have developed so-called strong rank-revealing QR factorizations, such as the one developed by Gu and Eisenstat, which are guaranteed to compute quite good low-rank approximations for every matrix B. Similarly, there exist strong rank-revealing LU factorizations which can compute good low-rank approximations using LU factorization.
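
Here is a sketch contrasting plain truncated QR with column-pivoted QR (using SciPy’s qr routine; the test matrix, whose leading columns are zero, is contrived to trigger the worst case described above):

    import numpy as np
    from scipy.linalg import qr

    rng = np.random.default_rng(7)
    m, n, r = 300, 200, 5
    B = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))   # a rank-5 matrix
    B[:, :r] = 0.0                    # zero out the first r columns (the bad case from above)

    # Unpivoted QR, truncated to rank r: a poor approximation here
    Q, T = qr(B, mode='economic')
    B_unpivoted = Q[:, :r] @ T[:r, :]

    # QR with column pivoting: B[:, perm] = Q @ T
    Q, T, perm = qr(B, mode='economic', pivoting=True)
    B_pivoted = np.empty_like(B)
    B_pivoted[:, perm] = Q[:, :r] @ T[:r, :]   # undo the column permutation

    print(np.linalg.norm(B - B_unpivoted) / np.linalg.norm(B))  # large relative error (nearly useless)
    print(np.linalg.norm(B - B_pivoted) / np.linalg.norm(B))    # tiny (essentially exact)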

The upshot is that most matrix factorizations you know and love can be used to compute good-quality low-rank approximations, possibly requiring extra tricks like row or column pivoting. But this simple summary, like the previous discussion, leaves open important questions: what do we mean by good-quality low-rank approximations? How good can a low-rank approximation be?

Best Low-rank Approximation

As we saw in the last section, one way to approximate a matrix by a lower rank matrix is by a truncated singular value decomposition. In fact, in some sense, this is the best way of approximating a matrix by one of lower rank. This fact is encapsulated in a theorem commonly referred to as the Eckart–Young theorem, though the essence of the result is originally due to Schmidt and the modern version of the result to Mirsky.10A nice history of the Eckart–Young theorem is provided in the book Matrix Perturbation Theory by Stewart and Sun.

But what do we mean by best approximation? One ingredient we need is a way of measuring how big the discrepancy between two matrices is. Let’s define a measure of the size of a matrix E which we will call E‘s norm, which we denote as \|E\|. If B is a matrix and \hat{B} is a low-rank approximation to it, then \hat{B} is a good approximation to B if the norm \|B-\hat{B}\| is small. There might be many different ways of measuring the size of the error, but we have to insist on a couple of properties of our norm \|\cdot\| for it to really define a sensible measure of size. For instance, if the norm of a matrix E is \|E\| = 5, then the norm of 10E should be \|10E\| = 10\|E\| = 50. The properties we require a norm to have are listed on the Wikipedia page for norms. We shall also insist on one more property for our norm: the norm should be unitarily invariant.11Note that every unitarily invariant norm is a special type of vector norm (called a symmetric gauge function) evaluated on the singular values of the matrix. What this means is the norm of a matrix E remains the same if it is multiplied on the left or right by an orthogonal matrix. This property is reasonable since multiplication by orthogonal matrices geometrically represents a rotation or reflection12This is not true in dimensions higher than 2, but it gives the right intuition that orthogonal matrices preserve distances. which preserves distances between points, so it makes sense that we should demand that the size of a matrix as measured by our norm does not change by such multiplications. Two important and popular matrix norms satisfy the unitarily invariant property: the Frobenius norm \| E\|_{\rm F} = \sqrt{\sum_{ij} |E_{ij}|^2} and the spectral (or operator 2-) norm \| E \|_{\rm op} = \sigma_{\rm max}(E), which measures the largest singular value.13Both the Frobenius and spectral norms are examples of an important subclass of unitarily invariant norms called Schatten norms. Another example of a Schatten norm, important in matrix completion, is the nuclear norm (sum of the singular values).

With this preliminary out of the way, the Eckart–Young theorem states that the singular value decomposition of B truncated to rank r is the closest rank-r matrix to B when distances are measured using any unitarily invariant norm \|\cdot\|. If we let B_r denote the r-truncated singular value decomposition of B, then the Eckart–Young theorem states that

(5)   \begin{equation*} \| B - B_r \| \le \|B - C\| \mbox{ for all matrices $C$ of rank $r$}. \end{equation*}

Less precisely, the r-truncated singular value decomposition is the best rank-r approximation to a matrix.

Let’s unpack the Eckart–Young theorem using the spectral and Frobenius norms. In this context, a brief calculation combined with the Eckart–Young theorem proves that for any rank-r matrix C, we have

(6)   \begin{equation*} \| B - C \|_{\rm op} \ge \sigma_{r+1},\quad \| B - C\|_{\rm F} \ge \sqrt{\sum_{j>r} \sigma_j^2}, \end{equation*}

where \sigma_1,\sigma_2,\ldots are the singular values of B. These bounds are quite intuitive. When we measure the error in the spectral norm, the low-rank approximation error is “small” precisely when each singular value we zero out is “small”. When we measure the error in the Frobenius norm, the low-rank approximation error is “small” when the singular values we zero out are “small” in aggregate, that is, when their squares sum to a small number.
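
As a quick numerical sanity check of Eq. (6) (a sketch with a random test matrix; the r-truncated SVD attains these lower bounds):

    import numpy as np

    rng = np.random.default_rng(8)
    B = rng.standard_normal((100, 80))
    U, sigma, Vt = np.linalg.svd(B, full_matrices=False)

    r = 10
    B_r = U[:, :r] @ np.diag(sigma[:r]) @ Vt[:r, :]   # the r-truncated SVD

    # Spectral-norm error equals sigma_{r+1} (0-indexed sigma[r]):
    print(np.isclose(np.linalg.norm(B - B_r, 2), sigma[r]))
    # Frobenius-norm error equals the root-sum-of-squares of the discarded singular values:
    print(np.isclose(np.linalg.norm(B - B_r, 'fro'), np.sqrt(np.sum(sigma[r:] ** 2))))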

The Eckart–Young theorem shows that possessing a good low-rank approximation is equivalent to the singular values decaying rapidly.14At least when measured in unitarily invariant norms. A surprising result shows that even the identity matrix, whose singular values are all equal to one, has good low-rank approximations in the maximum entrywise absolute value norm; see, e.g., Theorem 1.0 in this article. If a matrix does not have nice singular value decay, no good low-rank approximation exists, whether computed by the r-truncated SVD or otherwise.

Why Are So Many Matrices (Approximately) Low-rank?

As we’ve seen, we can perform computations with low-rank matrices represented using rank factorizations much faster than with general matrices. But all of this would be a moot point if low-rank matrices rarely occurred in practice. In fact, precisely the opposite is true: Approximately low-rank matrices occur all the time in practice.

Sometimes, exact low-rank matrices appear for algebraic reasons. For instance, when we perform one step of Gaussian elimination to compute an LU factorization, the lower right portion of the eliminated matrix, the so-called Schur complement, is a rank-one update to the corresponding block of the original matrix. In such cases, a rank-r matrix might appear in a computation when one performs r steps of some algebraic process: The appearance of low-rank matrices in such cases is unsurprising.
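
To see this rank-one structure concretely, here is a small sketch of one step of Gaussian elimination on a random matrix:

    import numpy as np

    rng = np.random.default_rng(9)
    A = rng.standard_normal((6, 6))

    # One step of Gaussian elimination: eliminate the first column using the (1,1) pivot.
    a11 = A[0, 0]
    col = A[1:, 0]      # first column below the pivot
    row = A[0, 1:]      # first row to the right of the pivot

    schur = A[1:, 1:] - np.outer(col, row) / a11   # the Schur complement

    # The Schur complement differs from the corresponding block of A by a rank-one matrix:
    print(np.linalg.matrix_rank(A[1:, 1:] - schur))   # 1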

However, often, matrices appearing in applications are (approximately) low-rank for analytic reasons instead. Consider the weather example from the start again. One might reasonably model the temperature on Earth as a smooth function T(\cdot,\cdot) of position x and time t. If we then let x_i denote the position on Earth of station i and t_j the time representing the jth day of a given year, then the entries of the W matrix are given by W_{ij} = T(x_i,t_j). As discussed in my article on smoothness and degree of approximation, a smooth function of one variable can be excellently approximated by, say, a polynomial of low degree. Analogously, a smooth function depending on two arguments, such as our function T(\cdot,\cdot), can be excellently approximated by a separable expansion of rank r:

(7)   \begin{equation*} T(x,t) \approx \phi_1(x) \psi_1(t) + \cdots + \phi_r(x) \psi_r(t). \end{equation*}

Similar to functions of a single variable, the degree to which a function T(\cdot,\cdot) can be approximated by a separable function of small rank depends on the degree of smoothness of the function T(\cdot,\cdot). Assuming the function T(\cdot,\cdot) is quite smooth, T(\cdot,\cdot) can be approximated by a separable expansion of small rank r. This leads immediately to a low-rank approximation to the matrix W given by the rank factorization

(8)   \begin{equation*} W \approx \begin{bmatrix} \phi_1(x_1) & \cdots & \phi_r(x_1) \\ \vdots & \ddots & \vdots \\ \phi_1(x_{1000}) & \cdots & \phi_r(x_{1000}) \end{bmatrix}\begin{bmatrix} \psi_1(t_1) & \cdots & \psi_r(t_1) \\ \vdots & \ddots & \vdots \\ \psi_1(t_{365}) & \cdots & \psi_r(t_{365}) \end{bmatrix}^\top. \end{equation*}

Thus, in the context of our weather example, we see that the data matrix can be expected to be low-rank under the reasonable-sounding assumption that the temperature depends smoothly on space and time.

What does this mean in general? Let’s speak informally. Suppose that the ijth entries of a matrix B are samples f(x_i,y_j) from a smooth function f(\cdot,\cdot) for points x_1,\ldots,x_m and y_1,\ldots,y_n. Then we can expect that B will be approximately low-rank. From a computational point of view, we don’t need to know a separable expansion for the function f(\cdot,\cdot) or even the form of the function f(\cdot,\cdot) itself: If the smooth function f(\cdot,\cdot) exists and B is sampled from it, then B is approximately low-rank and we can find a low-rank approximation for B using the truncated singular value decomposition.15Note here an important subtlety. A more technically precise version of what we’ve stated here is that: if f(\cdot,\cdot) depending on inputs x and y is sufficiently smooth for (x,y) in the product of compact regions \Omega_x and \Omega_y, then an m\times n matrix B_{ij} = f(x_i,y_j) with x_i \in \Omega_x and y_j \in \Omega_y will be low-rank in the sense that it can be approximated to accuracy \epsilon by a rank-r matrix where r grows slowly as m and n increase and \epsilon decreases. Note that, phrased this way, the low-rank property of B is asymptotic in the size m and n and the accuracy \epsilon. If f(\cdot,\cdot) is not smooth on the entirety of the domain \Omega_x\times \Omega_y or the size of the domains \Omega_x and \Omega_y grow with m and n, these asymptotic results may no longer hold. And if m and n are small enough or \epsilon is large enough, B may not be well approximated by a matrix of small rank. Only when there are enough rows and columns will meaningful savings from low-rank approximation be possible.
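
As a small numerical illustration of this principle (a sketch; the particular smooth function below is an arbitrary choice, not one from any application), sampling a smooth function of two variables on a grid produces a matrix whose singular values decay rapidly:

    import numpy as np

    m, n = 500, 400
    x = np.linspace(0, 1, m)
    y = np.linspace(0, 1, n)

    # Sample a smooth function f(x, y) on a grid of points
    F = np.exp(-(x[:, None] - y[None, :]) ** 2) * np.cos(3 * x[:, None] + y[None, :])

    sigma = np.linalg.svd(F, compute_uv=False)
    print(np.sum(sigma > 1e-12 * sigma[0]))   # only a small number of numerically significant singular values
    print(sigma[:10] / sigma[0])              # the leading singular values already decay rapidly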

This “smooth function” explanation for the prevalence of low-rank matrices is the reason for the appearance of low-rank matrices in fast multipole method-type fast algorithms in computational physics and has been proposed16This article considers piecewise analytic functions rather than smooth functions; the principle is more-or-less the same. as a general explanation for the prevalence of low-rank matrices in data science.

(The low-rank structure of highly structured matrices like Hankel, Toeplitz, and Cauchy matrices17Computations with these matrices can often also be accelerated with approaches other than low-rank structure; see my post on the fast Fourier transform for a discussion of fast Toeplitz matrix-vector products. which appear in control theory applications has a different explanation, involving a certain Sylvester equation; see this lecture for a great explanation.)

Upshot: A matrix is low-rank if it has many fewer linearly independent columns than columns. Such matrices can be efficiently represented using rank-factorizations, which can be used to perform various computations rapidly. Many matrices appearing in applications which are not genuinely low-rank can be well-approximated by low-rank matrices; the best possible such approximation is given by the truncated singular value decomposition. The prevalence of low-rank matrices in diverse application areas can partially be explained by noting that matrices sampled from smooth functions are approximately low-rank.

