Big Ideas in Applied Math: Markov Chains

In this post, we’ll talk about Markov chains, a useful and general model of a random system evolving in time.

PageRank

To see how Markov chains can be useful in practice, we begin our discussion with the famous PageRank problem. The goal is to assign a numerical ranking to each website on the internet measuring how important it is. To do this, we form a mathematical model of an internet user randomly surfing the web. The importance of each website will be measured by the number of times this user visits each page.

The PageRank model of an internet user is as follows: Start the user at an arbitrary initial website. At each step, the user makes one of two choices:

• With 85% probability, the user follows a random link on their current website.
• With 15% probability, the user gets bored and jumps to a random website selected from the entire internet.

As with any mathematical model, this is a ridiculously oversimplified description of how a person would surf the web. However, like any good mathematical model, it is useful. Because of the way the model is designed, the user will spend more time on websites with many incoming links. Thus, websites with many incoming links will be rated as important, which seems like a sensible choice.

An example of the PageRank distribution for a small internet is shown below. As one would expect, the surfer spends a large part of their time on website B, which has many incoming links. Interestingly, the user spends almost as much of their time on website C, whose only links are to and from B. Under the PageRank model, a website is important if it is linked to by an important website, even if that is the only website linking to it.

Markov Chains in General

Having seen one Markov chain, the PageRank internet surfer, let’s talk about Markov chains in general. A (time-homogeneous) Markov chain consists of two things: a set of states and probabilities for transitioning between states:

• Set of states. For this discussion, we limit ourselves to Markov chains which can only exist in finitely many different states. To simplify our discussion, label the possible states using numbers $1, 2, \ldots, n$.
• Transition probabilities. The defining property of a (time-homogeneous) Markov chain is that, at any point in time $t$, if the state is $i$, the probability of moving to state $j$ is a fixed number $p_{ij}$. In particular, the probability of moving from $i$ to $j$ does not depend on the time $t$ or the past history of the chain before time $t$; only the value of the chain at time $t$ matters.

Denote the state of the Markov chain at times $t = 0, 1, 2, \ldots$ by $x_0, x_1, x_2, \ldots$. Note that the states $x_t$ are random quantities. We can write the Markov chain property using the language of conditional probability:

$$\mathbb{P}\{x_{t+1} = j \mid x_t = i, x_{t-1} = i_{t-1}, \ldots, x_0 = i_0\} = \mathbb{P}\{x_{t+1} = j \mid x_t = i\} = p_{ij}.$$

This equation states that the probability that the system is in state $j$ at time $t+1$ given the entire history of the system depends only on the value $i$ of the chain at time $t$. This probability is the transition probability $p_{ij}$.

Let’s see how the PageRank internet surfer fits into this model:

• Set of states. Here, the set of states are the websites, which we label $1, 2, \ldots, n$.
• Transition probabilities. Consider two websites $i$ and $j$. If $i$ does not have a link to $j$, then the only way of going from $i$ to $j$ is if the surfer randomly gets bored (probability 15%) and picks website $j$ to visit at random (probability $1/n$). Thus,

$$p_{ij} = \frac{0.15}{n} \qquad \text{(if $i$ does not link to $j$).}$$

Suppose instead that $i$ does link to $j$ and $i$ has $d_i$ outgoing links. Then, in addition to the probability computed before, the user has an 85% chance of following a link and a $1/d_i$ chance of picking $j$ as that link. Thus,

$$p_{ij} = \frac{0.15}{n} + \frac{0.85}{d_i} \qquad \text{(if $i$ links to $j$).}$$
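As a concrete sketch of these two formulas, the following code builds the transition matrix $P$ for a small, made-up link graph; the `links` dictionary is a hypothetical example, not data from the post.

```python
import numpy as np

def pagerank_matrix(links, damping=0.85):
    """Build the PageRank transition matrix P from a dict mapping
    each site to the list of sites it links to."""
    n = len(links)
    # Boredom jumps contribute 0.15/n to every entry.
    P = np.full((n, n), (1 - damping) / n)
    for i, outgoing in links.items():
        for j in outgoing:
            # Following one of the d_i links contributes 0.85/d_i.
            P[i, j] += damping / len(outgoing)
    return P

# Hypothetical four-site internet: 0 -> 1, 2;  1 -> 2;  2 -> 1;  3 -> 0.
links = {0: [1, 2], 1: [2], 2: [1], 3: [0]}
P = pagerank_matrix(links)
print(P.sum(axis=1))  # each row sums to one, as a transition matrix must
```

Each row of $P$ sums to one, since from any website the surfer must go somewhere.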

Markov Chains and Linear Algebra

For a non-random process $x_0, x_1, x_2, \ldots$, we can understand the process's evolution by determining its state $x_t$ at every point in time $t$. Since Markov chains are random processes, it is not enough to track the state of the process at every time. Rather, we must understand the probability distribution of the state $x_t$ at every point in time $t$.

It is customary in Markov chain theory to represent a probability distribution on the states $\{1, \ldots, n\}$ by a row vector $\rho^\top$.1To really emphasize that probability distributions are row vectors, we shall write them as transposes of column vectors. So $\rho$ is a column vector, but $\rho^\top$, which represents the probability distribution, is a row vector. The $i$th entry $\rho_i$ stores the probability that the system is in state $i$. Naturally, as $\rho^\top$ is a probability distribution, its entries must be nonnegative ($\rho_i \ge 0$ for every $i$) and add to one ($\sum_{i=1}^n \rho_i = 1$).

Let $\rho_0^\top, \rho_1^\top, \rho_2^\top, \ldots$ denote the probability distributions of the states $x_0, x_1, x_2, \ldots$. It is natural to ask: How are the distributions $\rho_0^\top, \rho_1^\top, \rho_2^\top, \ldots$ related to each other? Let's answer this question.

The probability that $x_{t+1}$ is in state $j$ is the $j$th entry of $\rho_{t+1}^\top$:

$$(\rho_{t+1})_j = \mathbb{P}\{x_{t+1} = j\}.$$

To compute this probability, we break into cases based on the value of the process at time $t$: either $x_t = 1$ or $x_t = 2$ or … or $x_t = n$; only one of these cases can be true at once. When we have an “or” of random events and these events are mutually exclusive (only one can be true at once), then the probabilities add:

$$\mathbb{P}\{x_{t+1} = j\} = \sum_{i=1}^n \mathbb{P}\{x_{t+1} = j \text{ and } x_t = i\}.$$

Now we use some conditional probability. The probability that $x_{t+1} = j$ and $x_t = i$ is the probability that $x_t = i$ times the probability that $x_{t+1} = j$ conditional on $x_t = i$. That is,

$$\mathbb{P}\{x_{t+1} = j \text{ and } x_t = i\} = \mathbb{P}\{x_t = i\} \cdot \mathbb{P}\{x_{t+1} = j \mid x_t = i\}.$$

Now, we can simplify using our definitions. The probability that $x_t = i$ is just $(\rho_t)_i$ and the probability of moving from $i$ to $j$ is $p_{ij}$. Thus, we conclude

$$(\rho_{t+1})_j = \sum_{i=1}^n (\rho_t)_i \, p_{ij}.$$

Phrased in the language of linear algebra, we’ve shown

$$\rho_{t+1}^\top = \rho_t^\top P.$$

That is, if we view the transition probabilities $p_{ij}$ as comprising an $n \times n$ matrix $P$, then the distribution at time $t+1$ is obtained by multiplying the distribution at time $t$ by the transition matrix $P$. In particular, if we iterate this result, we obtain that the distribution at time $t$ is given by

$$\rho_t^\top = \rho_0^\top P^t.$$

Thus, the distribution at time $t$ is the distribution at time $0$ multiplied by the $t$th power of the transition matrix $P$.
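The recurrence above can be sketched in a few lines of code. The two-state transition matrix below is a made-up example, not one from the post.

```python
import numpy as np

# Hypothetical 2-state chain: from state 0, stay with probability 0.9;
# from state 1, stay with probability 0.5.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

rho = np.array([1.0, 0.0])  # start concentrated on state 0
for t in range(100):
    rho = rho @ P            # rho_{t+1}^T = rho_t^T P

# After many steps, rho approaches the stationary distribution.
print(rho)
```

For this particular chain, one can check by hand that the limiting distribution is $(5/6, 1/6)$, which the iteration recovers numerically.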

Convergence to Stationarity

Let’s go back to our web surfer again. At time $t = 0$, we started our surfer at a particular website, say website $1$. As such, the probability distribution2To keep notation clean going forward, we will drop the transposes off of probability distributions, except when working with them linear algebraically. $\rho_0$ at time $0$ is concentrated just on website $1$, with no other website having any probability at all. In the first few steps, most of the probability will remain in the vicinity of website $1$: in the websites linked to by $1$, the websites linked to by the websites linked to by $1$, and so on. However, if we run the chain long enough, the surfer will have time to wander widely across the web, and the probability distribution will become less and less influenced by the chain’s starting location. This motivates the following definition:

Definition. A Markov chain satisfies the mixing property if the probability distributions $\rho_0, \rho_1, \rho_2, \ldots$ converge to a single fixed probability distribution $\rho$ regardless of how the chain is initialized (i.e., independent of the starting distribution $\rho_0$).

The distribution $\rho$ for a mixing Markov chain is known as a stationary distribution because it does not change under the action of $P$:

$$\rho^\top = \rho^\top P. \qquad \text{(St)}$$

To see this, recall the recurrence

$$\rho_{t+1}^\top = \rho_t^\top P,$$

take the limit as $t \to \infty$, and observe that both $\rho_{t+1}^\top$ and $\rho_t^\top$ converge to $\rho^\top$.

One of the basic questions in the theory of Markov chains is finding conditions under which the mixing property (or suitable weaker versions of it) hold. To answer this question, we will need the following definition:

A Markov chain is primitive if, after running the chain for some number of steps $t_0$, the chain has positive probability of moving between any two states. That is,

$$(P^{t_0})_{ij} > 0 \quad \text{for all } 1 \le i, j \le n \text{ and some } t_0 \ge 1.$$
The fundamental theorem of Markov chains is that primitive chains satisfy the mixing property.

Theorem (fundamental theorem of Markov chains). Every primitive Markov chain is mixing. In particular, there exists one and only one probability distribution $\rho$ satisfying the stationarity property (St), and the probability distributions $\rho_0, \rho_1, \rho_2, \ldots$ converge to $\rho$ when the chain is initialized in any probability distribution $\rho_0$. Every entry of $\rho$ is strictly positive.

Let’s see an example of the fundamental theorem with the PageRank surfer. After one step, there is at least a $0.15/n$ chance of moving from any website $i$ to any other website $j$. Thus, the chain is primitive (with $t_0 = 1$). Consequently, there is a unique stationary distribution $\rho$, and the surfer will converge to this stationary distribution regardless of which website they start at.
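As a numerical illustration of the fundamental theorem, here is a sketch with a made-up primitive chain (every entry of $P$ is positive, so it is primitive with $t_0 = 1$, just like the PageRank chain): two very different initializations converge to the same stationary distribution.

```python
import numpy as np

# A small primitive chain with hypothetical transition probabilities.
P = np.array([[0.2, 0.3, 0.5],
              [0.4, 0.4, 0.2],
              [0.1, 0.6, 0.3]])

start_a = np.array([1.0, 0.0, 0.0])  # start at state 0
start_b = np.array([0.0, 0.0, 1.0])  # start at state 2
for _ in range(200):
    start_a = start_a @ P
    start_b = start_b @ P

# Both initializations land on the same stationary distribution.
print(start_a, start_b)
```

The two printed vectors agree to machine precision and are fixed by $P$, as (St) requires.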

Going Backwards in Time

Often, it is helpful to consider what would happen if we ran a Markov chain backwards in time. To see why this is an interesting idea, suppose you run website $i$ and you’re interested in where your traffic is coming from. One way of achieving this would be to initialize the Markov chain at $i$ and run the chain backwards in time. Rather than asking, “given that I’m at $i$ now, where would a user go next?”, you ask “given that I’m at $i$ now, where do I expect to have come from?”

Let’s formalize this notion a little bit. Consider a primitive Markov chain $x_0, x_1, x_2, \ldots$ with stationary distribution $\rho$. We assume that we initialize this Markov chain in the stationary distribution. That is, we pick $\rho$ as our initial distribution for $x_0$. The time-reversed Markov chain is defined as follows: The probability $p_{ij}^{\rm rev}$ of moving from $i$ to $j$ in the time-reversed Markov chain is the probability that I was at state $j$ one step previously given that I’m at state $i$ now:

$$p_{ij}^{\rm rev} = \mathbb{P}\{x_t = j \mid x_{t+1} = i\}.$$

To get a nice closed-form expression for the reversed transition probabilities $p_{ij}^{\rm rev}$, we can invoke Bayes’ theorem:

$$p_{ij}^{\rm rev} = \frac{\mathbb{P}\{x_{t+1} = i \mid x_t = j\} \cdot \mathbb{P}\{x_t = j\}}{\mathbb{P}\{x_{t+1} = i\}} = \frac{\rho_j \, p_{ji}}{\rho_i}. \qquad \text{(Rev)}$$

The time-reversed Markov chain can be a strange beast. The reversed PageRank surfer, for instance, follows links “upstream,” traveling from the linked site to the linking site. As such, our hypothetical website owner could get a good sense of where their traffic is coming from by initializing the reversed chain at their website and following the chain one step back.

Reversible Markov Chains

We now have two different Markov chains: the original and its time-reversal. Call a Markov chain reversible if these processes are the same. That is, if the transition probabilities are the same:

$$p_{ij}^{\rm rev} = p_{ij} \quad \text{for all } i, j.$$

Using our formula (Rev) for the reversed transition probability, the reversibility condition can be written more concisely as

$$\rho_i \, p_{ij} = \rho_j \, p_{ji} \quad \text{for all } i, j.$$
This condition is referred to as detailed balance.3There is an abstruse—but useful—way of reformulating the detailed balance condition. Think of a vector $f$ as defining a function on the set of states $\{1, \ldots, n\}$, $i \mapsto f_i$. Letting $x$ denote a random variable drawn from the stationary distribution $\rho$, we can define a non-standard inner product on $\mathbb{R}^n$: $\langle f, g \rangle_\rho = \mathbb{E}[f(x) g(x)]$. Then the Markov chain is reversible if and only if detailed balance holds if and only if $P$ is a self-adjoint operator on $\mathbb{R}^n$ when equipped with the non-standard inner product $\langle \cdot, \cdot \rangle_\rho$. This more abstract characterization has useful consequences. For instance, by the spectral theorem, the transition matrix $P$ of a reversible Markov chain has real eigenvalues and supports a basis of orthonormal eigenvectors (in the $\langle \cdot, \cdot \rangle_\rho$ inner product). In words, detailed balance states that a Markov chain is reversible if, when initialized in the stationary distribution, the flow of probability mass from $i$ to $j$ (that is, $\rho_i \, p_{ij}$) is equal to the flow of probability mass from $j$ to $i$ (that is, $\rho_j \, p_{ji}$).

Many interesting Markov chains are reversible. One class of examples are Markov chain models of physical and chemical processes. Since physical laws like classical and quantum mechanics are reversible in time, so too should we expect Markov chain models built from these theories to be reversible.

Not every interesting Markov chain is reversible, however. Indeed, except in special cases, the PageRank Markov chain is not reversible. If $i$ links to $j$ but $j$ does not link to $i$, then the flow of mass from $i$ to $j$ will be higher than the flow from $j$ to $i$.

Before moving on, we note one useful fact about reversible Markov chains. Suppose a reversible, primitive Markov chain satisfies the detailed balance condition with a probability distribution $\sigma$:

$$\sigma_i \, p_{ij} = \sigma_j \, p_{ji} \quad \text{for all } i, j.$$

Then $\sigma$ is the stationary distribution of this chain. To see why, we check the stationarity condition $\sigma^\top = \sigma^\top P$. Indeed, for every $j$,

$$(\sigma^\top P)_j = \sum_{i=1}^n \sigma_i \, p_{ij} = \sum_{i=1}^n \sigma_j \, p_{ji} = \sigma_j.$$

The second equality is detailed balance, and the third equality is just the condition that the sum of the transition probabilities from $j$ to each state $i$ is one. Thus, $\sigma^\top P = \sigma^\top$ and $\sigma$ is a stationary distribution for $P$. But a primitive chain has only one stationary distribution $\rho$, so $\sigma = \rho$.
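This fact is easy to check numerically. Here is a small sketch using a lazy random walk on a path graph, a standard textbook example (not from the post): from each node, stay put with probability $1/2$ or move to a uniformly random neighbor, so the degree-proportional distribution satisfies detailed balance.

```python
import numpy as np

# Lazy random walk on the path 0 - 1 - 2 (laziness makes the chain primitive).
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])

degrees = np.array([1.0, 2.0, 1.0])
sigma = degrees / degrees.sum()  # candidate distribution, proportional to degree

# Detailed balance: the flow matrix sigma_i * p_ij is symmetric.
flows = sigma[:, None] * P
print(np.allclose(flows, flows.T))  # True

# Detailed balance implies stationarity: sigma^T P = sigma^T.
print(np.allclose(sigma @ P, sigma))  # True
```

Here detailed balance is verified entry by entry, and stationarity follows exactly as in the computation above.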

Markov Chains as Algorithms

Markov chains are an amazingly flexible tool. One use of Markov chains is more scientific: Given a system in the real world, we can model it by a Markov chain. By simulating the chain or by studying its mathematical properties, we can hope to learn about the system we’ve modeled.

Another use of Markov chains is algorithmic. Rather than thinking of the Markov chain as modeling some real-world process, we instead design the Markov chain to serve a computationally useful end. The PageRank surfer is one example. We wanted to rank the importance of websites, so we designed a Markov chain to achieve this task.

One task we can use Markov chains to solve is sampling problems. Suppose we have a complicated probability distribution $\rho$, and we want a random sample from $\rho$—that is, a random quantity $x$ such that $\mathbb{P}\{x = i\} = \rho_i$ for every $i$. One way to achieve this goal is to design a primitive Markov chain with stationary distribution $\rho$. Then, we run the chain for a large number of steps $t$ and use $x_t$ as an approximate sample from $\rho$.

To design a Markov chain with stationary distribution $\rho$, it is sufficient to generate transition probabilities $p_{ij}$ such that $\rho$ and $p$ satisfy the detailed balance condition $\rho_i \, p_{ij} = \rho_j \, p_{ji}$. Then, we are guaranteed that $\rho$ is a stationary distribution for the chain. (We also should check the primitiveness condition, but this is often straightforward.)

Here is an effective way of building a Markov chain to sample from a distribution $\rho$. Suppose that the chain is in state $i$ at time $t$, $x_t = i$. To choose the next state, we begin by sampling a candidate $j$ from a proposal distribution $q_{i1}, \ldots, q_{in}$. The proposal distribution can be almost anything we like, as long as it satisfies three conditions:

• Probability distribution. For every $i$, the proposal probabilities add to one: $\sum_{j=1}^n q_{ij} = 1$.
• Bidirectional. If $q_{ij} > 0$, then $q_{ji} > 0$.
• Primitive. The transition probabilities $q_{ij}$ form a primitive Markov chain.

In order to sample from the correct distribution, we can’t just accept every proposal. Rather, given the proposal $j$, we accept with probability

$$a_{ij} = \min\left\{1, \frac{\rho_j \, q_{ji}}{\rho_i \, q_{ij}}\right\}.$$

If we accept the proposal, the next state of our chain is $x_{t+1} = j$. Otherwise, we stay where we are, $x_{t+1} = i$. This Markov chain is known as a Metropolis–Hastings sampler.

For clarity, we list the steps of the Metropolis–Hastings sampler explicitly:

1. Initialize the chain in any state $x_0$ and set $t := 0$.
2. Draw a proposal $j$ from the proposal distribution, $\mathbb{P}\{j = j'\} = q_{x_t j'}$.
3. Compute the acceptance probability

$$a = \min\left\{1, \frac{\rho_j \, q_{j x_t}}{\rho_{x_t} \, q_{x_t j}}\right\}.$$

4. With probability $a$, set $x_{t+1} := j$. Otherwise, set $x_{t+1} := x_t$.
5. Set $t := t + 1$ and go back to step 2.
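The steps above can be sketched in a few lines of code. This is a minimal illustration, not from the original post: the three-state target distribution and the uniform proposal are made-up choices (with a uniform proposal, the $q$'s cancel in the acceptance probability).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical target distribution rho on states {0, 1, 2}.
rho = np.array([0.2, 0.3, 0.5])
n = len(rho)

def mh_step(x):
    """One Metropolis-Hastings step with a uniform proposal q_ij = 1/n."""
    j = rng.integers(n)                    # draw a proposal uniformly
    a = min(1.0, rho[j] / rho[x])          # acceptance probability (q's cancel)
    return j if rng.random() < a else x    # accept, or stay where we are

# Run the chain and record how often each state is visited.
x, counts = 0, np.zeros(n)
for t in range(200_000):
    x = mh_step(x)
    counts[x] += 1
freqs = counts / counts.sum()
print(freqs)  # should be close to rho
```

The long-run visit frequencies match $\rho$, as the detailed balance argument below guarantees.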

To check that $\rho$ is a stationary distribution of the Metropolis–Hastings sampler, all we need to do is check detailed balance. Note that, for $i \ne j$, the probability of transitioning from $i$ to $j$ under the Metropolis–Hastings sampler is the proposal probability times the acceptance probability:

$$p_{ij} = q_{ij} \, a_{ij} = q_{ij} \min\left\{1, \frac{\rho_j \, q_{ji}}{\rho_i \, q_{ij}}\right\}.$$

Detailed balance is confirmed by a short computation:4Note that the detailed balance condition for $i = j$ is always satisfied for any Markov chain.

$$\rho_i \, p_{ij} = \rho_i \, q_{ij} \min\left\{1, \frac{\rho_j \, q_{ji}}{\rho_i \, q_{ij}}\right\} = \min\{\rho_i \, q_{ij}, \rho_j \, q_{ji}\} = \rho_j \, q_{ji} \min\left\{\frac{\rho_i \, q_{ij}}{\rho_j \, q_{ji}}, 1\right\} = \rho_j \, p_{ji}.$$

Thus, the Metropolis–Hastings sampler has $\rho$ as its stationary distribution.

Determinantal Point Processes: Diverse Items from a Collection

The uses of Markov chains in science, engineering, math, computer science, and machine learning are vast. I wanted to wrap up with one application that I find particularly neat.

Suppose I run a bakery and I sell $n$ different baked goods. I want to pick out $k$ special items for a display window to lure customers into my store. As a first approach, I might pick my top-$k$ selling items for the window. But then I realize that there’s a problem. All of my top sellers are muffins, so all of the items in my display window are muffins. My display window is doing a good job of luring in muffin-lovers, but a bad job of enticing lovers of other baked goods. In addition to rating the popularity of each item, I should also promote diversity in the items I select for my shop window.

Here’s a creative solution to my display case problem using linear algebra. Suppose that, rather than just looking at a list of the sales of each item, I define a matrix $A$ for my baked goods. In the $i$th diagonal entry $a_{ii}$ of my matrix, I write the number of sales for baked good $i$. I populate the off-diagonal entries $a_{ij}$ of my matrix with a measure of similarity between items $i$ and $j$.5There are many ways of defining such a similarity matrix. Here is one way. Let $s_i$ be the number ordered of each bakery item $i$ by a random customer. Set $C$ to be the correlation matrix of the random variables $s_1, \ldots, s_n$, with $c_{ij}$ being the correlation between the random variables $s_i$ and $s_j$. The matrix $C$ has all ones on its diagonal. The off-diagonal entries measure the amount that items $i$ and $j$ tend to be purchased together. Let $D$ be a diagonal matrix where $d_{ii}$ is the total sales of item $i$. Set $A := DCD$. By scaling $C$ by the diagonal matrix $D$, the diagonal entries of $A$ represent the popularity of each item, whereas the off-diagonal entries still represent correlations, now scaled by popularity. So if $i$ and $j$ are both muffins, $a_{ij}$ will be large. But if $i$ is a muffin and $j$ is a cookie, then $a_{ij}$ will be small. For mathematical reasons, we require $A$ to be symmetric and positive definite.

To populate my display case, I choose a random subset $S$ of $k$ items from my full menu $\{1, \ldots, n\}$ according to the following strange probability distribution: The probability of picking items $S$ is proportional to the determinant of the submatrix $A(S, S)$. More specifically,

$$\mathbb{P}\{S\} = \frac{\det A(S, S)}{\sum_{|T| = k} \det A(T, T)}. \qquad \text{($k$-DPP)}$$

Here, we let $A(S, S)$ denote the submatrix of $A$ consisting of the entries appearing in rows and columns $S$. Such a random subset $S$ is known as a $k$-determinantal point process ($k$-DPP). (See this survey for more about DPPs.)

To see why this makes any sense, let’s consider a simple example with $n = 3$ items and a display case of size $k = 2$. Suppose I have three items: a pumpkin muffin, a chocolate chip muffin, and an oatmeal raisin cookie, where both muffins are equally popular and much more popular than the cookie. However, the two muffins are similar to each other, and thus the corresponding submatrix $A(S, S)$ has a small determinant: the product of the large off-diagonal similarity entries is subtracted from the product of the diagonal popularity entries. By contrast, the cookie is dissimilar to each muffin, so for a set $S$ consisting of one muffin and the cookie, the off-diagonal entries of $A(S, S)$ are small and the determinant is higher. Thus, even though the muffins are more popular overall, choosing our display case from a $k$-DPP, we have an appreciable chance of choosing a muffin and a cookie for our display case. It is for this reason that we can say that a $k$-DPP preferentially selects for diverse items.
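To make the example concrete, here is a sketch with a hypothetical matrix $A$ exhibiting the stated structure (items 0 and 1 are popular, mutually similar muffins; item 2 is a less popular, dissimilar cookie); the numbers are made up, not taken from the post. The $k$-DPP probabilities are computed by brute force over all size-2 subsets.

```python
import numpy as np
from itertools import combinations

# Hypothetical bakery matrix: diagonal = popularity, off-diagonal = similarity.
A = np.array([[10.0, 9.0, 0.5],
              [9.0, 10.0, 0.5],
              [0.5, 0.5, 2.0]])

k = 2
subsets = list(combinations(range(3), k))
dets = np.array([np.linalg.det(A[np.ix_(S, S)]) for S in subsets])
probs = dets / dets.sum()  # the (k-DPP) formula, normalized by brute force
for S, p in zip(subsets, probs):
    print(S, round(p, 3))
```

The muffin-muffin set $\{0, 1\}$ has determinant $10 \cdot 10 - 9 \cdot 9 = 19$, while each muffin-cookie set has determinant $10 \cdot 2 - 0.5 \cdot 0.5 = 19.75$: despite the cookie's low popularity, each mixed set is slightly more likely than the all-muffin set.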

Is sampling from a $k$-DPP the best way of picking items for my display case? How does it compare to other possible methods?6Another method I’m partial to for this task is randomly pivoted Cholesky sampling, which is computationally cheaper than $k$-DPP sampling, even with the Markov chain approach to $k$-DPP sampling that we will discuss shortly. These are interesting questions for another time. For now, let us focus our attention on a different question: How would you sample from a $k$-DPP?

Determinantal Point Processes by Markov Chains

Sampling from a $k$-DPP is a hard computational problem. Indeed, there are $\binom{n}{k}$ possible $k$-element subsets of a set of $n$ items. The number of possibilities gets large fast: with just $100$ items and a display case of size $10$, there are already over 10 trillion possible combinations.

Markov chains offer one compelling way of sampling a -DPP. First, we need a proposal distribution. Let’s choose the simplest one we can think of:

Proposal for $k$-DPP sampling. Suppose our current set of items is $S$. To generate a proposal, choose a uniformly random element $i$ out of $S$ and a uniformly random element $j$ out of $\{1, \ldots, n\}$ without $S$. Propose the set $S'$ obtained from $S$ by replacing $i$ with $j$ (i.e., $S' = (S \setminus \{i\}) \cup \{j\}$).

Now, we need to compute the Metropolis–Hastings acceptance probability

$$a = \min\left\{1, \frac{\mathbb{P}\{S'\} \, q_{S' S}}{\mathbb{P}\{S\} \, q_{S S'}}\right\}.$$

For $S$ and $S'$ which differ only by the addition of one element and the removal of another, the proposal probabilities $q_{S S'}$ and $q_{S' S}$ are both equal to $1/(k(n - k))$. Using the formula for the probability of drawing $S$ from a $k$-DPP, we compute that

$$\frac{\mathbb{P}\{S'\}}{\mathbb{P}\{S\}} = \frac{\det A(S', S')}{\det A(S, S)}.$$

Thus, the Metropolis–Hastings acceptance probability is just a ratio of determinants:

$$a = \min\left\{1, \frac{\det A(S', S')}{\det A(S, S)}\right\}. \qquad \text{(Acc)}$$

And we’re done. Let’s summarize our sampling algorithm:

1. Choose an initial set $S_0$ arbitrarily and set $t := 0$.
2. Draw $i$ uniformly at random from $S_t$.
3. Draw $j$ uniformly at random from $\{1, \ldots, n\} \setminus S_t$.
4. Set $S' := (S_t \setminus \{i\}) \cup \{j\}$.
5. With probability $a$ defined in (Acc), accept and set $S_{t+1} := S'$. Otherwise, set $S_{t+1} := S_t$.
6. Set $t := t + 1$ and go to step 2.
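The swap chain above can be sketched as follows; the matrix $A$ and the step count are hypothetical choices for illustration, not values from the post.

```python
import numpy as np

rng = np.random.default_rng(1)

def kdpp_mh(A, k, steps=20_000):
    """Approximately sample a k-DPP via the swap chain described above."""
    n = A.shape[0]
    S = list(range(k))  # arbitrary initial set
    detS = np.linalg.det(A[np.ix_(S, S)])
    for _ in range(steps):
        i = rng.integers(k)                                    # position to swap out
        j = int(rng.choice([v for v in range(n) if v not in S]))  # element to swap in
        Sp = S.copy()
        Sp[i] = j
        detSp = np.linalg.det(A[np.ix_(Sp, Sp)])
        if rng.random() < min(1.0, detSp / detS):              # acceptance rule (Acc)
            S, detS = Sp, detSp
    return frozenset(S)

# Same hypothetical bakery matrix as before: two similar muffins, one cookie.
A = np.array([[10.0, 9.0, 0.5],
              [9.0, 10.0, 0.5],
              [0.5, 0.5, 2.0]])
sample = kdpp_mh(A, 2)
print(sample)
```

Running the sampler many times and tallying the results recovers the brute-force $k$-DPP probabilities; a single run returns one approximate sample.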

This is a remarkably simple algorithm to sample from a complicated distribution. And it’s fairly efficient as well. Analysis by Anari, Oveis Gharan, and Rezaei shows that, when you pick a good enough initial set $S_0$, this sampling algorithm produces approximate samples from a $k$-DPP in a modest number of steps.7They actually use a slight variant of this algorithm where the acceptance probabilities (Acc) are reduced by a factor of two. Observe that this still has the correct stationary distribution because detailed balance continues to hold. The extra factor is introduced to ensure that the Markov chain is primitive. Remarkably, if $k$ is much smaller than $n$, this Markov chain-based algorithm samples from a $k$-DPP without even looking at all $n^2$ entries of the matrix $A$!

Upshot. Markov chains are a simple and general model for a state evolving randomly in time. Under mild conditions, Markov chains converge to a stationary distribution: In the limit of a large number of steps, the state of the system becomes randomly distributed in a way independent of how it was initialized. We can use Markov chains as algorithms to approximately sample from challenging distributions.

Big Ideas in Applied Math: Concentration Inequalities

This post is about randomized algorithms for problems in computational science and a powerful set of tools, known as concentration inequalities, which can be used to analyze why they work. I’ve discussed why randomization can help in solving computational problems in a previous post; this post continues this discussion by presenting an example of a computational problem where, somewhat surprisingly, a randomized algorithm proves effective. We shall then use concentration inequalities to analyze why this method works.

Triangle Counting

Let’s begin our discussion of concentration inequalities by means of an extended example. Consider the following question: How many triangles are there in the Facebook network? That is, how many trios of people are there who are all mutual friends? While seemingly silly at first sight, this is actually a natural and meaningful question about the structure of the Facebook social network and is related to similar questions such as “How likely are two friends of a person to also be friends with each other?”

If there are $n$ people on the Facebook graph, then the natural algorithm of iterating over all $\binom{n}{3}$ triplets and checking whether they form a triangle is far too computationally costly for the billions of Facebook accounts. Somehow, we want to go much faster than this, and to achieve this speed we would be willing to settle for an estimate of the triangle count up to some error.

There are many approaches to this problem, but let’s describe a particularly surprising algorithm. Let $A$ be an $n \times n$ matrix where the $ij$th entry of $A$ is $1$ if users $i$ and $j$ are friends and $0$ otherwise1All of the diagonal entries of $A$ are set to zero.; this matrix is called the adjacency matrix of the Facebook graph. A fact from graph theory is that the $ij$th entry of the cube $A^3$ of the matrix counts the number of paths from user $i$ to user $j$ of length three.2By a path of length three, we mean a sequence of users $i, j, k, \ell$ where $i$ and $j$, $j$ and $k$, and $k$ and $\ell$ are all friends. In particular, the $ii$th entry of $A^3$ denotes the number of paths from $i$ to itself of length $3$, which is twice the number of triangles incident on $i$. (The paths $i \to j \to k \to i$ and $i \to k \to j \to i$ are both counted as paths of length 3 for a triangle consisting of $i$, $j$, and $k$.) Therefore, the trace of $A^3$, equal to the sum of its diagonal entries, is six times the number of triangles: The $ii$th entry of $A^3$ is twice the number of triangles incident on $i$, and each triangle is counted thrice in the $ii$th, $jj$th, and $kk$th entries of $A^3$. In summary, we have

$$\operatorname{tr}(A^3) = \sum_{i=1}^n (A^3)_{ii} = 6 \cdot (\text{number of triangles}).$$

Therefore, the triangle counting problem is equivalent to computing the trace of $A^3$. Unfortunately, the problem of computing $A^3$ is, in general, very computationally costly. Therefore, we seek ways of estimating the trace of a matrix without forming it.

Randomized Trace Estimation

Motivated by the triangle counting problem from the previous section, we consider the problem of estimating the trace of a matrix $B$. We assume that we only have access to the matrix $B$ through matrix–vector products; that is, we can efficiently compute $Bx$ for a vector $x$. For instance, in the previous example, the Facebook graph has many fewer friend relations (edges) than the maximum possible amount of $\binom{n}{2}$. Therefore, the adjacency matrix $A$ is sparse; in particular, matrix–vector multiplications with $A$ can be computed in around as many operations as there are edges. To compute matrix–vector products with $B = A^3$, we simply compute matrix–vector products with $A$ three times, $Bx = A(A(Ax))$.

Here’s a very nifty idea to estimate the trace of $B$ using only matrix–vector products, originally due to Didier A. Girard and Michael F. Hutchinson. Choose $x$ to be a random vector whose entries are independent $\pm 1$ values, where each value $+1$ and $-1$ occurs with equal probability. Then form the expression $x^\top B x = \sum_{i,j} x_i x_j B_{ij}$. Since the entries of $x$ are independent, the expectation of $x_i x_j$ is $0$ for $i \ne j$ and $1$ for $i = j$. Consequently, by linearity of expectation, the expected value of $x^\top B x$ is

$$\mathbb{E}[x^\top B x] = \sum_{i,j} B_{ij} \, \mathbb{E}[x_i x_j] = \sum_{i=1}^n B_{ii} = \operatorname{tr}(B).$$

The average value of $x^\top B x$ is equal to the trace of $B$! In the language of statistics, we might say that $x^\top B x$ is an unbiased estimator for $\operatorname{tr}(B)$. Thus, the efficiently computable quantity $x^\top B x$ can serve as a (crude) estimate for $\operatorname{tr}(B)$.

While the expectation of $x^\top B x$ equals $\operatorname{tr}(B)$, any random realization of $x^\top B x$ can deviate from $\operatorname{tr}(B)$ by a non-negligible amount. Thus, to reduce the variability of the estimator, it is appropriate to take an average of multiple copies of this random estimate. Specifically, we draw random vectors $x_1, \ldots, x_m$, each with independent random $\pm 1$ entries, and compute the averaged trace estimator

$$T_m := \frac{1}{m} \sum_{i=1}^m x_i^\top B x_i. \qquad (1)$$

The $m$-sample trace estimator $T_m$ remains an unbiased estimator for $\operatorname{tr}(B)$, $\mathbb{E}[T_m] = \operatorname{tr}(B)$, but with reduced variability. Quantitatively, the variance of $T_m$ is $m$ times smaller than that of the single-sample estimator $x^\top B x$:

$$\operatorname{Var}(T_m) = \frac{1}{m} \operatorname{Var}(x^\top B x). \qquad (2)$$
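Here is a minimal sketch of the averaged estimator (1); the dense random test matrix and the sample count are arbitrary illustrative choices (in the triangle application, `matvec` would instead apply $A$ three times to exploit sparsity).

```python
import numpy as np

rng = np.random.default_rng(42)

def hutchinson(matvec, n, m):
    """Girard-Hutchinson trace estimator: average of x^T B x over m
    random sign vectors, using only matrix-vector products with B."""
    total = 0.0
    for _ in range(m):
        x = rng.choice([-1.0, 1.0], size=n)  # independent +/-1 entries
        total += x @ matvec(x)
    return total / m

# Demo on an explicit random matrix so we can compare with the exact trace.
B = rng.standard_normal((100, 100))
est = hutchinson(lambda x: B @ x, 100, m=1000)
print(est, np.trace(B))  # estimate vs. exact trace
```

Note that the estimator never reads the entries of $B$ directly; it only applies $B$ to vectors.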

The Girard–Hutchinson trace estimator gives a natural way of estimating the trace of the matrix $B$, a task which might otherwise be hard without randomness.3To illustrate what randomness is buying us here, it might be instructive to think about how one might try to estimate the trace of $B$ via matrix–vector products without the help of randomness. For the trace estimator to be a useful tool, an important question remains: How many samples $m$ are needed to compute $\operatorname{tr}(B)$ to a given accuracy? Concentration inequalities answer questions of this nature.

Concentration Inequalities

A concentration inequality provides a bound on the probability a random quantity is significantly larger or smaller than its typical value. Concentration inequalities are useful because they allow us to prove statements like “With at least 99% probability, the randomized trace estimator with 100 samples produces an approximation of the trace which is accurate up to some specified error tolerance.” In other words, concentration inequalities can provide quantitative estimates of the likely size of the error when a randomized algorithm is executed.

In this section, we shall introduce a handful of useful concentration inequalities, which we will apply to the randomized trace estimator in the next section. We’ll then discuss how these and other concentration inequalities can be derived in the following section.

Markov’s Inequality

Markov’s inequality is the most fundamental concentration inequality. When used directly, it is a blunt instrument, requiring little insight to use and producing a crude but sometimes useful estimate. However, as we shall see later, all of the sophisticated concentration inequalities that will follow in this post can be derived from a careful use of Markov’s inequality.

The wide utility of Markov’s inequality is a consequence of the minimal assumptions needed for its use. Let $X$ be any nonnegative random variable. Markov’s inequality states that the probability that $X$ exceeds a level $t > 0$ is bounded by the expected value of $X$ divided by $t$. In equations, we have

$$\mathbb{P}\{X \ge t\} \le \frac{\mathbb{E}[X]}{t}. \qquad (3)$$

We stress the fact that we make no assumptions on how the random quantity $X$ is generated other than that $X$ is nonnegative.

As a short example of Markov’s inequality, suppose we have a randomized algorithm which takes one second on average to run. Markov’s inequality then shows that the probability the algorithm takes more than 100 seconds to run is at most $1/100 = 1\%$. This small example shows both the power and the limitation of Markov’s inequality. On the negative side, our analysis suggests that we might have to wait as much as 100 times the average runtime for the algorithm to complete running with 99% probability; this huge multiple of 100 seems quite pessimistic. On the other hand, we needed no information whatsoever about how the algorithm works to do this analysis. In general, Markov’s inequality cannot be improved without more assumptions on the random variable $X$.4For instance, imagine an algorithm which 99% of the time completes instantly and 1% of the time takes 100 seconds. This algorithm does have an average runtime of 1 second, but the conclusion of Markov’s inequality that the runtime of the algorithm can be as much as 100 times the average runtime with 1% probability is true.
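To see the bound in action, here is a small simulation sketch; the exponentially distributed "runtime" is a made-up workload chosen only because its mean is one second, not something from the post.

```python
import random

random.seed(0)

# Hypothetical workload: runtimes drawn from an exponential distribution
# with mean 1 second.
samples = [random.expovariate(1.0) for _ in range(100_000)]
mean = sum(samples) / len(samples)

# Markov's inequality guarantees P(X >= 100) <= E[X] / 100 = 1%.
frac_over = sum(s >= 100 for s in samples) / len(samples)
print(mean, frac_over)  # empirical mean near 1; tail far below the 1% bound
```

For this particular distribution the true tail probability is astronomically smaller than Markov's 1% bound, illustrating how blunt the inequality can be while still being valid.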

Chebyshev’s Inequality and Averages

The variance of a random variable describes the expected size of a random variable’s deviation from its expected value. As such, we would expect that the variance should provide a bound on the probability a random variable is far from its expectation. This intuition indeed is correct and is manifested by Chebyshev’s inequality. Let $X$ be a random variable (with finite expected value) and $t > 0$. Chebyshev’s inequality states that the probability that $X$ deviates from its expected value by more than $t$ is at most $\operatorname{Var}(X)/t^2$:

$$\mathbb{P}\{|X - \mathbb{E}[X]| \ge t\} \le \frac{\operatorname{Var}(X)}{t^2}. \qquad (4)$$

Chebyshev’s inequality is frequently applied to sums or averages of independent random quantities. Suppose $X_1, \ldots, X_m$ are independent and identically distributed random variables with mean $\mu$ and variance $\sigma^2$, and let $\overline{X}$ denote the average

$$\overline{X} = \frac{1}{m} \sum_{i=1}^m X_i.$$

Since the random variables $X_1, \ldots, X_m$ are independent,5In fact, this calculation works if $X_1, \ldots, X_m$ are only pairwise independent or even pairwise uncorrelated. For algorithmic applications, this means that $X_1, \ldots, X_m$ don’t have to be fully independent of each other; we just need any pair of them to be uncorrelated. This allows many randomized algorithms to be “derandomized“, reducing the amount of “true” randomness needed to execute an algorithm. the properties of variance entail that

$$\operatorname{Var}(\overline{X}) = \frac{1}{m^2} \sum_{i=1}^m \operatorname{Var}(X_i) = \frac{\sigma^2}{m},$$

where we use the fact that the variance of a sum of independent random variables is the sum of their variances. Therefore, by Chebyshev’s inequality,

$$\mathbb{P}\{|\overline{X} - \mu| \ge t\} \le \frac{\sigma^2}{m t^2}. \qquad (5)$$

Suppose we want to estimate the mean $\mu$ by $\overline{X}$ up to error $\epsilon$ and are willing to tolerate a failure probability of $\delta$. Then setting $t := \epsilon$ and the right-hand side of (5) to $\delta$, Chebyshev’s inequality suggests that we need at most

$$m = \frac{\sigma^2}{\epsilon^2 \delta} \qquad (6)$$

samples to achieve this goal.

Exponential Concentration: Hoeffding and Bernstein

How happy should we be with the result (6) of applying Chebyshev’s inequality to the average $\overline{X}$? The central limit theorem suggests that $\overline{X}$ should be approximately normally distributed with mean $\mu$ and variance $\sigma^2/m$. Normal random variables have an exponentially small probability of being more than a few standard deviations above their mean, so it is natural to expect this should be true of $\overline{X}$ as well. Specifically, we expect a bound roughly like

$$\mathbb{P}\{|\overline{X} - \mu| \ge t\} \lessapprox 2 \exp\left(-\frac{m t^2}{2 \sigma^2}\right). \qquad (7)$$

Unfortunately, we don’t have a general result quite this nice without additional assumptions, but there are a diverse array of exponential concentration inequalities available which are quite useful in analyzing sums (or averages) of independent random variables that appear in applications.

Hoeffding’s inequality is one such bound. Let $X_1, \ldots, X_m$ be independent (but not necessarily identically distributed) random variables and consider the average $\overline{X} = \frac{1}{m} \sum_{i=1}^m X_i$. Hoeffding’s inequality makes the assumption that the summands are bounded, say within an interval $[a, b]$.6There are also more general versions of Hoeffding’s inequality where the bound on each random variable is different. Hoeffding’s inequality then states that

$$\mathbb{P}\{|\overline{X} - \mathbb{E}[\overline{X}]| \ge t\} \le 2 \exp\left(-\frac{2 m t^2}{(b - a)^2}\right). \qquad (8)$$

Hoeffding’s inequality is quite similar to the ideal concentration result (7) except with the variance $\sigma^2$ replaced by the potentially much larger quantity7Note that $\sigma^2$ is always smaller than or equal to $(b-a)^2/4$. $(b - a)^2/4$.

Bernstein’s inequality fixes this deficit in Hoeffding’s inequality at a small cost. Now, instead of assuming $X_1,\ldots,X_n$ are bounded within the interval $[a,b]$, we make the alternate boundedness assumption $|X_i - \mathbb{E}X_i| \le B$ for every $1 \le i \le n$. We continue to denote $\sigma^2 = \operatorname{Var}(X_i)$ so that if $X_1,\ldots,X_n$ are identically distributed, $\sigma^2$ denotes the variance of each of $X_1,\ldots,X_n$. Bernstein’s inequality states that

$$\mathbb{P}\left( |\overline{X} - \mu| \ge t \right) \le 2\exp\left( -\frac{nt^2}{2\sigma^2 + \tfrac{2}{3}Bt} \right). \tag{9}$$

For small values of $t$, Bernstein’s inequality yields exactly the kind of concentration that we would hope for from our central limit theorem heuristic (7). However, for large values of $t$, we have roughly

$$\mathbb{P}\left( |\overline{X} - \mu| \ge t \right) \lesssim 2\exp\left( -\frac{3nt}{2B} \right),$$

which is exponentially small in $nt$ rather than $nt^2$. We conclude that Bernstein’s inequality provides sharper bounds than Hoeffding’s inequality for smaller values of $t$ but weaker bounds for larger values of $t$.

Chebyshev vs. Hoeffding vs. Bernstein

Let’s return to the situation where we seek to estimate the mean $\mu$ of independent and identically distributed random variables $X_1,\ldots,X_n$ each with variance $\sigma^2$ by using the averaged value $\overline{X} = (X_1 + \cdots + X_n)/n$. Our goal is to bound how many samples $n$ we need to estimate $\mu$ up to error $\epsilon$, $|\overline{X} - \mu| \le \epsilon$, except with failure probability at most $\delta$. Using Chebyshev’s inequality, we showed that (see (6))

$$n = \frac{\sigma^2}{\epsilon^2 \delta} \text{ samples suffice.}$$

Now, let’s try using Hoeffding’s inequality. Suppose that $X_1,\ldots,X_n$ are bounded in the interval $[a,b]$. Then Hoeffding’s inequality (8) shows that

$$n = \frac{(b-a)^2}{2\epsilon^2} \log\frac{2}{\delta} \text{ samples suffice.}$$

Bernstein’s inequality states that if $X_1,\ldots,X_n$ lie in the interval $[\mathbb{E}X_i - B, \mathbb{E}X_i + B]$ for every $1 \le i \le n$, then

$$n = \left( \frac{2\sigma^2}{\epsilon^2} + \frac{2B}{3\epsilon} \right) \log\frac{2}{\delta} \text{ samples suffice.} \tag{10}$$

Hoeffding’s and Bernstein’s inequalities show that roughly $\epsilon^{-2}\log(1/\delta)$ samples are needed rather than roughly $\epsilon^{-2}\delta^{-1}$. The fact that we need a number of samples proportional to $1/\epsilon^2$ to achieve error $\epsilon$ is a consequence of the central limit theorem and is something we would not be able to improve with any concentration inequality. What exponential concentration inequalities allow us to do is to improve the dependence on the failure probability from proportional to $1/\delta$ to $\log(1/\delta)$, which is a huge improvement.

Hoeffding’s and Bernstein’s inequalities both have a small drawback. For Hoeffding’s inequality, the constant of proportionality is $(b-a)^2/4$ rather than the true variance $\sigma^2$ of the summands. Bernstein’s inequality gives us the “correct” constant of proportionality but adds a second term proportional to $1/\epsilon$; for small values of $\epsilon$, this term is dominated by the term proportional to $1/\epsilon^2$, but the second term can be relevant for larger values of $\epsilon$.
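To make the comparison concrete, here is a small sketch computing the three sample counts side by side. The formulas use the standard textbook constants, which may differ slightly from the conventions in this post, and the parameter choices are illustrative:

```python
import math

def n_chebyshev(sigma2, eps, delta):
    # From setting sigma^2 / (n eps^2) equal to delta
    return math.ceil(sigma2 / (eps**2 * delta))

def n_hoeffding(a, b, eps, delta):
    # From P(|avg - mean| >= eps) <= 2 exp(-2 n eps^2 / (b - a)^2)
    return math.ceil((b - a)**2 / (2 * eps**2) * math.log(2 / delta))

def n_bernstein(sigma2, B, eps, delta):
    # From P(|avg - mean| >= eps) <= 2 exp(-n eps^2 / (2 sigma2 + 2 B eps / 3))
    return math.ceil((2 * sigma2 / eps**2 + 2 * B / (3 * eps)) * math.log(2 / delta))

# Uniform[0, 1]-style parameters: variance 1/12, values in [0, 1], deviations <= 1/2
eps, delta = 0.01, 1e-6
print(n_chebyshev(1 / 12, eps, delta))      # grows like 1/delta
print(n_hoeffding(0, 1, eps, delta))        # only log(1/delta)
print(n_bernstein(1 / 12, 0.5, eps, delta)) # log(1/delta), with the true variance
```

For a small failure probability like $\delta = 10^{-6}$, the Chebyshev count is hundreds of millions while the exponential inequalities require only tens of thousands of samples.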

There is a panoply of additional concentration inequalities beyond the few we’ve mentioned. We give a selected overview in the following optional section.

Other Concentration Inequalities
There are a handful of further exponential concentration inequalities for sums of independent random variables, such as Chernoff’s inequality (very useful for sums of bounded, positive random variables) and Bennett’s inequality. There are also generalizations of Hoeffding’s, Chernoff’s, and Bernstein’s inequalities for unbounded random variables with subgaussian and subexponential tail decay; these results are documented in Chapter 2 of Roman Vershynin’s excellent book High-Dimensional Probability.

One can also generalize concentration inequalities to so-called martingale sequences, which can be very useful for analyzing adaptive algorithms. These inequalities can often have the advantage of bounding the probability that a martingale sequence ever deviates by some amount from its initial value; these results are called maximal inequalities. Maximal analogs of Markov’s and Chebyshev’s inequalities are given by Ville’s inequality and Doob’s inequality. Exponential concentration inequalities include the Hoeffding–Azuma inequality and Freedman’s inequality.

Finally, we note that there are many concentration inequalities for functions of independent random variables other than sums, usually under the assumption that the function is Lipschitz continuous. There are exponential concentration inequalities for functions with “bounded differences”, functions of Gaussian random variables, and convex functions of bounded random variables. References for these results include Chapters 3 and 4 of the lecture notes Probability in High Dimension by Ramon van Handel and the comprehensive monograph Concentration Inequalities by Stéphane Boucheron, Gábor Lugosi, and Pascal Massart.

Analysis of Randomized Trace Estimator

Let us apply some of the concentration inequalities we introduced in the last section to analyze the randomized trace estimator. Our goal is not to provide the best possible analysis of the trace estimator,8More precise estimation for trace estimation applied to positive semidefinite matrices was developed by Gratton and Titley-Peloquin; see Theorem 4.5 of the following survey. but to demonstrate how the general concentration inequalities we’ve developed can be useful “out of the box” in analyzing algorithms.

In order to apply Chebyshev’s and Bernstein’s inequalities, we shall need to compute or bound the variance of the single-sample trace estimator, where the input is a random vector of independent $\pm 1$ values. This is a straightforward task using properties of the variance:

Here, $\operatorname{Cov}$ is the covariance and $\|\cdot\|_F$ is the matrix Frobenius norm. Chebyshev’s inequality (5) then gives

Let’s now try applying an exponential concentration inequality. We shall use Bernstein’s inequality, for which we need to bound . By the Courant–Fischer minimax principle, we know that is between and where and are the smallest and largest eigenvalues of and is the Euclidean norm of the vector . Since all the entries of have absolute value , we have so is between and . Since the trace equals the sum of the eigenvalues of , is also between and . Therefore,

where denotes the matrix spectral norm. Therefore, by Bernstein’s inequality (9), we have

In particular, (10) shows that

samples suffice to estimate to error with failure probability at most . Concentration inequalities easily furnish estimates for the number of samples needed for the randomized trace estimator.
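As a concrete companion to this analysis, here is a minimal sketch of the randomized trace estimator in Python using NumPy. The function name, the symmetric Gaussian test matrix, and the sample count are my own illustrative choices:

```python
import numpy as np

def trace_estimate(matvec, dim, num_samples, rng):
    """Average of x^T (A x) over independent random sign vectors x, where
    matvec applies the matrix A and each entry of x is an independent +/-1."""
    total = 0.0
    for _ in range(num_samples):
        x = rng.choice([-1.0, 1.0], size=dim)
        total += x @ matvec(x)
    return total / num_samples

rng = np.random.default_rng(0)
G = rng.standard_normal((100, 100))
A = G + G.T  # symmetric test matrix, chosen only for illustration
estimate = trace_estimate(lambda v: A @ v, 100, 1000, rng)
print(estimate, np.trace(A))
```

Because the estimator only touches the matrix through matrix–vector products, the same code works when the matrix is available only implicitly, as in the triangle counting application.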

We have now accomplished our main goal of using concentration inequalities to analyze the randomized trace estimator, which in turn can be used to solve the triangle counting problem. We leave some additional comments on trace estimation and triangle counting in the following bonus section.

More on Trace Estimation and Triangle Counting
To really complete the analysis of the trace estimator in an application (e.g., triangle counting), we would need to obtain bounds on the variance and norm quantities appearing above. Since we often don’t know good bounds for these quantities, one should really use the trace estimator together with a posteriori error estimates, which provide a confidence interval for the trace rather than a point estimate; see sections 4.5 and 4.6 in this survey for details.

One can improve on the Girard–Hutchinson trace estimator by using a variance reduction technique. One such variance reduction technique was recently proposed under the name Hutch++, extending ideas by Arjun Singh Gambhir, Andreas Stathopoulos, and Kostas Orginos and Lin Lin. In effect, these techniques improve the number of samples needed to estimate the trace of a positive semidefinite matrix to relative error to proportional to down from .

Several algorithms have been proposed for triangle counting, many of them randomized. This survey gives a comparison of different methods for the triangle counting problem, and also describes more motivation and applications for the problem.

Deriving Concentration Inequalities

Having introduced concentration inequalities and applied them to the randomized trace estimator, we now turn to the question of how to derive concentration inequalities. Learning how to derive concentration inequalities is more than a matter of mathematical completeness since one can often obtain better results by “hand-crafting” a concentration inequality for a particular application rather than applying a known concentration inequality. (Though standard concentration inequalities like Hoeffding’s and Bernstein’s often give perfectly adequate answers with much less work.)

Markov’s Inequality

At the most fundamental level, concentration inequalities require us to bound a probability by an expectation. In achieving this goal, we shall make a simple observation: The probability that a random variable $X$ is larger than or equal to $t$ is the expectation of the random variable $\mathbb{1}_{X \ge t}$.9More generally, the probability of an event can be written as an expectation of the indicator random variable of that event. Here, $\mathbb{1}_{x \ge t}$ is an indicator function which outputs one if its input is larger than or equal to $t$ and zero otherwise.

As promised, the probability $X$ is larger than $t$ is the expectation of $\mathbb{1}_{X \ge t}$:

$$\mathbb{P}(X \ge t) = \mathbb{E}\left[ \mathbb{1}_{X \ge t} \right]. \tag{11}$$

We can now obtain bounds on the probability that $X \ge t$ by bounding its corresponding indicator function. In particular, for nonnegative arguments we have the inequality

$$\mathbb{1}_{x \ge t} \le \frac{x}{t} \quad \text{for all } x \ge 0. \tag{12}$$

Since $X$ is nonnegative, combining equations (11) and (12) gives Markov’s inequality:

$$\mathbb{P}(X \ge t) \le \frac{\mathbb{E}[X]}{t}.$$
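Markov’s inequality is easy to see in action empirically. In the sketch below (exponential samples are my own illustrative choice), the fraction of samples at least $t$ never exceeds the sample mean divided by $t$; indeed, this is exactly Markov’s inequality applied to the empirical distribution of the samples:

```python
import random

rng = random.Random(0)
samples = [rng.expovariate(1.0) for _ in range(100_000)]  # nonnegative, mean about 1
mean = sum(samples) / len(samples)
for t in [2.0, 5.0, 10.0]:
    tail = sum(s >= t for s in samples) / len(samples)  # empirical P(X >= t)
    print(t, tail, mean / t)                            # tail never exceeds mean / t
```

The bound is loose, often by orders of magnitude, which is precisely the motivation for pushing beyond Markov’s inequality in the rest of this section.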

Chebyshev’s Inequality

Before we get to Chebyshev’s inequality proper, let’s think about how we can push Markov’s inequality further. Suppose we find a bound on the indicator function of the form

$$\mathbb{1}_{x \ge t} \le \phi(x) \quad \text{for all } x. \tag{13}$$

A bound of this form immediately leads to bounds on $\mathbb{P}(X \ge t)$ by (11). To obtain sharp and useful bounds on $\mathbb{P}(X \ge t)$ we seek bounding functions in (13) with three properties:

1. For $x < t$, the bounding function should be close to zero,
2. For $x \ge t$, the bounding function should be close to one, and
3. We need the expectation of the bounding function applied to $X$ to be easily computable or boundable.

These three objectives are in tension with each other. To meet criterion 3, we must restrict our attention to pedestrian functions such as powers or exponentials for which we have hopes of computing or bounding the expectation for random variables we encounter in practical applications. But these candidate functions have the undesirable property that making the function smaller on $x < t$ (by increasing the power or exponential rate) to meet point 1 makes the function larger on $x \ge t$, detracting from our ability to achieve point 2. We shall eventually come up with a best-possible resolution to this dilemma by formulating this as an optimization problem to determine the best choice of the parameter to obtain the best possible candidate function of the given form.

Before we get ahead of ourselves, let us use a specific choice for the bounding function different than we used to prove Markov’s inequality. We readily verify that $\phi(x) = x^2/t^2$ satisfies the bound (13), and thus by (11),

$$\mathbb{P}(X \ge t) \le \frac{\mathbb{E}\left[X^2\right]}{t^2}. \tag{14}$$

This inequality holds for any nonnegative random variable. In particular, now consider a random variable $X$ which we do not assume to be nonnegative. Then $X$’s deviation from its expectation, $|X - \mathbb{E}X|$, is a nonnegative random variable. Thus applying (14) gives

$$\mathbb{P}\left( |X - \mathbb{E}X| \ge t \right) \le \frac{\mathbb{E}\left[ (X - \mathbb{E}X)^2 \right]}{t^2} = \frac{\operatorname{Var}(X)}{t^2}.$$

We have derived Chebyshev’s inequality! Alternatively, one can derive Chebyshev’s inequality by noting that $|X - \mathbb{E}X| \ge t$ if, and only if, $(X - \mathbb{E}X)^2 \ge t^2$. Therefore, by Markov’s inequality,

$$\mathbb{P}\left( |X - \mathbb{E}X| \ge t \right) = \mathbb{P}\left( (X - \mathbb{E}X)^2 \ge t^2 \right) \le \frac{\operatorname{Var}(X)}{t^2}.$$

The Laplace Transform Method

We shall now realize the plan outlined earlier where we shall choose an optimal bounding function from the family of exponential functions $\phi(x) = e^{\lambda(x-t)}$, where $\lambda > 0$ is a parameter which we shall optimize over. This method shall allow us to derive exponential concentration inequalities like Hoeffding’s and Bernstein’s. Note that the exponential function bounds the indicator function for all real numbers $x$, so we shall no longer require the random variable $X$ to be nonnegative. Therefore, by (11),

$$\mathbb{P}(X \ge t) \le \mathbb{E}\left[ e^{\lambda(X - t)} \right] = e^{-\lambda t}\, \mathbb{E}\left[ e^{\lambda X} \right]. \tag{15}$$

The functions

$$m_X(\lambda) = \mathbb{E}\left[ e^{\lambda X} \right], \quad \xi_X(\lambda) = \log m_X(\lambda)$$

are known as the moment generating function and cumulant generating function of the random variable $X$.10These functions are so-named because they are the (exponential) generating functions of the (polynomial) moments $\mathbb{E}[X^k]$ and the cumulants of $X$. With these notations, (15) can be written

$$\mathbb{P}(X \ge t) \le e^{-\lambda t + \xi_X(\lambda)}. \tag{16}$$

The moment generating function coincides with the Laplace transform up to the sign of the parameter $\lambda$, so one name for this approach to deriving concentration inequalities is the Laplace transform method. (This method is also known as the Cramér–Chernoff method.)

The cumulant generating function has an important property for deriving concentration inequalities for sums or averages of independent random variables: If $X_1,\ldots,X_n$ are independent random variables, then the cumulant generating function is additive:11For a proof, we compute $m_{X_1+\cdots+X_n}(\lambda) = \mathbb{E}\left[ e^{\lambda X_1} \cdots e^{\lambda X_n} \right] = \mathbb{E}\left[ e^{\lambda X_1} \right] \cdots \mathbb{E}\left[ e^{\lambda X_n} \right]$ by independence. Taking logarithms proves the additivity.

$$\xi_{X_1 + \cdots + X_n}(\lambda) = \xi_{X_1}(\lambda) + \cdots + \xi_{X_n}(\lambda). \tag{17}$$

Proving Hoeffding’s Inequality

For us to use the Laplace transform method, we need to either compute or bound the cumulant generating function. Since we are interested in general concentration inequalities which hold under minimal assumptions such as boundedness, we opt for the latter. Suppose $a \le X \le b$ and consider the cumulant generating function of $X - \mathbb{E}X$. Then one can show the cumulant generating function bound12The bound (18) is somewhat tricky to establish, but we can establish the same result with a larger constant, $(b-a)^2/2$ in place of $(b-a)^2/8$: bound the convex function $e^{\lambda x}$ on $[a,b]$ by the chord through its endpoints, take expectations, and compare Taylor series.

$$\xi_{X - \mathbb{E}X}(\lambda) \le \frac{\lambda^2 (b-a)^2}{8}. \tag{18}$$

Using the additivity of the cumulant generating function (17), we obtain the bound

$$\xi_{\overline{X} - \mu}(\lambda) = \sum_{i=1}^n \xi_{X_i - \mathbb{E}X_i}\left( \frac{\lambda}{n} \right) \le \frac{\lambda^2 (b-a)^2}{8n}.$$

Plugging this into the probability bound (16), we obtain the concentration bound

$$\mathbb{P}\left( \overline{X} - \mu \ge t \right) \le \exp\left( -\lambda t + \frac{\lambda^2 (b-a)^2}{8n} \right). \tag{19}$$

We want to obtain the smallest possible upper bound on this probability, so it behooves us to pick the value of $\lambda$ which makes the right-hand side of this inequality as small as possible. To do this, we differentiate the contents of the exponential and set to zero, obtaining

$$\lambda = \frac{4nt}{(b-a)^2}.$$

Plugging this value for $\lambda$ into the bound (19) gives a bound for $\overline{X}$ being larger than $\mu + t$:

$$\mathbb{P}\left( \overline{X} - \mu \ge t \right) \le \exp\left( -\frac{2nt^2}{(b-a)^2} \right). \tag{20}$$

To get the bound on $\overline{X}$ being smaller than $\mu - t$, we can apply a small trick. If we apply (20) to the summands $-X_1,\ldots,-X_n$ instead of $X_1,\ldots,X_n$, we obtain the bound

$$\mathbb{P}\left( \overline{X} - \mu \le -t \right) \le \exp\left( -\frac{2nt^2}{(b-a)^2} \right). \tag{21}$$

We can now combine the upper tail bound (20) with the lower tail bound (21) to obtain a “symmetric” bound on the probability that $|\overline{X} - \mu| \ge t$. The means of doing this often goes by the fancy name union bound, but the idea is very simple:

$$\mathbb{P}\left( |\overline{X} - \mu| \ge t \right) \le \mathbb{P}\left( \overline{X} - \mu \ge t \right) + \mathbb{P}\left( \overline{X} - \mu \le -t \right).$$

Thus, applying this union bound idea with the upper and lower tail bounds (20) and (21), we obtain Hoeffding’s inequality, exactly as it appeared above as (8):

$$\mathbb{P}\left( |\overline{X} - \mu| \ge t \right) \le 2\exp\left( -\frac{2nt^2}{(b-a)^2} \right).$$

Voilà! Hoeffding’s inequality has been proven! Bernstein’s inequality is proven essentially the same way except that, instead of (18), we have the cumulant generating function bound

$$\xi_X(\lambda) \le \frac{\lambda^2 \sigma^2/2}{1 - B\lambda/3} \quad \text{for } 0 \le \lambda < \frac{3}{B}$$

for a random variable $X$ with mean zero and satisfying the bound $|X| \le B$.
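Having proven Hoeffding’s inequality, we can also check it empirically. The sketch below estimates the tail probability for averages of Uniform[0, 1] samples, so the summands lie in $[a,b] = [0,1]$ with mean $1/2$; the sample sizes and threshold are my own illustrative choices:

```python
import math
import random

rng = random.Random(0)
n, trials, t = 50, 20_000, 0.15
exceed = 0
for _ in range(trials):
    avg = sum(rng.random() for _ in range(n)) / n
    exceed += abs(avg - 0.5) >= t
empirical = exceed / trials
hoeffding_bound = 2 * math.exp(-2 * n * t**2)  # right-hand side of (8)
print(empirical, hoeffding_bound)
```

The empirical tail probability sits comfortably below the Hoeffding bound, consistent with the fact that Hoeffding uses the worst-case variance proxy $(b-a)^2/4$ rather than the true variance $1/12$.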

Upshot: Randomness can be a very effective tool for solving computational problems, even those with seemingly no connection to probability, like triangle counting. Concentration inequalities are a powerful tool for assessing how many samples are needed for an algorithm based on random sampling to work. Some of the most useful concentration inequalities are exponential concentration inequalities like Hoeffding’s and Bernstein’s, which show that an average of bounded random quantities is close to its mean except with exponentially small probability.

Big Ideas in Applied Math: Low-rank Matrices

Let’s start our discussion of low-rank matrices with an application. Suppose that there are 1000 weather stations spread across the world, and we record the temperature during each of the 365 days in a year.1I borrow the idea for the weather example from Candes and Plan. If we were to store each of the temperature measurements individually, we would need to store 365,000 numbers. However, we have reasons to believe that significant compression is possible. Temperatures are correlated across space and time: If it’s hot in Arizona today, it’s likely it was warm in Utah yesterday.

If we are particularly bold, we might conjecture that the weather approximately experiences a sinusoidal variation over the course of the year:

(1)

For a station, the first parameter denotes the average temperature of the station and the second denotes the maximum deviation above or below this average, signed so that it is warmer than average in the Northern hemisphere during June–August and colder than average in the Southern hemisphere during these months. The phase shift is chosen so the hottest (or coldest) day in the year occurs at the appropriate time. This model is clearly grossly inexact: The weather does not satisfy a simple sinusoidal model. However, we might plausibly expect it to be fairly informative. Further, we have massively compressed our data, only needing to store a few numbers per station rather than our full data set of 365,000 temperature values.

Let us abstract this approximation procedure in a linear algebraic way. Let’s collect our weather data into a matrix with 1000 rows, one for each station, and 365 columns, one for each day of the year. The entry corresponding to station and day is the temperature at station on day . The approximation Eq. (1) corresponds to the matrix approximation

(2)

Let us call the matrix on the right-hand side of Eq. (2) for ease of discussion. When presented in this linear algebraic form, it’s less obvious in what way is simpler than , but we know from Eq. (1) and our previous discussion that is much more efficient to store than . This leads us naturally to the following question: Linear algebraically, in what way is simpler than ?

The answer is that the approximating matrix has low rank, whereas the matrix of raw data almost certainly possesses the maximum possible rank of 365. This example is suggestive that low-rank approximation, where we approximate a general matrix by one of much lower rank, could be a powerful tool. But there are many questions about how to use this tool and how widely applicable it is. How can we compress a low-rank matrix? Can we use this compressed matrix in computations? How good of a low-rank approximation can we find? What even is the rank of a matrix?

What is Rank?

Let’s do a quick review of the foundations of linear algebra. At the core of linear algebra is the notion of a linear combination. A linear combination of vectors is a weighted sum of the vectors, where the weights are scalars2In our case, matrices will be comprised of real numbers, making scalars real numbers as well.. A collection of vectors is linearly independent if there is no linear combination of them which produces the zero vector, except for the trivial linear combination in which every weight is zero. If the vectors are not linearly independent, then they’re linearly dependent.

The column rank of a matrix is the size of the largest possible subset of ‘s columns which are linearly independent. So if the column rank of is , then there is some sub-collection of columns of which are linearly independent. There may be some different sub-collections of columns from that are linearly dependent, but every collection of columns is guaranteed to be linearly dependent. Similarly, the row rank is defined to be the maximum size of any linearly independent collection of rows taken from . A remarkable and surprising fact is that the column rank and row rank are equal. Because of this, we refer to the column rank and row rank simply as the rank; we denote the rank of a matrix by .

Linear algebra is famous for its multiple equivalent ways of phrasing the same underlying concept, so let’s mention one more way of thinking about the rank. Define the column space of a matrix to consist of the set of all linear combinations of its columns. A basis for the column space is a linearly independent collection of elements of the column space of the largest possible size. Every element of the column space can be written uniquely as a linear combination of the elements in a basis. The size of a basis for the column space is called the dimension of the column space. With these last definitions in place, we note that the rank of a matrix is also equal to the dimension of its column space. Likewise, if we define the row space of the matrix to consist of all linear combinations of its rows, then the rank is equal to the dimension of the row space.

The upshot is that if a matrix has a small rank, its many columns (or rows) can be assembled as linear combinations from a much smaller collection of columns (or rows). It is this fact that allows a low-rank matrix to be compressed for algorithmically useful ends.
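A quick numerical illustration of this: a matrix assembled as linear combinations of a small number of columns has small rank, no matter how large its dimensions. The sizes below echo the weather example, and the rank of 3 is an arbitrary illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(0)
# 365 columns, each a linear combination of just 3 "basis" columns
basis = rng.standard_normal((1000, 3))
coefficients = rng.standard_normal((3, 365))
A = basis @ coefficients
print(A.shape, np.linalg.matrix_rank(A))
```

Despite having 365,000 entries, the matrix carries only rank-3 worth of linear-algebraic information.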

Rank Factorizations

Suppose we have an matrix which is of rank much smaller than both and . As we saw in the introduction, we expect that such a matrix can be compressed to be stored with many fewer than entries. How can this be done?

Let’s work backwards and start with the answer to this question and then see why it works. Here’s a fact: a matrix of rank can be factored as , where is an matrix and is an matrix. In other words, can be factored as a “thin” matrix with columns times a “fat” matrix with rows. We use the symbols and for these factors to stand for “left” and “right”; we emphasize that and are general and matrices, not necessarily possessing any additional structure.3Readers familiar with numerical linear algebra may instinctively want to assume that and are lower and upper triangular; we do not make this assumption. The fact that we write the second term in this factorization as a transposed matrix “” is unimportant: We adopt a convention where we write a fat matrix as the transpose of a thin matrix. This notational choice is convenient, as it allows us to easily distinguish between thin and fat matrices in formulas; this choice of notation is far from universal. We call a factorization such as a rank factorization.4Other terms, such as full rank factorization or rank-revealing factorization, have been used to describe the same concept. A warning is that the term “rank-revealing factorization” can also refer to a factorization which encodes a good low-rank approximation to rather than a genuine factorization of .

Rank factorizations are useful as we can compactly store by storing its factors and . This reduces the storage requirements of to numbers down from numbers. For example, if we store a rank factorization of the low-rank approximation from our weather example, we need only store 2,730 numbers rather than 365,000. In addition to compressing , we shall soon see that one can rapidly perform many calculations from the rank factorization without ever forming itself. For these reasons, whenever performing computations with a low-rank matrix, your first step should almost always be to express it using a rank factorization. From there, most computations can be done faster and using less storage.

Having hopefully convinced ourselves of the usefulness of rank factorizations, let us now convince ourselves that every rank- matrix does indeed possess a rank factorization where and have columns. As we recalled in the previous section, since has rank , there is a basis of ‘s column space consisting of vectors . Collect these vectors as columns of an matrix . But since the columns of comprise a basis of the column space of , every column of can be written as a linear combination of the columns of . For example, the th column of can be written as a linear combination , where we suggestively use the labels for the scalar multiples in our linear combination. Collecting these coefficients into a matrix with th entry , we have constructed a factorization . (Check this!)

This construction gives us a look at what a rank factorization is doing. The columns of comprise a basis for the column space of and the rows of comprise a basis for the row space of . Once we fix a “column basis” , the “row basis” is comprised of linear combination coefficients telling us how to assemble the columns of as linear combinations of the columns in .5It is worth noting here that a slightly more expansive definition of rank factorization has also proved useful. In the more general definition, a rank factorization is a factorization of the form where is , is , and is . With this definition, we can pick an arbitrary column basis and row basis . Then, there exists a unique nonsingular “middle” matrix such that . Note that this means there exist many different rank factorizations of a matrix since one may pick different column bases for .6This non-uniqueness means one should take care to compute a rank factorization which is as “nice” as possible (say, by making sure and are as well-conditioned as is possible). If one modifies a rank factorization during the course of an algorithm, one should take care to make sure that the rank factorization remains nice. (As an example of what can go wrong, “unbalancing” between the left and right factors in a rank factorization can lead to convergence problems for optimization problems.)

Now that we’ve convinced ourselves that every matrix indeed has a rank factorization, how do we compute them in practice? In fact, pretty much any matrix factorization will work. If you can think of a matrix factorization you’re familiar with (e.g., LU, QR, eigenvalue decomposition, singular value decomposition,…), you can almost certainly use it to compute a rank factorization. In addition, many dedicated methods have been developed for the specific purpose of computing rank factorizations which can have appealing properties which make them great for certain applications.

Let’s focus on one particular example of how a classic matrix factorization, the singular value decomposition, can be used to get a rank factorization. Recall that the singular value decomposition (SVD) of a (real) matrix is a factorization where and are an and (real) orthogonal matrices and is a (possibly rectangular) diagonal matrix with nonnegative, descending diagonal entries . These diagonal entries are referred to as the singular values of the matrix . From the definition of rank, we can see that the rank of a matrix is equal to its number of nonzero singular values. With this observation in hand, a rank factorization of can be obtained by letting be the first columns of and being the first rows of (note that the remaining rows of are zero).

Computing with Rank Factorizations

Now that we have a rank factorization in hand, what is it good for? A lot, in fact. We’ve already seen that one can store a low-rank matrix expressed as a rank factorization using only numbers, down from numbers by storing all of its entries. Similarly, if we want to compute the matrix-vector product for a vector of length , we can compute this product as . This reduces the operation count down from operations to operations using the rank factorization. As a general rule of thumb, when we have something expressed as a rank factorization, we can usually expect to reduce our operation count (and storage costs) from something proportional to (or worse) down to something proportional to .
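This rule of thumb is easy to see in code. Below is a minimal sketch of the fast matrix–vector product, with sizes echoing the weather example and an arbitrary illustrative rank of 5; the low-rank matrix is formed explicitly only to check correctness:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 1000, 365, 5
L = rng.standard_normal((m, r))
R = rng.standard_normal((n, r))
x = rng.standard_normal(n)

y = L @ (R.T @ x)  # O(r(m + n)) operations: never form the full matrix

A = L @ R.T        # formed here only to verify the answer
assert np.allclose(y, A @ x)
print("storage:", r * (m + n), "numbers instead of", m * n)
```

Note the parentheses in `L @ (R.T @ x)`: evaluating `(L @ R.T) @ x` instead would form the full product and forfeit the entire speedup.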

Let’s try something more complicated. Say we want to compute an SVD of . In the previous section, we computed a rank factorization of using an SVD, but suppose now we computed in some other way. Our goal is to “upgrade” the general rank factorization into an SVD of . Computing the SVD of a general matrix requires operations (expressed in big O notation). Can we do better? Unfortunately, there’s a big roadblock for us: We need operations even to write down the matrices and , which already prevents us from achieving an operation count proportional to like we’re hoping for. Fortunately, in most applications, only the first columns of and are important. Thus, we can change our goal to compute a so-called economy SVD of , which is a factorization , where and are and matrices with orthonormal columns and is a diagonal matrix listing the nonzero singular values of in decreasing order.

Let’s see how to upgrade a rank factorization into an economy SVD . Let’s break our procedure into steps:

1. Compute (economy7The economy QR factorization of a thin matrix is a factorization where the first factor has orthonormal columns and the second factor is a small upper triangular matrix. The economy QR factorization is sometimes also called a thin or compact QR factorization, and can be computed efficiently.) QR factorizations of the two factors in the rank factorization. Reader beware: We give the triangular factors in these QR factorizations new names, as we have already used the usual letter to denote the second factor in our rank factorization.
2. Compute the small matrix .
3. Compute an SVD of .
4. Set and .

By following the procedure line-by-line, one can check that the two outer factors have orthonormal columns and that the factors multiply back to the original matrix, so this procedure indeed computes an economy SVD. Let’s see why this approach is also faster by counting operations line-by-line:

1. Economy QR factorizations of the two factors require and operations.
2. The product of two matrices requires operations.
3. The SVD of an matrix requires operations.
4. The products of a and a matrix by matrices requires and operations.

Accounting for all the operations, we see the operation count is , a significant improvement over the operations for a general matrix.8We can ignore the term of order since so is .
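The four-step procedure above can be sketched in NumPy as follows. The function and variable names are my own; `np.linalg.qr` computes the economy QR factorization by default, and the test matrices are arbitrary illustrative choices:

```python
import numpy as np

def svd_from_rank_factorization(L, R):
    """Upgrade a rank factorization A = L @ R.T to an economy SVD A = U S V^T,
    following the four steps in the text: QR both factors, then SVD the core."""
    Q1, T1 = np.linalg.qr(L)          # step 1: economy QR of the left factor
    Q2, T2 = np.linalg.qr(R)          # step 1: economy QR of the right factor
    core = T1 @ T2.T                  # step 2: small r-by-r matrix
    U_small, sigma, Vt_small = np.linalg.svd(core)  # step 3: SVD of the core
    return Q1 @ U_small, sigma, Q2 @ Vt_small.T     # step 4: assemble U and V

rng = np.random.default_rng(0)
L = rng.standard_normal((200, 4))
R = rng.standard_normal((80, 4))
U, sigma, V = svd_from_rank_factorization(L, R)
print(U.shape, sigma.shape, V.shape)
```

Every operation here touches only thin or $r \times r$ matrices, which is the source of the claimed speedup over a dense SVD.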

As the previous examples show, many (if not most) things we want to compute from a low-rank matrix can be dramatically more efficiently computed using its rank factorization. The strategy is simple in principle, but can be subtle to execute: Whatever you do, avoid explicitly computing the product at all costs. Instead, compute with the matrices and directly, only operating on , , and matrices.

Another important type of computation one can perform with low-rank matrices is the low-rank update, where we have already solved a problem for a matrix and we want to re-solve it efficiently after perturbing that matrix by a low-rank matrix. If the perturbation is expressed in a rank factorization, very often we can do this efficiently as well, as we discuss in the following bonus section. As this is somewhat more niche, the uninterested reader should feel free to skip this and continue to the next section.

Suppose we’ve solved a system of linear equations by computing an LU factorization of the matrix $A$. We now wish to solve the system of linear equations $(A + LR^\top)x = b$, where $LR^\top$ is a low-rank matrix expressed as a rank factorization. Our goal is to do this without recomputing a new factorization from scratch.

The first solution uses the Sherman–Morrison–Woodbury formula, which has a nice proof via the Schur complement and block Gaussian elimination which I described here. In our case, the formula yields

$$\left( A + LR^\top \right)^{-1} = A^{-1} - A^{-1} L \left( I_r + R^\top A^{-1} L \right)^{-1} R^\top A^{-1}, \tag{3}$$

where $I_r$ denotes the $r\times r$ identity matrix. This formula can be easily verified by multiplying with $A + LR^\top$ and confirming one indeed recovers the identity matrix. This formula suggests the following approach to solving $(A + LR^\top)x = b$. First, use our already-computed LU factorization for $A$ to compute $A^{-1}L$. (This involves solving $r$ linear systems, one to compute each column of $A^{-1}L$ from each column of $L$.) We then compute an LU factorization of the much smaller $r\times r$ matrix $I_r + R^\top A^{-1} L$. Finally, we use our factorization of $A$ once more to compute $A^{-1}b$, from which our solution is given by

$$x = A^{-1}b - \left( A^{-1}L \right) \left( I_r + R^\top A^{-1} L \right)^{-1} R^\top \left( A^{-1} b \right). \tag{4}$$

The net result is that we solved our rank-updated linear system using $r+1$ solves with the original matrix and no need to recompute any factorizations of large matrices. We’ve reduced the solution of the updated system to an operation count which is dramatically better than the cost of recomputing the LU factorization from scratch.
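Here is a minimal sketch of this Woodbury-based update in NumPy. The function name is my own, and `np.linalg.solve` stands in for reusing a stored LU factorization of the original matrix (an assumption for brevity; in practice one would save and reuse the factorization):

```python
import numpy as np

def solve_low_rank_update(solve_A, L, R, b):
    """Solve (A + L R^T) x = b via Sherman-Morrison-Woodbury, reusing a
    solver for the original matrix A.  Only r-by-r dense work is new."""
    r = L.shape[1]
    AinvL = solve_A(L)                      # r solves with the original matrix
    Ainvb = solve_A(b)                      # one more solve with A
    core = np.eye(r) + R.T @ AinvL          # small r-by-r matrix from (3)
    return Ainvb - AinvL @ np.linalg.solve(core, R.T @ Ainvb)

rng = np.random.default_rng(0)
n, r = 200, 3
A = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned test matrix
L = rng.standard_normal((n, r))
R = rng.standard_normal((n, r))
b = rng.standard_normal(n)
x = solve_low_rank_update(lambda y: np.linalg.solve(A, y), L, R, b)
print(np.linalg.norm((A + L @ R.T) @ x - b))
```

The residual is at the level of rounding error, confirming formula (4) without ever factoring the updated matrix.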

This simple example demonstrates a broader pattern: if a matrix problem took a certain amount of time to solve originally, one can usually solve the problem after a rank- update in an additional time of only something like operations.9Sometimes, this goal can be overly optimistic. For symmetric eigenvalue problems, for instance, the operation count may be a bit larger by a (poly)logarithmic factor. An operation count like this still represents a dramatic improvement over the operation count of recomputing from scratch. For instance, not only can we solve rank-updated linear systems efficiently, but we can actually update the LU factorization itself efficiently. Similar updates exist for Cholesky, QR, symmetric eigenvalue, and singular value decompositions.

An important caveat is that, as always with linear algebraic computations, it’s important to read the fine print. There are many algorithms for computing low-rank updates to different matrix factorizations with dramatically different accuracy properties. Just because in principle rank-updated versions of these factorizations can be computed doesn’t mean it’s always advisable. With this qualification stated, these ways of updating matrix computations with low-rank updates can be a powerful tool in practice and reinforce the computational benefits of low-rank matrices expressed via rank factorizations.

Low-rank Approximation

As we’ve seen, computing with low-rank matrices expressed as rank factorizations can yield significant computational savings. Unfortunately, many matrices in applications are not low-rank. In fact, even if a matrix in an application is low-rank, the small rounding errors we incur in storing it on a computer may destroy the matrix’s low rank, increasing its rank to the maximum possible value. The solution in this case is straightforward: approximate our high-rank matrix with a low-rank one, which we express in algorithmically useful form as a rank factorization.

Here’s one simple way of constructing low-rank approximations. Start with a matrix $A$ and compute its singular value decomposition $A = U\Sigma V^\top$. Recall from two sections previous that the rank of the matrix $A$ is equal to its number of nonzero singular values. But what if $A$’s singular values aren’t exactly zero, but they’re very small? It seems reasonable to expect that $A$ is nearly low-rank in this case. Indeed, this intuition is true. To approximate $A$ by a low-rank matrix, we can truncate $A$’s singular value decomposition by setting its small singular values to zero. If we zero out all but the $k$ largest singular values of $A$, this procedure results in a rank-$k$ matrix $A_k$ which approximates $A$. If the singular values that we zeroed out were tiny, then $A_k$ will be very close to $A$ and the low-rank approximation is accurate. The matrix $A_k$ is called a $k$-truncated singular value decomposition of $A$, and it is easy to represent it using a rank factorization once we have already computed an SVD of $A$.
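In NumPy, the $k$-truncated SVD and its rank factorization take only a few lines. Here is a sketch using a synthetic nearly rank-$k$ test matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 100, 80, 5

# A nearly rank-k matrix: an exact rank-k product plus tiny noise,
# which makes the stored matrix technically full-rank.
A = rng.standard_normal((m, k)) @ rng.standard_normal((k, n))
A = A + 1e-8 * rng.standard_normal((m, n))

U, s, Vt = np.linalg.svd(A, full_matrices=False)

# k-truncated SVD, stored directly as a rank factorization A_k = L @ R.
L = U[:, :k] * s[:k]   # m x k
R = Vt[:k, :]          # k x n
A_k = L @ R

# The error is on the order of the tiny singular values we zeroed out.
print(np.linalg.norm(A - A_k, 2))
```

Note that we never need to form `A_k` as a dense matrix; storing `L` and `R` suffices for all the fast algorithms discussed earlier.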

It is important to remember that low-rank approximations are, just as the name says, approximations. Not every matrix is well-approximated by one of small rank. A matrix may be excellently approximated by a rank-100 matrix and horribly approximated by a rank-90 matrix. If an algorithm uses a low-rank approximation as a building block, then the approximation error (the difference between $A$ and its low-rank approximation) and its propagation through further steps of the algorithm need to be analyzed and controlled along with other sources of error in the procedure.

Despite this caveat, low-rank approximations can be startlingly effective. Many matrices occurring in practice can be approximated to negligible error by a matrix with very modestly-sized rank. We shall return to this surprising ubiquity of approximately low-rank matrices at the end of the article.

We’ve seen one method for computing low-rank approximations, the truncated singular value decomposition. As we shall see in the next section, the truncated singular value decomposition produces excellent low-rank approximations, the best possible in a certain sense, in fact. As we mentioned above, almost every matrix factorization can be used to compute rank factorizations. Can these matrix factorizations also compute high quality low-rank approximations?

Let’s consider a specific example to see the underlying ideas. Say we want to compute a low-rank approximation to a matrix $A$ by a QR factorization. To do this, we compute a QR factorization $A = QR$ and then throw away all but the first $k$ columns of $Q$ and the first $k$ rows of $R$. This will be a good approximation if the rows we discard from $R$ are “small” compared to the rows of $R$ we keep. Unfortunately, this is not always the case. As a worst-case example, if the first $k$ columns of $A$ are zero, then the first $k$ columns of $R$ will definitely be zero, the first $k$ columns of $Q$ carry no information about $A$, and the low-rank approximation computed this way is worthless.

We need to modify something to give QR factorization a fighting chance at computing good low-rank approximations. The simplest way to do this is to use column pivoting, where we shuffle the columns of $A$ to bring the columns of largest size “to the front of the line” as we compute the QR factorization. QR factorization with column pivoting produces excellent low-rank approximations in a large number of cases, but it can still give poor-quality approximations for some special examples. For this reason, numerical analysts have developed so-called strong rank-revealing QR factorizations, such as the one developed by Gu and Eisenstat, which are guaranteed to compute quite good low-rank approximations for every matrix $A$. Similarly, there exist strong rank-revealing LU factorizations which can compute good low-rank approximations using LU factorization.
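Here is a sketch of low-rank approximation via column-pivoted QR, using SciPy's `qr` with `pivoting=True` on a synthetic exactly rank-$k$ matrix:

```python
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(1)
m, n, k = 60, 50, 5
A = rng.standard_normal((m, k)) @ rng.standard_normal((k, n))  # exactly rank k

# Column-pivoted QR: A[:, perm] = Q @ R, with pivoting chosen so the
# leading columns carry the "largest" directions.
Q, R, perm = qr(A, mode="economic", pivoting=True)

# Keep the first k columns of Q and first k rows of R, then undo the pivoting.
A_approx = np.empty_like(A)
A_approx[:, perm] = Q[:, :k] @ R[:k, :]

print(np.linalg.norm(A - A_approx))  # tiny: the discarded rows of R are zero here
```

For this exactly rank-$k$ input the truncation is essentially exact; for general matrices, pivoted QR usually, but not always, gives approximations competitive with the truncated SVD.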

The upshot is that most matrix factorizations you know and love can be used to compute good-quality low-rank approximations, possibly requiring extra tricks like row or column pivoting. But this simple summary, and the previous discussion, leaves open important questions: what do we mean by good-quality low-rank approximations? How good can a low-rank approximation be?

Best Low-rank Approximation

As we saw in the last section, one way to approximate a matrix by a lower rank matrix is by a truncated singular value decomposition. In fact, in some sense, this is the best way of approximating a matrix by one of lower rank. This fact is encapsulated in a theorem commonly referred to as the Eckart–Young theorem, though the essence of the result is originally due to Schmidt and the modern version to Mirsky.10A nice history of the Eckart–Young theorem is provided in the book Matrix Perturbation Theory by Stewart and Sun.

But what do we mean by best approximation? One ingredient we need is a way of measuring how big the discrepancy between two matrices is. Let’s define a measure of the size of a matrix $A$, called a norm, which we denote $\|A\|$. If $B$ is a low-rank approximation to $A$, then $B$ is a good approximation if the norm $\|A - B\|$ is small. There might be many different ways of measuring the size of the error, but we have to insist on a couple of properties for our norm to really define a sensible measure of size. For instance, if the norm of a matrix $A$ is $\|A\|$, then the norm of $2A$ should be $2\|A\|$. A list of the properties we require a norm to have is given on the Wikipedia page for norms. We shall also insist on one more property for our norm: the norm should be unitarily invariant.11Note that every unitarily invariant norm is a special type of vector norm (called a symmetric gauge function) evaluated on the singular values of the matrix. What this means is that the norm of a matrix remains the same if it is multiplied on the left or right by an orthogonal matrix. This property is reasonable since multiplication by an orthogonal matrix geometrically represents a rotation or reflection,12This is not quite true in dimensions higher than 2, but it gives the right intuition that orthogonal matrices preserve distances. which preserves distances between points, so it makes sense that we should demand that the size of a matrix as measured by our norm does not change under such multiplications. Two important and popular matrix norms satisfy the unitarily invariant property: the Frobenius norm $\|A\|_F = \big(\sum_{i,j} |a_{ij}|^2\big)^{1/2}$ and the spectral (or operator 2-) norm $\|A\|_2$, which measures the largest singular value.13Both the Frobenius and spectral norms are examples of an important subclass of unitarily invariant norms called Schatten norms. Another example of a Schatten norm, important in matrix completion, is the nuclear norm (the sum of the singular values).

With this preliminary out of the way, the Eckart–Young theorem states that the singular value decomposition of $A$ truncated to rank $k$ is the closest to $A$ of all rank-$k$ matrices when distances are measured using any unitarily invariant norm $\|\cdot\|$. If we let $A_k$ denote the $k$-truncated singular value decomposition of $A$, then the Eckart–Young theorem states that

\[ \|A - A_k\| \le \|A - B\| \quad \text{for every matrix } B \text{ of rank at most } k. \tag{5} \]

Less precisely, the $k$-truncated singular value decomposition is the best rank-$k$ approximation to a matrix.

Let’s unpack the Eckart–Young theorem using the spectral and Frobenius norms. In this context, a brief calculation and the Eckart–Young theorem prove that for any rank-$k$ matrix $B$, we have

\[ \|A - B\|_2 \ge \|A - A_k\|_2 = \sigma_{k+1}, \qquad \|A - B\|_F \ge \|A - A_k\|_F = \sqrt{\sigma_{k+1}^2 + \sigma_{k+2}^2 + \cdots}, \tag{6} \]

where $\sigma_1 \ge \sigma_2 \ge \cdots$ are the singular values of $A$. This bound is quite intuitive. When we measure the error in the spectral norm, the error in low-rank approximation is “small” when each singular value we zero out is “small”. When we measure the error in the Frobenius norm, the error in low-rank approximation is “small” when all of the singular values we zero out are “small” in aggregate when squared and added together.
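Both equalities in (6) are easy to verify numerically for the truncated SVD. A sketch with a random test matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 30))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 10
A_k = (U[:, :k] * s[:k]) @ Vt[:k, :]   # k-truncated SVD

spec_err = np.linalg.norm(A - A_k, 2)
frob_err = np.linalg.norm(A - A_k, "fro")

# Spectral error equals the first discarded singular value; Frobenius error
# is the root-sum-of-squares of all discarded singular values.
print(np.isclose(spec_err, s[k]))
print(np.isclose(frob_err, np.sqrt(np.sum(s[k:] ** 2))))
```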

The Eckart–Young theorem shows that possessing a good low-rank approximation is equivalent to the singular values decaying rapidly.14At least when error is measured in unitarily invariant norms. A surprising result shows that even the identity matrix, whose singular values are all equal to one, has good low-rank approximations in the maximum entrywise absolute value norm; see, e.g., Theorem 1.0 in this article. If a matrix does not have nice singular value decay, no good low-rank approximation exists, computed by the $k$-truncated SVD or otherwise.

Why Are So Many Matrices (Approximately) Low-rank?

As we’ve seen, we can perform computations with low-rank matrices represented using rank factorizations much faster than with general matrices. But all of this would be a moot point if low-rank matrices rarely occurred in practice. In fact, precisely the opposite is true: approximately low-rank matrices occur all the time in practice.

Sometimes, exact low-rank matrices appear for algebraic reasons. For instance, when we perform one step of Gaussian elimination to compute an LU factorization, the lower-right portion of the eliminated matrix, the so-called Schur complement, is a rank-one update to the original matrix. In such cases, a rank-$k$ matrix might appear in a computation when one performs $k$ steps of some algebraic process: the appearance of low-rank matrices in such cases is unsurprising.

However, matrices appearing in applications are often (approximately) low-rank for analytic reasons instead. Consider the weather example from the start again. One might reasonably model the temperature on Earth as a smooth function $T(x, t)$ of position $x$ and time $t$. If we then let $x_i$ denote the position on Earth of station $i$ and $t_j$ the time representing the $j$th day of a given year, then the entries of the matrix $A$ are given by $a_{ij} = T(x_i, t_j)$. As discussed in my article on smoothness and degree of approximation, a smooth function of one variable can be excellently approximated by, say, a polynomial of low degree. Analogously, a smooth function depending on two arguments, such as our function $T(x, t)$, can be excellently approximated by a separable expansion of rank $k$:

\[ T(x, t) \approx g_1(x)\,h_1(t) + g_2(x)\,h_2(t) + \cdots + g_k(x)\,h_k(t). \tag{7} \]

Similar to functions of a single variable, the degree to which a function $T$ can be approximated by a separable function of small rank depends on the degree of smoothness of $T$. Assuming the function $T$ is quite smooth, it can be approximated by a separable expansion of small rank $k$. This leads immediately to a low-rank approximation to the matrix $A$ given by the rank factorization

\[ A \approx GH^\top, \qquad G_{i\ell} = g_\ell(x_i), \quad H_{j\ell} = h_\ell(t_j). \tag{8} \]

Thus, in the context of our weather example, we see that the data matrix can be expected to be low-rank under the reasonable-sounding assumption that the temperature depends smoothly on space and time.
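We can watch this happen numerically. Below is a sketch sampling an invented but smooth kernel, $T(x, t) = e^{-(x-t)^2}$, on a grid; the singular values of the sampled matrix decay extremely rapidly, so excellent low-rank approximations exist:

```python
import numpy as np

# Sample a smooth function of two variables on a 200 x 200 grid.
x = np.linspace(0.0, 1.0, 200)
t = np.linspace(0.0, 1.0, 200)
A = np.exp(-(x[:, None] - t[None, :]) ** 2)

s = np.linalg.svd(A, compute_uv=False)
print(s[:25] / s[0])  # relative singular values fall off precipitously
```

Despite being a full-rank $200\times 200$ matrix in exact arithmetic, this matrix is numerically indistinguishable from a matrix of rank a couple dozen.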

What does this mean in general? Let’s speak informally. Suppose that the $(i,j)$th entries of a matrix $A$ are samples $a_{ij} = f(x_i, y_j)$ from a smooth function $f(x, y)$. Then we can expect that $A$ will be approximately low-rank. From a computational point of view, we don’t need to know a separable expansion for the function $f$ or even the form of the function itself: if the smooth function exists and $A$ is sampled from it, then $A$ is approximately low-rank and we can find a low-rank approximation for $A$ using the truncated singular value decomposition.15Note here an important subtlety. A more technically precise version of what we’ve stated here is: if $f$, depending on inputs $x$ and $y$, is sufficiently smooth on the product of compact regions, then an $m\times n$ matrix $A$ with $a_{ij} = f(x_i, y_j)$ will be low-rank in the sense that it can be approximated to accuracy $\epsilon$ by a rank-$k$ matrix where $k$ grows slowly as $m$ and $n$ increase and $\epsilon$ decreases. Note that, phrased this way, the low-rank property of $A$ is asymptotic in the sizes $m$ and $n$ and the accuracy $\epsilon$. If $f$ is not smooth on the entirety of the domain or the sizes of the domains grow with $m$ and $n$, these asymptotic results may no longer hold. And if $m$ and $n$ are small enough, $A$ may not be well approximated by a matrix of small rank. Only when there are enough rows and columns will meaningful savings from low-rank approximation be possible.

This “smooth function” explanation for the prevalence of low-rank matrices is the reason for the appearance of low-rank matrices in fast multipole method-type fast algorithms in computational physics and has been proposed16This article considers piecewise analytic functions rather than smooth functions; the principle is more-or-less the same. as a general explanation for the prevalence of low-rank matrices in data science.

(Highly structured matrices like Hankel, Toeplitz, and Cauchy matrices,17Computations with these matrices can often also be accelerated with approaches other than low-rank structure; see my post on the fast Fourier transform for a discussion of fast Toeplitz matrix-vector products. which appear in control theory applications, have a different explanation for their low-rank structure involving a certain Sylvester equation; see this lecture for a great explanation.)

Upshot: A matrix is low-rank if it has many fewer linearly independent columns than total columns. Such matrices can be efficiently represented using rank factorizations, which can be used to perform various computations rapidly. Many matrices appearing in applications which are not genuinely low-rank can be well-approximated by low-rank matrices; the best possible such approximation is given by the truncated singular value decomposition. The prevalence of low-rank matrices in diverse application areas can partially be explained by noting that matrices sampled from smooth functions are approximately low-rank.

Big Ideas in Applied Math: The Fast Fourier Transform

The famous law of the instrument states that “when all you have is a hammer, every problem looks like a nail.” In general, this tendency is undesirable: most problems in life are not nails and could better be addressed by a more appropriate tool. However, one can also view the law of the instrument in a more positive framing: when presented with a powerful new tool, it is worth checking how many problems it can solve. The fast Fourier transform (FFT) is one of the most important hammers in an applied mathematician’s toolkit. And it has made many seemingly unrelated problems look like nails. In this post, I want to consider three questions about the FFT:

1. What is the FFT—what problem is it solving and how does it solve it fast?
2. How can the ideas behind the FFT be used to solve other problems?
3. How can the FFT be used as a building block in solving a seemingly unrelated problem?

The FFT is widely considered one of the most important numerical algorithms, and as such every sub-community of applied mathematics is inclined to see the most interesting applications of the FFT as those in their particular area. I am unapologetically a victim of this tendency myself, and thus will discuss an application of the FFT that I find particularly beautiful and surprising. In particular, this article won’t focus on the manifold applications of the FFT in signal processing, which have been far better covered by authors more familiar with that field.

The Discrete Fourier Transform

At its core, the FFT is a fast algorithm to compute $n$ complex numbers $\hat{x}_0, \ldots, \hat{x}_{n-1}$ given $n$ real or complex numbers $x_0, \ldots, x_{n-1}$ defined by the formula1The factor of $1/\sqrt{n}$ is not universal. It is common to omit the factor in (1) and replace the $1/\sqrt{n}$ in Eq. (2) with a $1/n$. We prefer this convention as it makes the DFT a unitary transformation. When working with Fourier analysis, it is important to choose formulas for the (discrete) Fourier transform and the inverse (discrete) Fourier transform which form a pair in the sense that they are inverses of each other.

\[ \hat{x}_k = \frac{1}{\sqrt{n}} \sum_{j=0}^{n-1} x_j e^{-2\pi i jk/n}, \qquad k = 0, 1, \ldots, n-1. \tag{1} \]

The output $\hat{x} = (\hat{x}_0, \ldots, \hat{x}_{n-1})$ is called the discrete Fourier transform (DFT) of $x$. The FFT is just one possible algorithm to evaluate the DFT.

The DFT has the following interpretation. Suppose that $f$ is a periodic function defined on the integers with period $n$; that is, $f(j + n) = f(j)$ for every integer $j$. Choose $x_0, \ldots, x_{n-1}$ to be the values of $f$ given by $x_j = f(j)$ for $j = 0, 1, \ldots, n-1$. Then, in fact, the DFT gives an expression for $f$ as a so-called trigonometric polynomial2The name “trigonometric polynomial” is motivated by Euler’s formula, which shows that $e^{2\pi i jk/n} = \cos(2\pi jk/n) + i\sin(2\pi jk/n)$, so the right-hand side of Eq. (2) is indeed a “polynomial” in the “variables” $\sin$ and $\cos$:

\[ f(j) = \frac{1}{\sqrt{n}} \sum_{k=0}^{n-1} \hat{x}_k e^{2\pi i jk/n}. \tag{2} \]

This shows that (1) converts function values $x_0, \ldots, x_{n-1}$ of a periodic function $f$ to coefficients $\hat{x}_0, \ldots, \hat{x}_{n-1}$ of a trigonometric polynomial representation of $f$, which can be called the Fourier series of $f$. Eq. (2), referred to as the inverse discrete Fourier transform, inverts this, converting coefficients back to function values.

Fourier series are an immensely powerful tool in applied mathematics. For example, if $f$ represents a sound wave produced by a chord on a piano, its Fourier coefficients represent the intensity of each pitch comprising the chord. An audio engineer could, for example, compute a Fourier series for a piece of music and zero out the small Fourier coefficients, thus reducing the amount of data needed to store the piece. This idea is indeed part of the way audio compression standards like MP3 work. In addition to many more related applications in signal processing, Fourier series are also a natural way to solve differential equations, either by pencil and paper or by computer via so-called Fourier spectral methods. As these applications (and more to follow) show, the DFT is a very useful computation to perform. The FFT allows us to perform this calculation fast.
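Before discussing fast algorithms, here is the DFT computed directly from the defining formula (1) in $O(n^2)$ operations; with the unitary convention used in this post, it matches NumPy's `np.fft.fft` with `norm="ortho"`:

```python
import numpy as np

def dft(x):
    """Unitary DFT straight from the definition: O(n^2) operations."""
    n = len(x)
    j = np.arange(n)
    F = np.exp(-2j * np.pi * np.outer(j, j) / n) / np.sqrt(n)  # DFT matrix
    return F @ x

x = np.random.default_rng(0).standard_normal(64)
assert np.allclose(dft(x), np.fft.fft(x, norm="ortho"))
```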

The Fast Fourier Transform

The first observation to make is that Eq. (1) is a linear transformation: if we think of Eq. (1) as describing a transformation $x \mapsto \hat{x}$, then we have that $\widehat{\alpha x + \beta y} = \alpha \hat{x} + \beta \hat{y}$. Recall the crucial fact from linear algebra that every linear transformation can be represented by a matrix-vector multiplication.3At least in finite dimensions; the story for infinite-dimensional vector spaces is more complicated. In my experience, one of the most effective algorithm design strategies in applied mathematics is, when presented with a linear transformation, to write its matrix down and poke and prod it to see if there are any patterns in the numbers which can be exploited to give a fast algorithm. Let’s try to do this with the DFT.

We have that $\hat{x} = F_n x$ for some matrix $F_n$. (We will omit the subscript $n$ when its value isn’t particularly important to the discussion.) Let us make the somewhat non-standard choice of describing rows and columns of $F_n$ by zero-indexing, so that the first row of $F_n$ is row $0$ and the last is row $n-1$. Then we have that $\hat{x}_k = \sum_{j=0}^{n-1} (F_n)_{kj} x_j$. Comparing with Eq. (1), we see that $(F_n)_{kj} = e^{-2\pi i jk/n}/\sqrt{n}$. Let us define $\omega_n = e^{-2\pi i/n}$. Thus, we can write the matrix $F_n$ out as

\[ F_n = \frac{1}{\sqrt{n}} \begin{bmatrix} 1 & 1 & 1 & \cdots & 1 \\ 1 & \omega_n & \omega_n^2 & \cdots & \omega_n^{n-1} \\ 1 & \omega_n^2 & \omega_n^4 & \cdots & \omega_n^{2(n-1)} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & \omega_n^{n-1} & \omega_n^{2(n-1)} & \cdots & \omega_n^{(n-1)^2} \end{bmatrix}. \tag{3} \]

This is a highly structured matrix. The patterns in this matrix are more easily seen for a particular value of $n$. We shall focus on $n = 8$ in this discussion, but what follows will generalize in a straightforward way to any power of two (and in less straightforward ways to arbitrary $n$—we will return to this point at the end).

Instantiating Eq. (3) with $n = 8$ (and writing $\omega = \omega_8$), we have

\[ F_8 = \frac{1}{\sqrt{8}} \begin{bmatrix} 1&1&1&1&1&1&1&1 \\ 1&\omega&\omega^2&\omega^3&\omega^4&\omega^5&\omega^6&\omega^7 \\ 1&\omega^2&\omega^4&\omega^6&\omega^8&\omega^{10}&\omega^{12}&\omega^{14} \\ 1&\omega^3&\omega^6&\omega^9&\omega^{12}&\omega^{15}&\omega^{18}&\omega^{21} \\ 1&\omega^4&\omega^8&\omega^{12}&\omega^{16}&\omega^{20}&\omega^{24}&\omega^{28} \\ 1&\omega^5&\omega^{10}&\omega^{15}&\omega^{20}&\omega^{25}&\omega^{30}&\omega^{35} \\ 1&\omega^6&\omega^{12}&\omega^{18}&\omega^{24}&\omega^{30}&\omega^{36}&\omega^{42} \\ 1&\omega^7&\omega^{14}&\omega^{21}&\omega^{28}&\omega^{35}&\omega^{42}&\omega^{49} \end{bmatrix}. \tag{4} \]

To fully exploit the patterns in this matrix, we note that $\omega$ represents a clockwise rotation of the complex plane by an eighth of the way around the circle. So, for example, $\omega^{21}$ is twenty-one eighths of a turn, or simply just $2\tfrac{5}{8}$ turns. Thus $\omega^{21} = \omega^5$ and, more generally, $\omega^j = \omega^{j \bmod 8}$. This allows us to simplify as follows:

\[ F_8 = \frac{1}{\sqrt{8}} \begin{bmatrix} 1&1&1&1&1&1&1&1 \\ 1&\omega&\omega^2&\omega^3&\omega^4&\omega^5&\omega^6&\omega^7 \\ 1&\omega^2&\omega^4&\omega^6&1&\omega^2&\omega^4&\omega^6 \\ 1&\omega^3&\omega^6&\omega&\omega^4&\omega^7&\omega^2&\omega^5 \\ 1&\omega^4&1&\omega^4&1&\omega^4&1&\omega^4 \\ 1&\omega^5&\omega^2&\omega^7&\omega^4&\omega&\omega^6&\omega^3 \\ 1&\omega^6&\omega^4&\omega^2&1&\omega^6&\omega^4&\omega^2 \\ 1&\omega^7&\omega^6&\omega^5&\omega^4&\omega^3&\omega^2&\omega \end{bmatrix}. \tag{5} \]

Now notice that, since $\omega$ represents a clockwise rotation of an eighth of the way around the circle, $\omega^2$ represents a quarter turn of the circle; that is, $\omega^2 = \omega_4$. This fact leads to the surprising observation that we can actually find the DFT matrix $F_4$ hidden inside the DFT matrix $F_8$!

To see this, rearrange the columns of $F_8$ to interleave every other column. In matrix language, this is represented by right-multiplying $F_8$ with an appropriate4In fact, this permutation has a special name: the perfect shuffle. permutation matrix $\Pi$:

\[ F_8 \Pi = \frac{1}{\sqrt{8}} \begin{bmatrix} 1&1&1&1&1&1&1&1 \\ 1&\omega^2&\omega^4&\omega^6&\omega&\omega^3&\omega^5&\omega^7 \\ 1&\omega^4&1&\omega^4&\omega^2&\omega^6&\omega^2&\omega^6 \\ 1&\omega^6&\omega^4&\omega^2&\omega^3&\omega&\omega^7&\omega^5 \\ 1&1&1&1&\omega^4&\omega^4&\omega^4&\omega^4 \\ 1&\omega^2&\omega^4&\omega^6&\omega^5&\omega^7&\omega&\omega^3 \\ 1&\omega^4&1&\omega^4&\omega^6&\omega^2&\omega^6&\omega^2 \\ 1&\omega^6&\omega^4&\omega^2&\omega^7&\omega^5&\omega^3&\omega \end{bmatrix}. \tag{6} \]

The top-left sub-block is precisely $F_4$ (up to scaling). In fact, defining the diagonal matrix $T = \operatorname{diag}(1, \omega, \omega^2, \omega^3)$ (called the twiddle factor) and noting that $\omega^4 = -1$, we have

\[ F_8 \Pi = \frac{1}{\sqrt{2}} \begin{bmatrix} F_4 & T F_4 \\ F_4 & -T F_4 \end{bmatrix}. \tag{7} \]

The matrix $F_8$ is entirely built up of simple scalings of the smaller DFT matrix $F_4$! This suggests the following decomposition to compute $F_8 x$:

\[ F_8 x = (F_8 \Pi)(\Pi^\top x) = \frac{1}{\sqrt{2}} \begin{bmatrix} F_4 & T F_4 \\ F_4 & -T F_4 \end{bmatrix} \begin{bmatrix} x_{\mathrm{even}} \\ x_{\mathrm{odd}} \end{bmatrix} = \frac{1}{\sqrt{2}} \begin{bmatrix} F_4 x_{\mathrm{even}} + T F_4 x_{\mathrm{odd}} \\ F_4 x_{\mathrm{even}} - T F_4 x_{\mathrm{odd}} \end{bmatrix}. \tag{8} \]

Here $x_{\mathrm{even}}$ represents the even-indexed entries of $x$ and $x_{\mathrm{odd}}$ the odd-indexed entries. Thus, we see that we can evaluate $F_8 x$ by evaluating the two expressions $F_4 x_{\mathrm{even}}$ and $F_4 x_{\mathrm{odd}}$. We have broken our problem into two smaller problems, which we then recombine into a solution of our original problem.

How then do we compute the smaller DFTs $F_4 x_{\mathrm{even}}$ and $F_4 x_{\mathrm{odd}}$? We just use the same trick again, breaking, for example, the product $F_4 x_{\mathrm{even}}$ into further subcomputations $F_2 x_{\mathrm{even,even}}$ and $F_2 x_{\mathrm{even,odd}}$. Performing this process one more time, we need to evaluate expressions of the form $F_1 y$, which are simply given by $y$ since $F_1$ is a $1\times 1$ matrix whose single entry is $1$.

This procedure is an example of a recursive algorithm: we design an algorithm which solves a problem by breaking it down into one or more smaller problems, solves each of the smaller problems by using this same algorithm, and then reassembles the solutions of the smaller problems to solve our original problem. Eventually, we break our problems into such small pieces that they can be solved directly, which is referred to as the base case of our recursion. (In our case, the base case is multiplication by $F_1$.) Algorithms using recursion in this way are referred to as divide-and-conquer algorithms.

Let us summarize this recursive procedure we’ve developed. We want to compute the DFT $\hat{x} = F_n x$, where $n$ is a power of two. First, we use the FFT to recursively compute $F_{n/2} x_{\mathrm{even}}$ and $F_{n/2} x_{\mathrm{odd}}$. Next, we combine these computations to evaluate $F_n x$ by the formula

\[ F_n x = \frac{1}{\sqrt{2}} \begin{bmatrix} F_{n/2} x_{\mathrm{even}} + T F_{n/2} x_{\mathrm{odd}} \\ F_{n/2} x_{\mathrm{even}} - T F_{n/2} x_{\mathrm{odd}} \end{bmatrix}, \qquad T = \operatorname{diag}(1, \omega_n, \ldots, \omega_n^{n/2-1}). \tag{9} \]

This procedure is the famous fast Fourier transform (FFT), whose modern incarnation was presented by Cooley and Tukey in 1965 with lineage that can be traced back to work by Gauss in the early 1800s. There are many variants of the FFT using similar ideas.
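The whole recursion fits in a few lines of code. Here is a sketch of the radix-2 algorithm for power-of-two input lengths; for simplicity it uses the unnormalized convention matched by `np.fft.fft` (divide the result by $\sqrt{n}$ to recover the unitary convention used in this post):

```python
import numpy as np

def fft(x):
    """Recursive radix-2 FFT (unnormalized); len(x) must be a power of two."""
    n = len(x)
    if n == 1:                       # base case: F_1 is the 1x1 identity
        return x.astype(complex)
    even = fft(x[0::2])              # recursively transform even-indexed entries
    odd = fft(x[1::2])               # ... and odd-indexed entries
    twiddle = np.exp(-2j * np.pi * np.arange(n // 2) / n)
    return np.concatenate([even + twiddle * odd,
                           even - twiddle * odd])

x = np.random.default_rng(0).standard_normal(256)
assert np.allclose(fft(x), np.fft.fft(x))
```

Production FFT libraries use iterative, cache-friendly variants of this same recursion, but the arithmetic is identical.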

Let us see why the FFT is considered “fast” by analyzing its operation count. As is common for divide-and-conquer algorithms, the number of operations for computing $F_n x$ using the FFT can be determined by solving a certain recurrence relation. Let $c(n)$ be the number of operations required by the FFT on an input of length $n$. Then the cost of computing $F_n x$ consists of

• proportional-to-$n$ operations (or $O(n)$ operations, in computer science language5$O(\cdot)$ refers to big-O notation. Saying an algorithm takes $O(n)$ operations is stating that, more or less, the algorithm takes less than some multiple of $n$ operations to complete.) to:
• add, subtract, and scale vectors and
• multiply by the diagonal twiddle matrix $T$, and
• two recursive computations of $F_{n/2} y$, each of which requires $c(n/2)$ operations.

This gives us the recurrence relation

\[ c(n) = 2\,c(n/2) + O(n). \tag{10} \]

Solving recurrences is a delicate art in general, but a wide class of recurrences are immediately solved by the flexible master theorem for recurrences. Appealing to this result, we deduce that the FFT requires $c(n) = O(n\log n)$ operations. This is a dramatic improvement over the $O(n^2)$ operations needed to compute $F_n x$ directly using Eq. (1). This dramatic improvement in speed is what makes the FFT “fast”.

Extending the FFT Idea

The FFT is a brilliant algorithm. It exploits the structure of the discrete Fourier transform problem Eq. (1) to achieve dramatically lower operation counts. And, as we shall get a taste of, the FFT is useful in a surprisingly broad range of applications. Given the success of the FFT, we are naturally led to the question: can we learn from our success with the FFT to develop fast algorithms for other problems?

I think the FFT speaks to the power of a simple problem-solving strategy for numerical algorithm design6As mentioned earlier, the FFT also exemplifies a typical design pattern in general (not necessarily numerical) algorithm design, the divide-and-conquer strategy: find a clever way of dividing a problem into multiple subproblems, conquer (solve) each, and then recombine the solutions to the subproblems into a solution of the larger problem. The challenge with such algorithms is often finding a way of doing the recombination step, which usually relies on some clever insight. Other instances of divide-and-conquer algorithms include merge sort and Karatsuba’s integer multiplication algorithm.: whenever you have a linear transformation, write it as a matrix-vector product; whenever you have a matrix, write it down and see if there are any patterns.7Often, rearranging the matrix will be necessary to see any patterns. We often like to present mathematics in which each step of a derivation follows almost effortlessly from the last, proceeding from a firm basis of elegant mathematical intuition. Often, however, noticing patterns by staring at symbols on a page can be more effective than reasoning grounded in intuition. Once the pattern has been discovered, intuition and elegance sometimes will follow quickly behind.

The most natural generalization of the FFT is the fast inverse discrete Fourier transform, providing a fast algorithm to compute the inverse discrete Fourier transform Eq. (2). The inverse FFT is quite an easy generalization of the FFT presented in the previous section; it is a good exercise to see if you can mimic that development to come up with this generalization yourself. The FFT can also be generalized to other discrete trigonometric transforms and to 2D and 3D discrete Fourier transforms.

I want to consider a problem more tangentially related to the FFT: the evaluation of expressions of the form $(A\otimes B)x$, where $A$ is an $m\times m$ matrix, $B$ is an $n\times n$ matrix, $x$ is a vector of length $mn$, and $\otimes$ denotes the Kronecker product. For the uninitiated, the Kronecker product of $A$ and $B$ is an $mn\times mn$ matrix defined as the block matrix

\[ A \otimes B = \begin{bmatrix} a_{11} B & a_{12} B & \cdots & a_{1m} B \\ a_{21} B & a_{22} B & \cdots & a_{2m} B \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} B & a_{m2} B & \cdots & a_{mm} B \end{bmatrix}. \tag{11} \]

We could just form this matrix and compute the matrix-vector product $(A\otimes B)x$ directly, but this takes a hefty $O(m^2n^2)$ operations.8Equally or perhaps more problematically, this also takes $O(m^2n^2)$ space. We can do better.

The insight is much the same as with the FFT: scaled copies of the matrix $B$ are embedded in $A\otimes B$. In the FFT, we needed to rearrange the columns of the DFT matrix to see this; for the Kronecker product, this pattern is evident in the natural ordering. To exploit this fact, chunk the vectors $x$ and $y = (A\otimes B)x$ into $m$ pieces $x_1, \ldots, x_m$ and $y_1, \ldots, y_m$, each of length $n$, so that our matrix-vector product can be written as9This way of writing this expression is referred to as a conformal partitioning to indicate that one can multiply the block matrices using the ordinary matrix product formula, treating the block entries as if they were simple numbers.

\[ \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_m \end{bmatrix} = \begin{bmatrix} a_{11} B & a_{12} B & \cdots & a_{1m} B \\ a_{21} B & a_{22} B & \cdots & a_{2m} B \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} B & a_{m2} B & \cdots & a_{mm} B \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_m \end{bmatrix}. \tag{12} \]

To compute this product efficiently, we proceed in two steps. First, we compute the products $Bx_1, \ldots, Bx_m$, which takes $O(mn^2)$ operations in total. Next, we compute each component $y_i$ by using the formula

\[ y_i = a_{i1}(Bx_1) + a_{i2}(Bx_2) + \cdots + a_{im}(Bx_m), \tag{13} \]

which takes a total of $O(m^2 n)$ operations to compute all the $y_i$’s. This leads to a total operation count of $O(mn(m+n))$ for computing the matrix-vector product $(A\otimes B)x$, much better than our earlier operation count of $O(m^2n^2)$.10There is another way of interpreting this algorithm. If we interpret $x$ and $y$ as vectorizations of matrices $X$ and $Y$, then we have $Y = BXA^\top$. The algorithm we presented is equivalent to evaluating this matrix triple product in the order $Y = (BX)A^\top$. This shows that this algorithm could be further accelerated using Strassen-style fast matrix multiplication algorithms.
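In code, the two-step procedure amounts to reshaping $x$ into a matrix and multiplying on both sides (a sketch; with NumPy's row-major reshaping, the chunks $x_i$ become the rows of the reshaped matrix):

```python
import numpy as np

def kron_matvec(A, B, x):
    """Evaluate (A kron B) @ x without forming the Kronecker product.

    For m x m A and n x n B, this costs O(mn(m+n)) operations instead
    of the O(m^2 n^2) cost of forming A kron B and multiplying."""
    m = A.shape[0]
    n = B.shape[0]
    X = x.reshape(m, n)        # row i of X is the chunk x_i
    Y = X @ B.T                # row i of Y is B @ x_i      (first step)
    return (A @ Y).ravel()     # y_i = sum_j a_ij (B x_j)   (second step)

rng = np.random.default_rng(0)
A = rng.standard_normal((7, 7))
B = rng.standard_normal((5, 5))
x = rng.standard_normal(7 * 5)
assert np.allclose(kron_matvec(A, B, x), np.kron(A, B) @ x)
```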

While this idea might seem quite far from the FFT, if one applies this idea iteratively, one can use this approach to rapidly evaluate a close cousin of the DFT called the Hadamard-Walsh transform. Using the Kronecker product, the Hadamard-Walsh transform of a vector $x$ of length $n = 2^k$ is defined to be

\[ Hx = \underbrace{\left( H_2 \otimes H_2 \otimes \cdots \otimes H_2 \right)}_{k\text{ times}}\, x, \qquad H_2 = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}. \tag{14} \]

If one applies the Kronecker product trick we developed repeatedly, this gives an algorithm to evaluate the Hadamard-Walsh transform of a vector of length $n = 2^k$ in $O(n\log n)$ operations, just like the FFT.
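A sketch of this fast Hadamard-Walsh transform, here in its unnormalized form with $H_2$ having entries $\pm 1$ (multiply the result by $2^{-k/2}$ for the unitary scaling of (14)):

```python
import numpy as np

def fwht(x):
    """Fast (unnormalized) Hadamard-Walsh transform; len(x) a power of two."""
    x = np.array(x, dtype=float)
    n = len(x)
    h = 1
    while h < n:                       # one pass per Kronecker factor H_2
        for i in range(0, n, 2 * h):
            a = x[i:i + h].copy()
            b = x[i + h:i + 2 * h].copy()
            x[i:i + h] = a + b         # butterfly: (a, b) -> (a + b, a - b)
            x[i + h:i + 2 * h] = a - b
        h *= 2
    return x

# Check against the Kronecker-product definition for n = 8.
H2 = np.array([[1, 1], [1, -1]])
H8 = np.kron(H2, np.kron(H2, H2))
x = np.arange(8, dtype=float)
assert np.allclose(fwht(x), H8 @ x)
```

Each of the $k = \log_2 n$ passes costs $O(n)$ operations, giving the $O(n\log n)$ total; in fact, since no twiddle factors are needed, the transform uses only additions and subtractions.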

The Hadamard-Walsh transform can be thought of as a generalization of the discrete Fourier transform to Boolean functions, which play an integral role in computer science. The applications of the Hadamard-Walsh transform are numerous and varied, from voting systems to quantum computing. This is really just the tip of the iceberg. The ideas behind the FFT (and related ideas from the fast multipole method) allow for the rapid evaluation of a large number of transformations, some of which are connected by deep and general theories.

Resisting the temptation to delve into these interesting subjects in any more depth, we return to our main idea: when presented with a linear transformation, write it as a matrix-vector product; whenever you have a matrix, write it down and see if there are any patterns. The FFT exploits one such pattern, noticing that (after a reordering) a matrix contains many scaled copies of the same matrix. Rapid evaluation of expressions of the form $(A\otimes B)x$ involves an even simpler application of the same idea. But there are many other patterns that can be exploited: sparsity, (approximate) low rank, off-diagonal blocks approximately of low rank, and displacement structure are other examples. Very often in applied math, our problems have additional structure that can be exploited to solve problems much faster, and sometimes finding that structure is as easy as just trying to look for it.

An Application of the FFT

A discussion of the FFT would be incomplete without exploring at least one reason why you’d want to compute the discrete Fourier transform. To focus our attention, let us consider another linear algebraic calculation which appears on its face to have no relation to the FFT: computing a matrix-vector product with a Toeplitz matrix. A matrix $T$ is said to be Toeplitz if it is constant along each diagonal, so that it has the following structure:

\[ T = \begin{bmatrix} t_0 & t_{-1} & t_{-2} & \cdots & t_{-(n-1)} \\ t_1 & t_0 & t_{-1} & \cdots & t_{-(n-2)} \\ t_2 & t_1 & t_0 & \cdots & t_{-(n-3)} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ t_{n-1} & t_{n-2} & t_{n-3} & \cdots & t_0 \end{bmatrix}. \tag{15} \]

Toeplitz matrices and their relatives appear widely across applications of applied mathematics including control and systems theory, time series, numerical partial differential equations, and signal processing.

We seek to compute the matrix-vector product $Tx$. Let us begin by considering a special case of a Toeplitz matrix, a circulant matrix. A circulant matrix $C$ has the form

\[ C = \begin{bmatrix} c_0 & c_{n-1} & c_{n-2} & \cdots & c_1 \\ c_1 & c_0 & c_{n-1} & \cdots & c_2 \\ c_2 & c_1 & c_0 & \cdots & c_3 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ c_{n-1} & c_{n-2} & c_{n-3} & \cdots & c_0 \end{bmatrix}. \tag{16} \]

By direct computation, the matrix-vector product $y = Cx$ is given by

\[ y_i = \sum_{j=0}^{n-1} c_{(i-j) \bmod n}\, x_j. \tag{17} \]

A surprising and non-obvious fact is that the circulant matrix $C$ is diagonalized by the discrete Fourier transform. Specifically, we have $C = F_n^* \operatorname{diag}(\sqrt{n}\, F_n c)\, F_n$, where $c = (c_0, \ldots, c_{n-1})^\top$ is the first column of $C$. This gives a fast algorithm to compute $Cx$ in time $O(n\log n)$: compute the DFTs of $c$ and $x$, multiply them together entrywise, take the inverse discrete Fourier transform, and scale by $\sqrt{n}$.
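With NumPy's unnormalized FFT convention the scaling constants cancel, and the fast circulant multiply becomes a one-liner (a sketch, checked against direct multiplication):

```python
import numpy as np
from scipy.linalg import circulant

rng = np.random.default_rng(0)
n = 256
c = rng.standard_normal(n)   # first column of the circulant matrix C
x = rng.standard_normal(n)

# O(n log n): transform, multiply entrywise, transform back.
y_fast = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)).real

# O(n^2): form C explicitly and multiply.
y_direct = circulant(c) @ x
assert np.allclose(y_fast, y_direct)
```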

There is a connection with signal processing and differential equations that may help shed light on why this technique works for those familiar with those areas. In the signal processing context, the matrix-vector product $Cx$ can be interpreted as the discrete convolution of $c$ with $x$ (see Eq. (17)), which is a natural analog of the convolution of two functions $f$ and $g$ on the real line. It is an important fact that the Fourier transform of a convolution is the same as multiplication of the Fourier transforms: $\widehat{f * g} = \hat{f}\,\hat{g}$ (up to a possible normalizing constant).11A related identity also holds for the Laplace transform. The fact that the DFT diagonalizes a circulant matrix is just the analog of this fact for the discrete Fourier transform and the discrete convolution.

This fast algorithm for circulant matrix-vector products is already extremely useful. One can naturally reframe the problems of multiplying integers and polynomials as discrete convolutions, which can then be computed rapidly by applying the algorithm for fast circulant matrix-vector products. This video gives a great introduction to the FFT with this as its motivating application.

Let’s summarize where we’re at. We are interested in computing the Toeplitz matrix-vector product $Tx$. We don’t know how to do this fast for a general Toeplitz matrix yet, but we can do it for a special Toeplitz matrix called a circulant matrix $C$. By use of the FFT, we can compute the circulant matrix-vector product $Cx$ in $O(n\log n)$ operations.

We can now leverage what we’ve done with circulant matrices to accelerate Toeplitz matrix-vector products. The trick is very simple: embedding. We construct a big circulant matrix $C$ which contains the Toeplitz matrix $T$ as a sub-matrix and then use multiplications by the bigger matrix $C$ to compute multiplications by the smaller matrix $T$.

Consider the following circulant matrix, which contains $T$, as defined in Eq. (15), as a sub-matrix in its top-left corner:

\[ C = \begin{bmatrix} T & \ast \\ \ast & \ast \end{bmatrix} \quad \text{(where } \ast \text{ denotes entries whose values are immaterial to us)}. \tag{18} \]

This matrix is hard to write out in full, but essentially we pad the Toeplitz matrix with extra zeros to embed it into a circulant matrix. The “$c$” vector for this larger circulant matrix is obtained from the parameters of the Toeplitz matrix Eq. (15) by $c = (t_0, t_1, \ldots, t_{n-1}, 0, \ldots, 0, t_{-(n-1)}, \ldots, t_{-1})^\top$.

Here comes another clever observation: we can choose the number of padding zeros to make the size of $C$ exactly equal to a power of two. This is useful because it allows us to compute matrix-vector products with $C$ using the power-of-two FFT described above, which we know is fast.

Finally, let’s close the loop and use fast multiplications with to compute fast multiplications with . We wish to compute the product fast. To do this, we extend the vector into a larger vector by padding with zeros to get

(19)

where we use to denote matrix or vector entries which are immaterial to us. We compute by using our fast algorithm to compute and then discarding everything but the first entries of to obtain . If one carefully analyzes how much padding is needed to make this work, one sees that this algorithm also takes only operations. Thus, we’ve achieved our goal: we can compute Toeplitz matrix-vector products fast.
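The whole embedding pipeline can be sketched in a few lines of NumPy (a sketch of my own, not the post's code; the function name is illustrative). The first column of the circulant is the Toeplitz first column, then padding zeros, then the reversed tail of the Toeplitz first row, with the total size rounded up to a power of two.

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """Multiply the Toeplitz matrix with first column c and first row r
    (with r[0] == c[0]) by x, via embedding in a power-of-two circulant."""
    n = len(c)
    m = 1 << (2 * n - 2).bit_length()          # power of two with m >= 2n - 1
    # circulant first column: [c, padding zeros, r reversed without r[0]]
    v = np.concatenate([c, np.zeros(m - (2 * n - 1)), r[:0:-1]])
    xp = np.concatenate([x, np.zeros(m - n)])  # zero-pad the input vector
    y = np.fft.ifft(np.fft.fft(v) * np.fft.fft(xp))
    return y[:n].real                          # keep only the first n entries

# check against an explicitly formed dense Toeplitz matrix
rng = np.random.default_rng(1)
n = 5
c, x = rng.standard_normal(n), rng.standard_normal(n)
r = np.concatenate([c[:1], rng.standard_normal(n - 1)])
T = np.array([[c[i - j] if i >= j else r[j - i] for j in range(n)]
              for i in range(n)])
assert np.allclose(T @ x, toeplitz_matvec(c, r, x))
```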

Finally, let us bring this full circle and see a delightfully self-referential use of this algorithm: we can use the FFT-accelerated fast Toeplitz matrix-vector multiply to compute the DFT itself. Recall that the FFT algorithm we presented above was particularized to which were powers of . There are natural generalizations of the FFT along the lines of what we did above to more general which are highly composite and possess many small prime factors. But what if we want to evaluate the DFT for which is a large prime?

Recall that the DFT matrix has th entry . We now employ a clever trick. Let be a diagonal matrix with the th entry equal to . Then, defining , we have that , which means is a Toeplitz matrix! (Writing out the matrix entrywise may be helpful to see this.)

Thus, we can compute the DFT for any size by evaluating the DFT as , where the product is computed using the fast Toeplitz matrix-vector product. Since our fast Toeplitz matrix-vector product only requires us to evaluate power-of-two DFTs, this technique allows us to evaluate DFTs of arbitrary size in only operations.
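Here is a sketch of this trick in NumPy (my own illustrative code, not the post's; this construction is commonly known as Bluestein's algorithm). The diagonal "chirp" matrix and the symmetric Toeplitz matrix are built from the quadratic phases, and the Toeplitz product is evaluated through a power-of-two circulant embedding.

```python
import numpy as np

def prime_size_dft(x):
    """Evaluate the DFT of x for any length n (e.g. a large prime) using only
    power-of-two FFTs, via the factorization F = D T D with T Toeplitz.
    (For very large n one should reduce k**2 modulo 2n for precision.)"""
    n = len(x)
    k = np.arange(n)
    d = np.exp(-1j * np.pi * k**2 / n)   # diagonal chirp D
    t = np.exp(1j * np.pi * k**2 / n)    # T is symmetric Toeplitz: T[j,k] = t[|j-k|]
    m = 1 << (2 * n - 2).bit_length()    # power-of-two circulant size, m >= 2n - 1
    v = np.concatenate([t, np.zeros(m - (2 * n - 1)), t[:0:-1]])
    xp = np.concatenate([d * x, np.zeros(m - n)])
    y = np.fft.ifft(np.fft.fft(v) * np.fft.fft(xp))[:n]  # y = T (D x)
    return d * y                                          # D T D x

x = np.random.default_rng(2).standard_normal(7)   # n = 7 is prime
assert np.allclose(prime_size_dft(x), np.fft.fft(x))
```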

Upshot: The discrete Fourier transform (DFT) is an important computation which occurs all across applied mathematics. The fast Fourier transform (FFT) reduces the operation count of evaluating the DFT of a vector of length to proportional to , down from proportional to for direct evaluation. The FFT is an example of a broader matrix algorithm design strategy of looking for patterns in the numbers in a matrix and exploiting these patterns to reduce computation. The FFT can often have surprising applications, such as allowing for rapid computations with Toeplitz matrices.

Big Ideas in Applied Math: Galerkin Approximation

My first experience with the numerical solution of partial differential equations (PDEs) was with finite difference methods. I found finite difference methods to be somewhat fiddly: it is quite an exercise in patience to, for example, work out the appropriate fifth-order finite difference approximation to a second order differential operator on an irregularly spaced grid and even more of a pain to prove that the scheme is convergent. I found that I liked the finite element method a lot better1Finite element methods certainly have their own fiddly-nesses (as anyone who has worked with a serious finite element code can no doubt attest to). as there was a unifying underlying functional analytic theory, Galerkin approximation, which showed how, in a sense, the finite element method computed the best possible approximate solution to the PDE among a family of potential solutions. However, I came to feel later that Galerkin approximation was, in a sense, the more fundamental concept, with the finite element method being one particular instantiation (with spectral methods, boundary element methods, and the conjugate gradient method being others). In this post, I hope to give a general introduction to Galerkin approximation as computing the best possible approximate solution to a problem within a certain finite-dimensional space of possibilities.

Systems of Linear Equations

Let us begin with a linear algebraic example, which is unburdened by some of the technicalities of partial differential equations. Suppose we want to solve a very large system of linear equations , where the matrix is symmetric and positive definite (SPD). Suppose that is where is so large that we don’t even want to store all components of the solution on our computer. What can we possibly do?

One solution is to consider only solutions lying in a subspace of the set of all possible solutions . If this subspace has a basis , then the solution can be represented as and one only has to store the numbers . In general, will not belong to the subspace and we must settle for an approximate solution .

The next step is to convert the system of linear equations into a form which is more amenable to approximate solution on a subspace . Note that the equation encodes different linear equations where is the th row of and is the th element of . Note that the th equation is equivalent to the condition , where is the vector with zeros in all entries except for the th entry which is a one. More generally, by multiplying the equation by an arbitrary test row vector , we get for all . We refer to this as a variational formulation of the linear system of equations . In fact, one can easily show that the variational problem is equivalent to the system of linear equations:

(1)

Since we are seeking an approximate solution from the subspace , it is only natural that we also restrict our test vectors to lie in the subspace . Thus, we seek an approximate solution to the system of equations as the solution of the variational problem

(2)

One can relatively easily show this problem possesses a unique solution .2Here is a linear algebraic proof. As we shall see below, the same conclusion will also follow from the general Lax-Milgram theorem. Let be a matrix whose columns form a basis for . Then every can be written as for some . Thus, writing , we have that for every . But this is just a variational formulation of the equation . The matrix is SPD since for since is SPD. Thus has a unique solution . Thus is the unique solution to the variational problem Eq. (2). In what sense is a good approximate solution for ? To answer this question, we need to introduce a special way of measuring the error of an approximate solution to . We define the -inner product of a vector and to be and the associated -norm .3Note that the -norm can be seen as a weighted Euclidean norm, where the components of the vector in the direction of the eigenvectors of are scaled by their corresponding eigenvalues. Concretely, if where is an eigenvector of with eigenvalue (), then we have . All of the properties satisfied by the familiar Euclidean inner product and norm carry over to the new -inner product and norm (e.g., the Pythagorean theorem). Indeed, for those familiar, one can show satisfies all the axioms for an inner product space.

We shall now show that the error between and its Galerkin approximation is -orthogonal to the space in the sense that for all . This follows from the straightforward calculation, for ,

(3)

where since solves the variational problem Eq. (1) and since solves the variational problem Eq. (2).

The fact that the error is -orthogonal to can be used to show that is, in a sense, the best approximate solution to in the subspace . First note that, for any approximate solution to , the vector is -orthogonal to . Thus, by the Pythagorean theorem,

(4)

Thus, the Galerkin approximation is the best approximate solution to in the subspace with respect to the -norm, for every . Consequently, if one picks a subspace which the solution almost lies in,4In the sense that is small. then will be a good approximate solution to , irrespective of the size of the subspace .
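Both defining properties of the Galerkin approximation are easy to verify numerically. The following NumPy sketch (my own illustration, with made-up names and sizes) builds an SPD system, computes the Galerkin approximation on a random subspace, and checks Galerkin orthogonality and optimality in the A-norm.

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 50, 5
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)        # symmetric positive definite matrix
b = rng.standard_normal(n)
x = np.linalg.solve(A, b)          # true solution (affordable here since n is small)

V = np.linalg.qr(rng.standard_normal((n, k)))[0]   # basis for a k-dim subspace
y = np.linalg.solve(V.T @ A @ V, V.T @ b)          # reduced k-by-k Galerkin system
x_hat = V @ y                                      # Galerkin approximation

# Galerkin orthogonality: the error x - x_hat is A-orthogonal to the subspace
assert np.allclose(V.T @ (A @ (x - x_hat)), 0)

# optimality: x_hat beats any other candidate from the subspace in the A-norm
err = lambda z: np.sqrt((x - z) @ A @ (x - z))
w = V @ rng.standard_normal(k)
assert err(x_hat) <= err(w)
```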

Variational Formulations of Differential Equations

As I hope I’ve conveyed in the previous section, Galerkin approximation is not a technique that only works for finite element methods or even just PDEs. However, differential and integral equations are one of the most important applications of Galerkin approximation since the space of all possible solutions to a differential or integral equation is infinite-dimensional: approximation in a finite-dimensional space is absolutely critical. In this section, I want to give a brief introduction to how one can develop variational formulations of differential equations amenable to Galerkin approximation. For simplicity of presentation, I shall focus on a one-dimensional problem which is described by an ordinary differential equation (ODE) boundary value problem. All of this generalizes wholesale to partial differential equations in multiple dimensions, though there are some additional technical and notational difficulties (some of which I will address in footnotes). Variational formulation of differential equations is a topic with important technical subtleties which I will end up brushing past. Rigorous references are Chapters 5 and 6 from Evans’ Partial Differential Equations or Chapters 0-2 from Brenner and Scott’s The Mathematical Theory of Finite Element Methods.

As our model problem for which we seek a variational formulation, we will focus on the one-dimensional Poisson equation, which appears in the study of electrostatics, gravitation, diffusion, heat flow, and fluid mechanics. The unknown is a real-valued function on an interval, which we take to be .5In higher dimensions, one can consider an arbitrary domain with, for example, a Lipschitz boundary. We assume Dirichlet boundary conditions: is equal to zero on the boundary .6In higher dimensions, one has on the boundary of the region . Poisson’s equation then reads7 on and for higher dimensions, where is the Laplacian operator.

(5)

We wish to develop a variational formulation of this differential equation, similar to how we develop a variational formulation of the linear system of equations in the previous section. To develop our variational formulation, we take inspiration from physics. If represents, say, the temperature at a point , we are never able to measure exactly. Rather, we can measure the temperature in a region around with a thermometer. No matter how carefully we engineer our thermometer, our thermometer tip will have some volume occupying a region in space. The temperature measured by our thermometer will be the average temperature in the region or, more generally, a weighted average where is a weighting function which is zero outside the region . Now let’s use our thermometer to “measure” our differential equation:

(6)

This integral expression is some kind of variational formulation of our differential equation, as it is an equation involving the solution to our differential equation which must hold for every averaging function . (The precise meaning of every will be forthcoming.) It will benefit us greatly to make this expression more “symmetric” with respect to and . To do this, we shall integrate by parts:8Integrating by parts is harder in higher dimensions. My personal advice for integrating by parts in higher dimensions is to remember that integration by parts is ultimately a result of the product rule. As such, to integrate by parts, we first write an expression involving our integrand using the product rule of some differential operator and then integrate both sides. In this case, notice that . Rearranging and integrating, we see that . We then apply the divergence theorem to the last term to get , where represents an outward facing unit normal to the boundary and represents integration on the surface . If is zero on , we conclude for all nice functions on satisfying on .

(7)

In particular, if is zero on the boundary , then the second two terms vanish and we’re left with the variational equation

(8)

Compare the variational formulation of the Poisson equation Eq. (8) to the variational formulation of the system of linear equations in Eq. (1). The solution vector in the differential equation context is a function satisfying the boundary condition of being zero on the boundary . The right-hand side is replaced by a function on the interval . The test vector is replaced by a test function on the interval . The matrix product expression is replaced by the integral . The product is replaced by the integral . As we shall soon see, there is a unifying theory which treats both of these contexts simultaneously.

Before this unifying theory, we must address the question of which functions we will consider in our variational formulation. One can show that all of the calculations we did in this section hold if is a continuously differentiable function on which is zero away from the endpoints and and is a twice continuously differentiable function on . Because of technical functional analytic considerations, we shall actually want to expand the class of functions in our variational formulation to even more functions . Specifically, we shall consider all functions which are (A) square-integrable ( is finite), (B) possess a square integrable derivative9More specifically, we only insist that possess a square-integrable weak derivative. ( is finite), and (C) are zero on the boundary. We refer to this class of functions as the Sobolev space .10The class of functions satisfying (A) and (B) but not necessarily (C) is the Sobolev space . For an arbitrary function in , the existence of a well-defined restriction to the boundary and is actually nontrivial to show, requiring showing the existence of a trace operator. Chapter 5 of Evan’s Partial Differential Equations is a good introduction to Sobolev spaces. The Sobolev spaces and naturally extend to spaces and for an arbitrary domain with a nice boundary.

Now this is where things get really strange. Note that it is possible for a function to satisfy the variational formulation Eq. (8) but for not to satisfy the Poisson equation Eq. (5). A simple example is when possesses a discontinuity (say, for example, a step discontinuity where is and then jumps to ). Then no continuously differentiable will satisfy Eq. (5) at every point in and yet a solution to the variational problem Eq. (8) exists! The variational formulation actually allows us to give a reasonable definition of “solving the differential equation” when a classical solution to does not exist. Our only requirement for the variational problem is that , itself, belongs to the space . A solution to the variational problem Eq. (8) is called a weak solution to the differential equation Eq. (5) because, as we have argued, a weak solution to Eq. (8) need not always solve Eq. (5).11One can show that any classical solution to Eq. (5) solves Eq. (8). Given certain conditions on , one can go the other way, showing that weak solutions are indeed bona fide classical solutions. This is the subject of regularity theory.

The Lax-Milgram Theorem

Let us now build up an abstract language which allows us to use Galerkin approximation both for linear systems of equations and PDEs (as well as other contexts). If one compares the expressions from the linear systems context and from the differential equation context, one recognizes that both these expressions are so-called bilinear forms: they depend on two arguments ( and or and ) and are a linear transformation in each argument independently if the other one is fixed. For example, if one defines one has . Similarly, if one defines , then .

Implicitly swimming in the background is some space of vectors or functions which this bilinear form is defined upon. In the linear system of equations context, this space is the space of -dimensional vectors, and in the differential equation context, this space is as defined in the previous section.12The connection between vectors and functions is even more natural if one considers a function as a vector of infinite length, with one entry for each real number . Call this space . We shall assume that is a special type of linear space called a Hilbert space, an inner product space (with inner product ) where every Cauchy sequence converges to an element in (in the inner product-induced norm).13Note that every inner product space has a unique completion to a Hilbert space. For example, if one considers the space of smooth functions which are zero away from the boundary of with the inner product , the completion is . A natural extension to higher dimensions holds. The Cauchy sequence convergence property, also known as metric completeness, is important because we shall often deal with a sequence of entries which we will need to establish convergence to a vector . (Think of as a sequence of Galerkin approximations to a solution .)

With these formalities, an abstract variational problem takes the form

(9)

where is a bilinear form on and is a linear form on (a linear map ). There is a beautiful and general theorem called the Lax-Milgram theorem which establishes existence and uniqueness of solutions to a problem like Eq. (9).

Theorem (Lax-Milgram): Let and satisfy the following properties:

1. (Boundedness of ) There exists a constant such that for every , .
2. (Coercivity) There exists a positive constant such that for every .
3. (Boundedness of ) There exists a constant such that for every .

Then the variational problem Eq. (9) possesses a unique solution.

For our cases, will also be symmetric for all . While the Lax-Milgram theorem holds without symmetry, let us continue our discussion with this additional symmetry assumption. Note that, taken together, properties (1-2) say that the -inner product, defined as , is no more than so much bigger or smaller than the standard inner product of and .14That is, one has that the -norm and the -norm are equivalent in the sense that . so the norms and define the same topology.

Let us now see how the Lax-Milgram theorem can apply to our two examples. For a reader who wants a more “big picture” perspective, they can comfortably skip to the next section. For those who want to see Lax-Milgram in action, see the discussion below.

Applying the Lax-Milgram Theorem
Begin with the linear system of equations with with inner product , , and . Note that we have the inequality .15This is an easy consequence of the Courant-Fischer theorem. More generally, note that, since is symmetric, has an orthonormal basis of eigenvectors with eigenvalues . Then and . The inequalities follow from noting the Parseval relation and noting that is a convex combination of the eigenvalues of . In particular, we have that . Property (1) then follows from the Cauchy-Schwarz inequality applied to the -inner product: . Property (2) is simply the established inequality . Property (3) also follows from the Cauchy-Schwarz inequality: . Thus, by Lax-Milgram, the variational problem for has a unique solution . Note that the linear systems example shows why the coercivity property (2) is necessary. If is positive semi-definite but not positive-definite, then there exists an eigenvector of with eigenvalue . Then for any positive constant and is singular, so the variational formulation of has no solution for some choices of the vector .

Applying the Lax-Milgram theorem to differential equations can require powerful inequalities. In this case, the -inner product is given by , , and . Condition (1) follows from an application of the Cauchy-Schwarz inequality for integrals:16 In higher dimensions, we need even more Cauchy-Schwarz! First, we note that the absolute value of an integral is less than the integral of the absolute value . Second, we apply the Cauchy-Schwarz inequality for the vectors and to get where, e.g., is the Euclidean norm of the vector . This gives . Next, we apply the Cauchy-Schwarz inequality for integrals to get . Finally, we note that and thus obtain . This is the desired inequality with constant one.

(10)

Let’s go line-by-line. First, we note that the absolute value of an integral is less than the integral of the absolute value. Next, we apply the Cauchy-Schwarz inequality for integrals. Finally, we note that . This establishes Property (1) with constant . As we can already see one third of the way into verifying the hypotheses of Lax-Milgram, establishing these inequalities can require several steps. Ultimately, however, strong knowledge of just a few core inequalities (e.g. Cauchy-Schwarz) may be all that’s needed.

Proving coercivity (Property (2)) actually requires a very special inequality, Poincaré’s inequality.17In higher dimensions, Poincaré’s inequality takes the form for a constant depending only on the domain . In its simplest incarnation, the inequality states that there exists a constant such that, for all functions ,18A simple proof of Poincaré’s inequality for continuously differentiable with goes as follows. Note that, by the fundamental theorem of calculus, . Applying the Cauchy-Schwarz inequality for integrals gives since . Thus, for all , integrating over gives . This proves Poincaré’s inequality with the constant .

(11)

With this inequality in tow, property (2) follows after another lengthy string of inequalities:19The same estimate holds in higher dimensions, with the appropriate generalization of Poincaré’s inequality.

(12)

For Property (3) to hold, the function must be square-integrable. With this hypothesis, Property (3) is much easier than Properties (1-2) and we leave it as an exercise for the interested reader (or to a footnote20The proof is similar in one dimension or higher dimensions, so we state it for arbitrary domain for brevity. By Cauchy-Schwarz, we have that . for the uninterested reader).

This may seem like a lot of work, but the result we have achieved is stunning. We have proven (modulo a lot of omitted details) that the Poisson equation has a unique weak solution as long as is square-integrable!21And in the footnotes, we have upgraded this proof to existence of a unique weak solution to the Poisson equation on a domain . What is remarkable about this proof is that it uses the Lax-Milgram theorem and some inequalities alone: no specialized knowledge about the physics underlying the Poisson equation was necessary. Going through the details of Lax-Milgram has been a somewhat lengthy affair for an introductory post, but hopefully this discussion has illuminated the power of functional analytic tools (like Lax-Milgram) in studying differential equations. Now, with a healthy dose of theory in hand, let us return to Galerkin approximation.
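As a quick numerical sanity check on the key ingredient above, the sketch below (my own, not from the post) verifies Poincaré's inequality on the unit interval for random smooth functions vanishing at the endpoints. The footnote's proof gives constant one; in fact the optimal constant on this interval is 1/π, attained by sin(πx), so the inequality holds with plenty of room to spare.

```python
import numpy as np

x = np.linspace(0, 1, 10_001)
rng = np.random.default_rng(4)

for _ in range(5):
    # random smooth function vanishing at both endpoints of (0, 1)
    u = sum(rng.standard_normal() * np.sin(k * np.pi * x) for k in range(1, 6))
    du = np.gradient(u, x)                      # numerical derivative u'
    # mean over the uniform grid approximates the integral over (0, 1)
    norm_u = np.sqrt(np.mean(u**2))             # L2 norm of u
    norm_du = np.sqrt(np.mean(du**2))           # L2 norm of u'
    assert norm_u <= norm_du                    # Poincare with constant 1
```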

General Galerkin Approximation

With our general theory set up, Galerkin approximation for a general variational problem is the same as it was for a system of linear equations. First, we pick an approximation space which is a subspace of . We then have the Galerkin variational problem

(13)

Provided and satisfy the conditions of the Lax-Milgram theorem, there is a unique solution to the problem Eq. (13). Moreover, the special property of Galerkin approximation holds: the error is -orthogonal to the subspace . Consequently, is the best approximate solution to the variational problem Eq. (9) in the -norm. To see the -orthogonality, we have that, for any ,

(14)

where we use the variational equation Eq. (9) for and Eq. (13) for . Note the similarities with Eq. (3). Thus, using the Pythagorean theorem for the -norm, for any other approximation solution , we have22Compare Eq. (4).

(15)

Put simply, is the best approximation to in the -norm.23Using the fact the norms and are equivalent in virtue of Properties (1-2), one can also show that is within a constant factor of the best approximation in the norm . This is known as Céa’s Lemma.

Galerkin approximation is powerful because it allows us to approximate an infinite-dimensional problem by a finite-dimensional one. If we let be a basis for the space , then the approximate solution can be represented as . Since form a basis of , to check that the Galerkin variational problem Eq. (13) holds for all it is sufficient to check that it holds for .24For an arbitrary can be written as , so . Thus, plugging in and into Eq. (13), we get (using bilinearity of )

(16)

If we define and , then this gives us a matrix equation for the unknowns parametrizing . Thus, we can compute our Galerkin approximation by solving a linear system of equations.
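The whole pipeline — pick a basis, assemble the matrix and right-hand side, solve a linear system — can be sketched for our model Poisson problem with piecewise linear "hat" functions on a uniform grid (my own illustrative code, not the post's). For the load f = 1, the exact solution is u(x) = x(1−x)/2, and a pleasant quirk of linear elements in one dimension is that the Galerkin solution is exact at the grid points.

```python
import numpy as np

# Galerkin / finite element solution of -u'' = 1 on (0, 1), u(0) = u(1) = 0,
# using piecewise linear hat basis functions on a uniform grid.
N = 99                                 # number of interior grid points
h = 1.0 / (N + 1)
x = np.linspace(h, 1 - h, N)           # interior nodes

# stiffness matrix: entries are integrals of products of basis derivatives,
# giving the familiar tridiagonal (-1, 2, -1)/h matrix
K = (np.diag(2 * np.ones(N)) + np.diag(-np.ones(N - 1), 1)
     + np.diag(-np.ones(N - 1), -1)) / h
b = h * np.ones(N)                     # load vector: integral of f * phi_i for f = 1

u = np.linalg.solve(K, b)              # coefficients = nodal values of the approximation
u_exact = x * (1 - x) / 2              # exact solution of -u'' = 1 with zero BCs
assert np.allclose(u, u_exact)         # exact at the nodes for this 1D problem
```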

We’ve covered a lot of ground so let’s summarize. Galerkin approximation is a technique which allows us to approximately solve a large- or infinite-dimensional problem by searching for an approximate solution in a smaller finite-dimensional space of our choosing. This Galerkin approximation is the best approximate solution to our original problem in the -norm. By choosing a basis for our approximation space , we reduce the problem of computing a Galerkin approximation to a linear system of equations.

Design of a Galerkin approximation scheme for a variational problem thus boils down to choosing the approximation space and a basis . Picking to be a space of piecewise polynomial functions (splines) gives the finite element method. Picking to be a space spanned by a collection of trigonometric functions gives a Fourier spectral method. One can use a space spanned by wavelets as well. The Galerkin framework is extremely general: give it a subspace and it will give you a linear system of equations whose solution is the best approximate solution in .

Two design considerations factor into the choice of space and basis . First, one wants to pick a space which the solution almost lies in. This is the rationale behind spectral methods. Smooth functions are very well-approximated by short truncated Fourier expansions, so, if the solution is smooth, spectral methods will converge very quickly. Finite element methods, which often use low-order piecewise polynomial functions, converge much more slowly to a smooth . The second design consideration is the ease of solving the linear system resulting from the Galerkin approximation. If the basis functions are local in the sense that most pairs of basis functions and aren’t nonzero at the same point (more formally, and have disjoint supports for most and ), the system will be sparse and thus usually much easier to solve. Traditional spectral methods usually result in harder-to-solve dense linear systems of equations.25There are clever ways of making spectral methods which lead to sparse matrices. Conversely, if one uses high-order piecewise polynomials in a finite element approximation, one can get convergence properties similar to a spectral method. These are called spectral element methods. It should be noted that both spectral and finite element methods lead to ill-conditioned matrices , making integral equation-based approaches preferable if one needs high accuracy.26For example, only one researcher using a finite-element method was able to meet Trefethen’s challenge to solve the Poisson equation to eight digits of accuracy on an L-shaped domain (see Section 6 of this paper). Getting that solution required using a finite-element method of order 15! Integral equations, themselves, are often solved using Galerkin approximation, leading to so-called boundary element methods.

Upshot: Galerkin approximation is a powerful and extremely flexible methodology for approximately solving large- or infinite-dimensional problems by finding the best approximate solution in a smaller finite-dimensional subspace. To use a Galerkin approximation, one must convert their problem to a variational formulation and pick a basis for the approximation space. After doing this, computing the Galerkin approximation reduces down to solving a system of linear equations with dimension equal to the dimension of the approximation space.

Big Ideas in Applied Math: Sparse Matrices

Sparse matrices are an indispensable tool for anyone in computational science. I expect there are a very large number of simulation programs written in scientific research across the country which could be made at least ten to a hundred times faster just by using sparse matrices! In this post, we’ll give a brief overview of what sparse matrices are and how we can use them to solve problems fast.

A matrix is sparse if most of its entries are zero. There is no precise threshold for what “most” means; Kolda suggests that a matrix have at least 90% of its entries be zero for it to be considered sparse. The number of nonzero entries in a sparse matrix is denoted by . A matrix that is not sparse is said to be dense.

Sparse matrices are truly everywhere. They occur in finite difference, finite element, and finite volume discretizations of partial differential equations. They occur in power systems. They occur in signal processing. They occur in social networks. They occur in intermediate stages in computations with dense rank-structured matrices. They occur in data analysis (along with their higher-order tensor cousins).

Why are sparse matrices so common? In a word, locality. If the th entry of a matrix is nonzero, then this means that row and column are related in some way to each other according to the matrix . In many situations, a “thing” is only related to a handful of other “things”; in heat diffusion, for example, the temperature at a point may only depend on the temperatures of nearby points. Thus, if such a locality assumption holds, every row will only have a small number of nonzero entries and the matrix overall will be sparse.

Storing and Multiplying Sparse Matrices

A sparse matrix can be stored efficiently by only storing its nonzero entries, along with the row and column in which these entries occur. By doing this, a sparse matrix can be stored in space rather than the standard for an matrix .1Here, refers to big-O notation. For the efficiency of many algorithms, it will be very beneficial to store the entries row-by-row or column-by-column using compressed sparse row and column (CSR and CSC) formats; most established scientific programming software environments support sparse matrices stored in one or both of these formats. For efficiency, it is best to enumerate all of the nonzero entries for the entire sparse matrix and then form the sparse matrix using a compressed format all at once. Adding additional entries one at a time to a sparse matrix in a compressed format requires reshuffling the entire data structure for each new nonzero entry.

There exist straightforward algorithms to multiply a sparse matrix stored in a compressed format with a vector to compute the product . Initialize the vector to zero and iterate over the nonzero entries of , each time adding to . It is easy to see this algorithm runs in time.2More precisely, this algorithm takes time since it requires operations to initialize the vector even if has no nonzero entries. We shall ignore this subtlety in the remainder of this article and assume that , which is true of most sparse matrices occurring in practice. The fact that sparse matrix-vector products can be computed quickly makes so-called Krylov subspace iterative methods popular for solving linear algebraic problems involving sparse matrices, as these techniques only interact with the matrix by computing matrix-vector products (or matrix-transpose-vector products ).
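To make the storage format and the matvec algorithm concrete, here is a bare-bones NumPy sketch (my own, with illustrative function names; real codes should use an established sparse library). It converts entry triplets to CSR all at once, as recommended above, and performs the matrix-vector product touching each nonzero exactly once.

```python
import numpy as np

def coo_to_csr(n, rows, cols, vals):
    """Convert COO triplets of an n-by-n sparse matrix to CSR arrays, all at once."""
    order = np.lexsort((cols, rows))                  # sort entries row-by-row
    rows, cols, vals = rows[order], cols[order], vals[order]
    indptr = np.searchsorted(rows, np.arange(n + 1))  # row i lives in indptr[i]:indptr[i+1]
    return indptr, cols, vals

def csr_matvec(indptr, col_idx, val, x):
    """Compute y = A x in O(nnz) time for A stored in CSR format."""
    y = np.zeros(len(indptr) - 1)
    for i in range(len(y)):                           # loop over rows...
        for p in range(indptr[i], indptr[i + 1]):     # ...and that row's nonzeros
            y[i] += val[p] * x[col_idx[p]]
    return y

# a small tridiagonal second-difference matrix, stored sparsely
n = 6
rows = np.concatenate([np.arange(n), np.arange(n - 1), np.arange(1, n)])
cols = np.concatenate([np.arange(n), np.arange(1, n), np.arange(n - 1)])
vals = np.concatenate([2 * np.ones(n), -np.ones(n - 1), -np.ones(n - 1)])
indptr, col_idx, val = coo_to_csr(n, rows, cols, vals)

A = np.zeros((n, n)); A[rows, cols] = vals            # dense copy for checking
x = np.arange(n, dtype=float)
assert np.allclose(csr_matvec(indptr, col_idx, val, x), A @ x)
```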

Lest the reader think that every operation with a sparse matrix is necessarily fast, the product of two sparse matrices and need not be sparse and the time complexity need not be . A counterexample is

(1)

for . We have that but

(2)

which has nonzero elements and requires operations to compute. However, if one does the multiplication in the other order, one has and the multiplication can be done in operations. Thus, some sparse matrices can be multiplied fast and others can’t. This phenomenon of different speeds for different sparse matrices is very much also true for solving sparse linear systems of equations.
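One concrete instance of this phenomenon (my own illustration, which may differ in details from the matrices in Eqs. (1)-(2)) uses two rank-one matrices, each with n nonzeros: in one order their product is fully dense, while in the other order it has a single nonzero entry. Dense arrays are used here only to make counting nonzeros easy.

```python
import numpy as np

n = 100
e1 = np.zeros(n); e1[0] = 1.0
ones = np.ones(n)
A = np.outer(ones, e1)            # first column all ones: n nonzeros
B = np.outer(e1, ones)            # first row all ones: n nonzeros

assert np.count_nonzero(A) == n and np.count_nonzero(B) == n
assert np.count_nonzero(A @ B) == n * n   # dense product: Theta(n^2) work
assert np.count_nonzero(B @ A) == 1       # the other order stays sparse
```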

Solving Sparse Linear Systems

The question of how to solve a sparse system of linear equations where is sparse is a very deep problem with fascinating connections to graph theory. For this article, we shall concern ourselves with so-called sparse direct methods, which solve by means of computing a factorization of the sparse matrix . These methods produce an exact solution to the system if all computations are performed exactly and are generally considered more robust than inexact and iterative methods. As we shall see, there are fundamental limits on the speed of certain sparse direct methods, which make iterative methods very appealing for some problems.

Note from the outset that our presentation will focus on illustrating the big ideas rather than presenting the careful step-by-step details needed to actually code a sparse direct method yourself. An excellent reference for the latter is Tim Davis' wonderful book Direct Methods for Sparse Linear Systems.

Let us begin by reviewing how $LU$ factorization works for general matrices. Suppose that the $(1,1)$ entry of $A$ is nonzero. Then, $LU$ factorization proceeds by subtracting scaled multiples of the first row from the other rows to zero out the first column. If one keeps track of these scalings, then one can write this process as a matrix factorization, which we may demonstrate pictorially as

(3)   $\begin{bmatrix} * & * & * & * \\ * & * & * & * \\ * & * & * & * \\ * & * & * & * \end{bmatrix} = \begin{bmatrix} 1 & & & \\ * & 1 & & \\ * & & 1 & \\ * & & & 1 \end{bmatrix} \begin{bmatrix} * & * & * & * \\ & * & * & * \\ & * & * & * \\ & * & * & * \end{bmatrix}$

Here, $*$'s denote nonzero entries and blanks denote zero entries. We then repeat the process on the submatrix in the bottom right (the so-called Schur complement). Continuing in this way, we eventually end up with a complete factorization

(4)   $A = LU = \begin{bmatrix} 1 & & & \\ * & 1 & & \\ * & * & 1 & \\ * & * & * & 1 \end{bmatrix} \begin{bmatrix} * & * & * & * \\ & * & * & * \\ & & * & * \\ & & & * \end{bmatrix}$

In the case that $A$ is symmetric positive definite (SPD), one has that $U = DL^\top$ for a diagonal matrix $D$ consisting of the entries on $U$'s diagonal, so that $A = LDL^\top$. This factorization is a Cholesky factorization of $A$.3Often, the Cholesky factorization is written as $A = \widetilde{L}\widetilde{L}^\top$ for $\widetilde{L} = LD^{1/2}$ or $A = R^\top R$ for $R = D^{1/2}L^\top$. These different forms all contain the same basic information, so we shall stick with the $A = LDL^\top$ formulation in this post. For general non-SPD matrices, one needs to incorporate partial pivoting for Gaussian elimination to produce accurate results.4See the excellent monograph Accuracy and Stability of Numerical Algorithms for a comprehensive treatment of this topic.
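The elimination process above can be sketched as follows. This is a toy dense implementation without pivoting, assuming all pivots are nonzero; in practice one should use a library routine.

```python
import numpy as np

def lu_no_pivot(A):
    """Gaussian elimination without pivoting: returns unit lower-triangular L
    and upper-triangular U with A = L U (assumes nonzero pivots)."""
    A = A.astype(float).copy()
    n = A.shape[0]
    L = np.eye(n)
    for k in range(n - 1):
        # record the multipliers used to zero out column k ...
        L[k+1:, k] = A[k+1:, k] / A[k, k]
        # ... and update the trailing Schur complement
        A[k+1:, k+1:] -= np.outer(L[k+1:, k], A[k, k+1:])
        A[k+1:, k] = 0.0
    return L, np.triu(A)

A = np.array([[4.0, 2.0], [2.0, 3.0]])   # a small SPD example
L, U = lu_no_pivot(A)
# Since A is SPD, U = D L^T with D = diag(U):
# here L = [[1, 0], [0.5, 1]] and U = [[4, 2], [0, 2]]
```

For this SPD example one can read off $D = \operatorname{diag}(4, 2)$ and check that $U = DL^\top$, giving the $A = LDL^\top$ Cholesky factorization.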

Let's try the same procedure for a sparse matrix. Consider a sparse matrix $A$ with the following sparsity pattern:

(5)   $A = \begin{bmatrix} * & * & & * \\ * & * & & \\ & & * & * \\ * & & * & * \end{bmatrix}$

When we eliminate the $(1,1)$ entry, we get the following factorization:

(6)   $\begin{bmatrix} * & * & & * \\ * & * & & \\ & & * & * \\ * & & * & * \end{bmatrix} = \begin{bmatrix} 1 & & & \\ * & 1 & & \\ & & 1 & \\ * & & & 1 \end{bmatrix} \begin{bmatrix} * & * & & * \\ & * & & + \\ & & * & * \\ & + & * & * \end{bmatrix}$

Note that the Schur complement has new additional nonzero entries (marked with a $+$) not in the original sparse matrix $A$. The Schur complement of $A$ is denser than $A$ was; these new entries are called fill-in. The worst-case scenario for fill-in is the arrowhead matrix:

(7)   $A = \begin{bmatrix} * & * & * & * & * \\ * & * & & & \\ * & & * & & \\ * & & & * & \\ * & & & & * \end{bmatrix}$

After one step of Gaussian elimination, we went from a matrix with $O(n)$ nonzeros to a fully dense Schur complement! However, the arrowhead matrix also demonstrates a promising strategy. Simply construct a permutation matrix $P$ which reorders the first entry to be the last5For instance, the circular shift permutation sending index $1$ to index $n$ and index $i$ to $i-1$ for $i \ge 2$. and then perform Gaussian elimination on the symmetrically permuted matrix $PAP^\top$ instead. In fact, the entire factorization can be computed without fill-in:

(8)   $PAP^\top = \begin{bmatrix} * & & & & * \\ & * & & & * \\ & & * & & * \\ & & & * & * \\ * & * & * & * & * \end{bmatrix} = \begin{bmatrix} 1 & & & & \\ & 1 & & & \\ & & 1 & & \\ & & & 1 & \\ * & * & * & * & 1 \end{bmatrix} \begin{bmatrix} * & & & & * \\ & * & & & * \\ & & * & & * \\ & & & * & * \\ & & & & * \end{bmatrix}$

This example shows the tremendous importance of reordering the rows and columns when computing a sparse factorization.
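We can check the arrowhead example numerically by running elimination symbolically on the sparsity pattern alone, counting the nonzeros in the resulting factor. This is a toy sketch; the helper name `factor_nnz` is illustrative, not a standard API.

```python
import numpy as np

def factor_nnz(pattern):
    """Count nonzeros in the triangular factor obtained by eliminating
    a symmetric boolean sparsity pattern in order (True = nonzero)."""
    S = pattern.copy()
    n = S.shape[0]
    total = 0
    for k in range(n):
        nbrs = np.where(S[k, k+1:])[0] + k + 1  # uneliminated neighbors of k
        total += len(nbrs) + 1                  # nonzeros in column k of the factor
        S[np.ix_(nbrs, nbrs)] = True            # eliminating k connects its neighbors
    return total

n = 6
arrow = np.eye(n, dtype=bool)
arrow[0, :] = True                  # arrowhead: dense first row ...
arrow[:, 0] = True                  # ... and dense first column

perm = np.r_[1:n, 0]                # reorder so the dense node comes last
reordered = arrow[np.ix_(perm, perm)]

print(factor_nnz(arrow))      # 21 = n(n+1)/2: the factor is fully dense
print(factor_nnz(reordered))  # 11 = 2n - 1: no fill-in at all
```

Eliminating the dense node first fills in the whole Schur complement, while deferring it to the end reproduces the fill-free factorization (8).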

The Best Reordering

As mentioned above, when computing an $LU$ factorization of a dense matrix, one generally has to reorder the rows (and/or columns) of the matrix to compute the solution accurately. Thus, when computing the factorization of a sparse matrix, one has to balance the need to reorder for accuracy against the need to reorder to reduce fill-in. For these reasons, for the remainder of this post, we shall focus on computing Cholesky factorizations of SPD sparse matrices, where reordering for accuracy is not necessary.6For ill-conditioned and positive semi-definite matrices, one may want to reorder a Cholesky factorization so the result is rank-revealing. This review article has a good discussion of pivoted Cholesky factorization. For most applications, one can successfully compute an accurate Cholesky factorization without any specific accuracy-focused reordering strategy. Since we want the matrix to remain SPD, we must restrict ourselves to symmetric reordering strategies where $A$ is reordered to $PAP^\top$, where $P$ is a permutation matrix.

Our question is deceptively simple: what reordering produces the least fill-in? In matrix language, what permutation $P$ minimizes $\operatorname{nnz}(L)$, where $PAP^\top = LDL^\top$ is the Cholesky factorization of the reordered matrix?

Note that, assuming no entries in the Gaussian elimination process exactly cancel, the sparsity pattern of the Cholesky factorization depends only on the sparsity pattern of $A$ (the locations of the zeros and nonzeros) and not on the actual numeric values of $A$'s entries. This sparsity structure is naturally represented by a graph $G$ whose nodes are the indices $1, 2, \ldots, n$, with an edge between $i$ and $j$ if, and only if, $a_{ij} \ne 0$.
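Building this graph from a sparsity pattern is straightforward. Here is a small illustrative helper (the name `sparsity_graph` is hypothetical, not a library function):

```python
import numpy as np

def sparsity_graph(A):
    """Graph of a symmetric sparse matrix: node i ~ node j iff A[i, j] != 0."""
    n = A.shape[0]
    return {i: {j for j in range(n) if j != i and A[i, j] != 0}
            for i in range(n)}

# A tridiagonal matrix corresponds to a path graph
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])
print(sparsity_graph(A))  # {0: {1}, 1: {0, 2}, 2: {1}}
```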

Now let's see what happens when we do Gaussian elimination from a graph point-of-view. When we eliminate the $(1,1)$ entry from the matrix, this results in all nodes of the graph adjacent to node $1$ becoming connected to each other.7Graph theoretically, we add a clique containing the nodes adjacent to node $1$.

This shows why the arrowhead example is so bad. By eliminating a vertex connected to every other node in the graph, the graph after elimination becomes a complete graph.

Reordering the matrix corresponds to choosing the order in which the vertices of the graph are eliminated. Choosing the elimination order is then a puzzle game: eliminate all the vertices of the graph in the order that produces the fewest fill-in edges (shown in red).8This "graph game" formulation of sparse Gaussian elimination is based on how I learned it from John Gilbert. His slides are an excellent resource for all things sparse matrices!

Finding the best elimination ordering for a sparse matrix (graph) is a good news/bad news situation. For the good news, many graphs possess a perfect elimination ordering, in which no fill-in is produced at all. There is a simple algorithm to determine whether a graph (sparse matrix) possesses a perfect elimination ordering and, if so, what it is.9The algorithm is basically just a breadth-first search. Some important classes of graphs can be eliminated perfectly (for instance, trees). More generally, the class of all graphs which can be eliminated perfectly is precisely the set of chordal graphs, which are well-studied in graph theory.
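The elimination game can be played directly on adjacency sets. In this toy sketch (the helper `fill_edges` is illustrative), a path graph, which is a tree, is eliminated perfectly when we work inward from the endpoints, but eliminating the middle vertex first creates fill:

```python
def fill_edges(adj, order):
    """Play the elimination game: eliminate vertices in `order` and count
    the fill edges created (adj maps each vertex to a set of neighbors)."""
    adj = {v: set(nb) for v, nb in adj.items()}  # work on a copy
    fill = 0
    for v in order:
        nbrs = adj.pop(v)
        for u in adj:
            adj[u].discard(v)
        nbrs = [u for u in nbrs if u in adj]     # neighbors not yet eliminated
        for i, u in enumerate(nbrs):             # connect them pairwise
            for w in nbrs[i+1:]:
                if w not in adj[u]:
                    adj[u].add(w)
                    adj[w].add(u)
                    fill += 1
    return fill

# The path graph 0-1-2-3-4 is a tree, so a perfect elimination ordering exists
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(fill_edges(path, [0, 1, 2, 3, 4]))  # 0: endpoints inward, no fill
print(fill_edges(path, [2, 1, 3, 0, 4]))  # 3: middle-first keeps creating fill
```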

Now for the bad news. The problem of finding the best elimination ordering (with the least fill-in) for a graph is NP-hard. This means, assuming the widely conjectured result that $P \ne NP$, that finding the best elimination ordering is a harder computational problem than simply performing Gaussian elimination in any ordering! One should not be too pessimistic about this result, however, since (assuming $P \ne NP$) all it says is that there exists no polynomial time algorithm guaranteed to produce the absolutely best possible elimination ordering when presented with any graph (sparse matrix). If one is willing to give up on any one of these requirements (polynomial time, a guarantee, the absolute best ordering, or handling any graph), further progress may be possible. For instance, there exist several good heuristics which find reasonably good elimination orderings for graphs (sparse matrices) in linear (or nearly linear) time.
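One such heuristic is the classical minimum-degree idea: greedily eliminate a vertex of smallest current degree. A toy sketch follows (real implementations, such as approximate minimum degree, are far more refined than this):

```python
def min_degree_order(adj):
    """Toy minimum-degree heuristic: repeatedly eliminate a vertex of
    smallest current degree in the (updated) elimination graph."""
    adj = {v: set(nb) for v, nb in adj.items()}  # work on a copy
    order = []
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))  # lowest-degree vertex
        nbrs = adj.pop(v)
        for u in nbrs:
            adj[u].discard(v)
        nbrs = list(nbrs)
        for i, u in enumerate(nbrs):             # connect v's neighbors
            for w in nbrs[i+1:]:
                adj[u].add(w)
                adj[w].add(u)
        order.append(v)
    return order

# On a star graph (the arrowhead pattern), the degree-3 center is deferred
# until its degree has dropped, avoiding the dense Schur complement
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
print(min_degree_order(star))  # [1, 2, 0, 3]
```

On the arrowhead pattern, the heuristic never eliminates a high-degree vertex while cheaper choices remain, exactly the fix we found by hand in (8).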

Can Sparse Matrices be Eliminated in Linear Time?

Let us think about the best reordering question in a different way. So far, we have asked the question "Can we find the best ordering for a sparse matrix?" But another question is equally important: "How efficiently can we solve a sparse linear system, even with the best possible ordering?"

One might optimistically hope that every sparse matrix possesses an elimination ordering such that its Cholesky factorization can be computed in linear time (in the number of nonzeros), meaning that the amount of time needed to solve $Ax = b$ is proportional to the amount of data needed to store the sparse matrix $A$.

When one tests a proposition like this, one should consider the extreme cases. If the matrix $A$ is dense, then it requires $O(n^3)$ operations to do Gaussian elimination,10This is neglecting the possibility of acceleration by Strassen-type fast matrix multiplication algorithms. For simplicity, we shall ignore these fast multiplication techniques for the remainder of this post and assume dense linear systems can be solved no faster than $O(n^3)$ operations. but $A$ only has $n^2$ nonzero entries. Thus, our proposition cannot hold in unmodified form.

An even more concerning counterexample is given by a matrix whose graph is a 2D grid graph.

Sparse matrices with this sparsity pattern (or related ones) appear all the time in discretized partial differential equations in two dimensions. Moreover, they are truly sparse, having only $O(n)$ nonzero entries. Unfortunately, no linear time elimination ordering exists. We have the following theorem:

Theorem: For a sparse matrix $A$ whose graph $G$ is a $\sqrt{n}\times\sqrt{n}$ 2D grid graph, in any elimination ordering, the Cholesky factorization $PAP^\top = LDL^\top$ requires $\Omega(n^{3/2})$ operations and satisfies $\operatorname{nnz}(L) = \Omega(n\log n)$.11Big-Omega notation is a cousin of Big-O notation. One should read $f = \Omega(g)$ as "$f$ is no less than a constant multiple of $g$, asymptotically".

The proof is contained in Theorems 10 and 11 (and the ensuing paragraph) of the classic paper by Lipton, Rose, and Tarjan. Natural generalizations to $d$-dimensional grid graphs give bounds of $\Omega(n^{3(d-1)/d})$ time and $\operatorname{nnz}(L) = \Omega(n^{2(d-1)/d})$ for $d \ge 3$. In particular, for 2D finite difference and finite element discretizations, sparse Cholesky factorization takes $\Omega(n^{3/2})$ operations and produces a Cholesky factor with $\operatorname{nnz}(L) = \Omega(n\log n)$ in the best possible ordering. In 3D, sparse Cholesky factorization takes $\Omega(n^2)$ operations and produces a Cholesky factor with $\operatorname{nnz}(L) = \Omega(n^{4/3})$ in the best possible ordering.

Fortunately, at least these complexity bounds are attainable: there is an ordering which produces a sparse Cholesky factorization with