Markov Musings 1: The Fundamental Theorem

For this summer, I’ve decided to open up another little mini-series on this blog called Markov Musings about the mathematical analysis of Markov chains, jumping off from my previous post on the subject. My main goal in writing this is to learn the material for myself, and I hope that what I produce is useful to others. My main resources are:

  1. The book Markov Chains and Mixing Times by Levin, Peres, and Wilmer;
  2. Lecture notes and videos by theoretical computer scientists Sinclair, Oveis Gharan, O’Donnell, and Schulman; and
  3. These notes by Rob Webber, for a complementary perspective from a scientific computing point of view.

Be warned, these posts will be more mathematical in nature than most of the material on my blog.


In my previous post on Markov chains, we discussed the fundamental theorem of Markov chains. Here is a slightly stronger version:

Theorem (fundamental theorem of Markov chains). A primitive Markov chain on a finite state space has a stationary distribution \pi > 0. When initialized from any starting distribution \rho^{(0)}, the distributions \rho^{(0)},\rho^{(1)},\rho^{(2)},\ldots of the chain at times 0,1,2,\ldots converge at an exponential rate to \pi.

My goal in this post will be to provide a proof of this fact using the method of couplings, adapted from the notes of Sinclair and Oveis Gharan. I like this proof because it feels very probabilistic (as opposed to more linear algebraic proofs of the fundamental theorem).

Here, and throughout, we say a matrix or vector is > 0 if all of its entries are strictly positive. Recall that a Markov chain with transition matrix P is primitive if there exists n for which P^n > 0. For this post, all Markov chains will have state space \{1,\ldots,m\}.
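
As an aside, primitivity is easy to check numerically for a small chain. Here is a minimal sketch in Python (the function name and example matrix are my own); it relies on Wielandt's theorem, which guarantees that an m \times m primitive matrix already satisfies P^n > 0 at n = m^2 - 2m + 2, so only finitely many powers need to be checked:

    import numpy as np

    def is_primitive(P):
        """Check whether the stochastic matrix P is primitive, i.e.,
        whether P**n > 0 entrywise for some n. By Wielandt's theorem,
        it suffices to check powers up to m**2 - 2*m + 2."""
        m = P.shape[0]
        Q = np.eye(m)
        for _ in range(m**2 - 2*m + 2):
            Q = Q @ P
            if np.all(Q > 0):
                return True
        return False

    # Example: a lazy random walk on a 4-cycle (P has zero entries,
    # but P**2 > 0 entrywise, so the chain is primitive)
    m = 4
    P = np.zeros((m, m))
    for i in range(m):
        P[i, i], P[i, (i - 1) % m], P[i, (i + 1) % m] = 0.5, 0.25, 0.25
    print(is_primitive(P))  # True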

Total Variation Distance

In order to quantify the rate of Markov chain convergence, we need a way of quantifying the closeness of two probability distributions. This motivates the following definition:

Definition (total variation distance). The total variation distance between probability distributions \rho and \sigma on \{1,\ldots,m\} is the maximum difference between the probability of an event S under \rho and under \sigma:

    \[\norm{\rho - \sigma}_{\rm TV} = \max_{S \subseteq \{1,\ldots,m\}} |\rho(S) - \sigma(S)| = \frac{1}{2} \sum_{i=1}^m \left| \rho_i - \sigma_i \right|.\]

The total variation distance is always between 0 and 1. It is zero only when \rho and \sigma are the same distribution. It is one only when \rho and \sigma have disjoint supports—that is, there is no i \in \{1,\ldots,m\} for which \rho_i, \sigma_i > 0.
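
In code, the rightmost sum formula makes total variation distance a one-liner. A minimal sketch (the function name and example vectors are mine):

    import numpy as np

    def tv_distance(rho, sigma):
        """Total variation distance: half the entrywise l1 distance
        between two probability vectors."""
        return 0.5 * np.abs(rho - sigma).sum()

    rho = np.array([0.5, 0.3, 0.2])
    sigma = np.array([0.2, 0.3, 0.5])
    print(tv_distance(rho, sigma))              # 0.3
    print(tv_distance(rho, rho))                # 0.0: same distribution
    print(tv_distance(np.array([1.0, 0.0]),
                      np.array([0.0, 1.0])))    # 1.0: disjoint supports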

The total variation distance is a very strict way of comparing two probability distributions. Sinclair’s notes provide a vivid example. Suppose that \rho denotes the uniform distribution on all possible ways of shuffling a deck of N cards, and \sigma denotes the uniform distribution on all ways of shuffling N cards with the ace of spades at the top. Then the total variation distance between \rho and \sigma is 1 - 1/N. (Consider the event S that the ace of spades is at the top of the deck: \sigma(S) = 1 while \rho(S) = 1/N.) Thus, despite these distributions seeming quite similar to us, \rho and \sigma are almost as far apart as possible in total variation distance. There are a number of alternative ways of measuring the closeness of probability distributions, some of which are less severe.

Couplings

Given a probability distribution \rho, it can be helpful to work with random variables drawn from \rho. Say a random variable x is drawn from the distribution \rho, written x \sim \rho, if

    \[\prob \{x = i\} = \rho_i \quad \text{for $i=1,2,\ldots,m$}.\]

To understand the total variation distance more, we shall need the following definition:

Definition (coupling). Given probability distributions \rho,\sigma on \{1,\ldots,m\}, a coupling \gamma is a distribution on \{1,\ldots,m\}^2 such that if a pair of random variables (x,y)\sim\gamma is drawn from \gamma, then x \sim \rho and y \sim \sigma. Denote the set of all couplings of \rho and \sigma as \operatorname{Couplings}(\rho,\sigma).

More succinctly, a coupling of \rho and \sigma is a joint distribution with marginals \rho and \sigma.

Couplings are related to total variation distance by the following lemma. (A proof is provided in Lemma 4.2 of Oveis Gharan’s notes. The coupling lemma holds in the full generality of probability measures on general spaces, and can be viewed as a special case of the Monge–Kantorovich duality principle of optimal transport; see Theorem 4.13 and Example 4.14 in van Handel’s notes for details.)

Lemma (coupling lemma). Let \rho and \sigma be distributions on \{1,\ldots,m\}. Then

    \[\norm{\rho - \sigma}_{\rm TV} = \min_{\gamma \in \operatorname{Couplings}(\rho,\sigma)} \prob_{(x,y) \sim \gamma} \{ x \ne y \}.\]

Here, \prob_{(x,y) \sim \gamma} represents the probability for variables x,y drawn from joint distribution \gamma.

To see a simple example, suppose \rho = \sigma. Then the coupling lemma tells us that there is a coupling \gamma of \rho and itself such that \prob \{ x \ne y \} = 0. Indeed, such a coupling can be obtained by drawing x \sim \rho and setting y \coloneqq x. This defines a joint distribution \gamma under which x = y with 100% probability.

To unpack the coupling lemma a little more, it really contains two statements:

  • For any coupling \gamma between \rho and \sigma and (x,y) \sim \gamma,

        \[\norm{\rho - \sigma}_{\rm TV} \le \prob \{x \ne y \}.\]

  • There exists a coupling \gamma between \rho and \sigma such that when (x,y) \sim \gamma, then

        \[\norm{\rho - \sigma}_{\rm TV} = \prob \{x \ne y\}.\]

We will need both of these statements in our proof of the fundamental theorem.
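
To make the second statement concrete, here is a sketch of one standard construction of an optimal ("maximal") coupling, represented as an m \times m matrix of joint probabilities: put the shared mass \min(\rho_i, \sigma_i) on the diagonal, and spread the leftover mass independently off the diagonal. (The function name and example are mine.)

    import numpy as np

    def maximal_coupling(rho, sigma):
        """Return a coupling gamma of rho and sigma, as a matrix of
        joint probabilities gamma[i, j] = P{x = i, y = j}, achieving
        P{x != y} = TV(rho, sigma)."""
        overlap = np.minimum(rho, sigma)
        tv = 1.0 - overlap.sum()          # total variation distance
        gamma = np.diag(overlap)
        if tv > 0:
            # couple the leftover mass of rho and sigma independently
            gamma += np.outer(rho - overlap, sigma - overlap) / tv
        return gamma

    rho = np.array([0.5, 0.3, 0.2])
    sigma = np.array([0.2, 0.3, 0.5])
    gamma = maximal_coupling(rho, sigma)
    print(gamma.sum(axis=1))    # marginal of x: recovers rho
    print(gamma.sum(axis=0))    # marginal of y: recovers sigma
    print(1 - np.trace(gamma))  # P{x != y} = 0.3 = TV(rho, sigma)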

Proof of the Fundamental Theorem

With these ingredients in place, we are now ready to prove the fundamental theorem of Markov chains. First, we will assume there exists a stationary distribution \pi > 0. We will provide a proof of this fact at the end of this post.

Suppose we initialize the chain in distribution \rho^{(0)}, and let \rho^{(0)},\rho^{(1)},\rho^{(2)},\ldots denote the distributions of the chain at times 0,1,2,\ldots. Our goal will be to establish that \norm{\rho^{(n)} - \pi}_{\rm TV} \to 0 as n\to\infty at an exponential rate.

Distance to Stationarity is Non-Increasing

First, let us establish the more modest claim that \norm{\rho^{(n)} - \pi}_{\rm TV} is non-increasing:

(1)   \[\norm{\rho^{(n+1)} - \pi}_{\rm TV} \le \norm{\rho^{(n)} - \pi}_{\rm TV} \quad \text{for every } n = 0,1,2,\ldots. \]

We shall do this by means of the coupling lemma.

Consider two versions of the chain x_0,x_1,x_2,\ldots and y_0,y_1,y_2,\ldots, one initialized in x_0 \sim \rho^{(0)} and the other initialized with y_0 \sim \pi. We now apply the coupling lemma to the states x_n and y_n of the chains at time n. By the coupling lemma, there exists a coupling of x_n and y_n such that

    \[\norm{\rho^{(n)} - \pi}_{\rm TV} = \prob \{x_n\ne y_n\}.\]

Now construct a coupling of x_{n+1} and y_{n+1} according to the following rules:

  • If x_n = y_n, then draw x_{n+1} according to the transition matrix and set y_{n+1} \coloneqq x_{n+1}.
  • If x_n \ne y_n, then run the two chains independently to generate x_{n+1} and y_{n+1}.

By the way we’ve designed the coupling,

    \[\prob\{x_{n+1}\ne y_{n+1}\} \le \prob\{x_n \ne y_n\}.\]

Thus, by the coupling lemma,

    \[\norm{\rho^{(n+1)} - \pi}_{\rm TV} \le \prob\{x_{n+1}\ne y_{n+1}\} \le \prob\{x_n \ne y_n\} = \norm{\rho^{(n)} - \pi}_{\rm TV}.\]

We have established that the distance to stationarity is non-increasing.

This proof already contains the essence of the argument as to why Markov chains mix. We run two versions of the Markov chain, one initialized in an arbitrary distribution \rho^{(0)} and the other initialized in the stationary distribution \pi. While the states of the two chains are different, we run the chains independently. When the chains meet, we continue moving the chains together in synchrony. After running for long enough, the two chains are likely to meet, implying the chain has mixed.
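
Here is a Monte Carlo sketch of exactly this argument (the 3-state example chain and all names are mine): the two copies move independently until they meet and in lockstep afterwards, and the fraction of runs in which they have not yet met at time n estimates \prob\{x_n \ne y_n\}, an upper bound on \norm{\rho^{(n)} - \pi}_{\rm TV}.

    import numpy as np

    rng = np.random.default_rng(0)
    P = np.array([[0.6, 0.3, 0.1],
                  [0.2, 0.5, 0.3],
                  [0.1, 0.4, 0.5]])
    m = P.shape[0]

    # Stationary distribution: eigenvector of P^T with eigenvalue 1
    evals, evecs = np.linalg.eig(P.T)
    pi = np.real(evecs[:, np.argmin(np.abs(evals - 1))])
    pi /= pi.sum()

    n_steps, n_trials = 20, 10_000
    not_met = np.zeros(n_steps)
    for _ in range(n_trials):
        x, y = 0, rng.choice(m, p=pi)          # x_0 ~ delta_0, y_0 ~ pi
        for n in range(n_steps):
            if x == y:
                x = y = rng.choice(m, p=P[x])  # move in synchrony
            else:
                x = rng.choice(m, p=P[x])      # move independently
                y = rng.choice(m, p=P[y])
            not_met[n] += (x != y)
    # Estimates of P{x_n != y_n}; they decay toward zero
    # (roughly monotonically, up to sampling noise)
    print(not_met / n_trials)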

The All-to-All Case

As another stepping stone to the complete proof, let us prove the fundamental theorem in the special case where there is a strictly positive probability of moving between any two states, i.e., assuming P>0.

Consider the two chains x_0,x_1,x_2,\ldots and y_0,y_1,y_2,\ldots coupled as in the previous section. We compute the probability \prob \{x_{n+1} \ne y_{n+1}\} more carefully. Since the chains move in synchrony once they meet, the event x_{n+1} \ne y_{n+1} can only occur when x_n \ne y_n, so we may write

(2)   \begin{align*}\prob \{x_{n+1} \ne y_{n+1}\} &= \prob \{x_{n+1} \ne y_{n+1} \mid x_n \ne y_n\}\prob \{x_n \ne y_n\} \\&= (1-\prob \{x_{n+1} = y_{n+1} \mid x_n \ne y_n\})\prob \{x_n \ne y_n\}. \end{align*}


To compute \prob \{x_{n+1} = y_{n+1} \mid x_n \ne y_n\}, break into cases over all possible values i,j,k for y_{n+1},x_n,y_n to take:

    \begin{gather*}\prob \{x_{n+1} = y_{n+1} \mid x_n \ne y_n\} \\= \sum_{\substack{i,j,k\in \{1,\ldots,m\}\\ j\ne k}} \prob \{x_{n+1} =i \mid y_{n+1}=i,x_n=j,y_n=k\} \prob \{y_{n+1}=i,x_n=j,y_n=k \mid x_n \ne y_n\}.\end{gather*}

We are now in a position to lower bound this probability. Let p_{\rm min} denote the minimum probability of moving between any two states:

    \[p_{\rm min} \coloneqq \min_{1\le i,j\le m} P_{ij}.\]

The probability of moving from j to i is at least p_{\rm min}. We conclude the lower bound

    \begin{align*}\prob \{x_{n+1} = y_{n+1} \mid x_n \ne y_n\} &\ge \sum_{\substack{i,j,k\in \{1,\ldots,m\}\\ j\ne k}} p_{\rm min} \prob \{y_{n+1}=i,x_n=j,y_n=k \mid x_n \ne y_n\} = p_{\rm min}.\end{align*}

Substituting back in (2), we obtain

    \[\prob \{x_{n+1} \ne y_{n+1}\} \le (1 - p_{\rm min})\prob \{x_n \ne y_n\}.\]

By the coupling lemma, we conclude

    \[\norm{\rho^{(n+1)}-\pi}_{\rm TV} \le \prob \{x_{n+1} \ne y_{n+1}\} \le (1 - p_{\rm min})\prob \{x_n \ne y_n\} = (1 - p_{\rm min}) \norm{\rho^{(n)} - \pi}_{\rm TV}.\]

By iteration, we conclude

    \[\norm{\rho^{(n)} - \pi}_{\rm TV} \le (1 - p_{\rm min})^n \norm{\rho^{(0)} - \pi}_{\rm TV} \le (1 - p_{\rm min})^n.\]

The chain converges to stationarity at an exponential rate, as claimed.
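
This bound is easy to verify numerically by tracking \rho^{(n)} with vector–matrix products. A sketch, using an all-to-all 3-state chain of my own choosing:

    import numpy as np

    P = np.array([[0.6, 0.3, 0.1],
                  [0.2, 0.5, 0.3],
                  [0.1, 0.4, 0.5]])  # all entries positive
    p_min = P.min()                  # = 0.1

    # Stationary distribution: eigenvector of P^T with eigenvalue 1
    evals, evecs = np.linalg.eig(P.T)
    pi = np.real(evecs[:, np.argmin(np.abs(evals - 1))])
    pi /= pi.sum()

    rho = np.array([1.0, 0.0, 0.0])  # start from a point mass
    for n in range(10):
        tv = 0.5 * np.abs(rho - pi).sum()
        print(n, tv, (1 - p_min)**n)  # observed TV vs. the bound
        rho = rho @ P                 # rho^(n+1) = rho^(n) P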

The General Case

We’ve now proved the fundamental theorem in the special case when P > 0. Fortunately, together with our earlier observation that distance to stationarity is non-increasing, we can upgrade this proof into a proof for the general case.

We have assumed the Markov chain x_0,x_1,x_2,\ldots is primitive, so there exists a time n_0 for which P^{n_0} > 0. Construct an auxiliary Markov chain z_0,z_1,z_2,\ldots such that one step of the auxiliary chain consists of running n_0 steps of the original chain:

    \[z_0 = x_0, \:z_1 = x_{n_0}, \:z_2 = x_{2n_0},\ldots.\]

By the all-to-all case, we know that z_0,z_1,z_2,\ldots converges to stationarity at an exponential rate. Thus, since the distribution of z_k = x_{k\cdot n_0} is \rho^{(k\cdot n_0)}, we have

    \[\norm{\rho^{(k\cdot n_0)} - \pi}_{\rm TV} \le (1-\delta)^k \norm{\rho^{(0)} - \pi}_{\rm TV} \le (1-\delta)^k \quad \text{for }k=0,1,2,\ldots,\]

where \delta \coloneqq \min_{1\le i,j\le m} (P^{n_0})_{ij} > 0. Thus, since distance to stationarity is non-increasing, we have

    \[\norm{\rho^{(n)} - \pi}_{\rm TV} \le \norm{\rho^{(n_0 \cdot \lfloor n/n_0\rfloor)} - \pi}_{\rm TV} \le (1-\delta)^{\lfloor n/n_0\rfloor} \norm{\rho^{(0)} - \pi}_{\rm TV} \le (1-\delta)^{\lfloor n/n_0\rfloor}.\]

Thus, for any starting distribution \rho^{(0)}, the distribution of the chain \rho^{(n)} at time n converges to stationarity at an exponential rate as n\to\infty, proving the fundamental theorem.
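
For a concrete chain that is primitive but not all-to-all, both n_0 and \delta are computable. A sketch using a lazy random walk on a 5-cycle (the example is mine):

    import numpy as np

    # Lazy random walk on a 5-cycle: P has zero entries, but P^2 > 0
    m = 5
    P = np.zeros((m, m))
    for i in range(m):
        P[i, i], P[i, (i - 1) % m], P[i, (i + 1) % m] = 0.5, 0.25, 0.25

    # Find the smallest n_0 with P^{n_0} > 0 and the resulting delta
    Q, n0 = P.copy(), 1
    while not np.all(Q > 0):
        Q, n0 = Q @ P, n0 + 1
    delta = Q.min()
    print(n0, delta)  # here n0 = 2
    # Conclusion: TV(rho^(n), pi) <= (1 - delta)**(n // n0), any start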

Mixing Time

We’ve proven a quantitative version of the fundamental theorem of Markov chains, showing that the total variation distance to stationarity decreases exponentially as a function of time. For algorithmic applications of Markov chains, we care about the precise rate of convergence, as it dictates how long we need to run the chain. To this end, we define the mixing time:

Definition (mixing time). The mixing time \tau_{\rm mix} of a Markov chain is the number of steps required for the distance to stationarity to be at most 1/2e when started from a worst-case distribution:

    \[\tau_{\rm mix} \coloneqq \min \left\{ n \ge 1 : \max_{\rho^{(0)}} \norm{\rho^{(n)} - \pi}_{\rm TV} \le \frac{1}{2e} \right\}.\]

The mixing time controls the rate of convergence for a Markov chain:

Theorem (mixing time as a convergence rate). For any starting distribution,

    \[\norm{\rho^{(n)} - \pi}_{\rm TV} \le e^{-\lfloor n / \tau_{\rm mix}\rfloor}.\]

In particular, for \rho^{(n)} to be within \varepsilon total variation distance of \pi, we only need to run the chain for \tau_{\rm mix} \cdot \lceil \log(1/\varepsilon) \rceil steps:

Corollary (time to mix to \varepsilon-stationarity). If n\ge \tau_{\rm mix} \cdot \lceil \log(1/\varepsilon)\rceil, then \norm{\rho^{(n)} - \pi}_{\rm TV} \le \varepsilon.

These results can be proven using very similar techniques to the proof of the fundamental theorem from above. See Sinclair’s notes for more details.
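
For a small chain, the mixing time can be computed directly from the definition. Since \norm{\rho^{(n)} - \pi}_{\rm TV} is a convex function of the starting distribution \rho^{(0)}, the maximum in the definition is attained at a point mass, so it suffices to examine the rows of P^n. A sketch, reusing the lazy cycle walk (names mine):

    import numpy as np

    # Lazy random walk on a 5-cycle; by symmetry, pi is uniform
    m = 5
    P = np.zeros((m, m))
    for i in range(m):
        P[i, i], P[i, (i - 1) % m], P[i, (i + 1) % m] = 0.5, 0.25, 0.25
    pi = np.full(m, 1 / m)

    def worst_case_tv(Q):
        """Max TV distance to pi over the rows of Q = P^n, i.e., over
        all point-mass starting distributions."""
        return max(0.5 * np.abs(Q[i] - pi).sum() for i in range(m))

    Q, n = P.copy(), 1
    while worst_case_tv(Q) > 1 / (2 * np.e):
        Q, n = Q @ P, n + 1
    print("tau_mix =", n)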

Bonus: Existence of a Stationary Measure

To complete our probabilistic proof of the Markov chain convergence theorem, we must establish the existence of a stationary measure. We do this now.

Fix any state i \in \{1,\ldots,m\}. Imagine starting the chain at i and running it until it returns to i. Let a_j be the expected number of times the chain visits j during this process, where we count every time from time 1 up to and including the time of the return to i. Under this convention, a_i = 1, since the only counted visit to i is the return itself. Because the chain is primitive, all of the a_j’s are well-defined, positive, and finite. Our claim will be that

    \[\pi_j = \frac{a_j}{\sum_{k=1}^m a_k} \quad \text{for each } j = 1,\ldots,m\]


is a stationary distribution for the chain. To prove this, it is sufficient to show that

(3)   \[a^\top P = a^\top. \]



Let us prove this. Let x_0 = i,x_1,x_2,\ldots denote the values of the chain and n_{\rm ret} denote the time at which the chain first returns to i. By linearity of expectation, the expected number of visits a_j is the sum over all times n of the probability that the chain is at j at time n and has not returned to i before time n. That is,

    \[a_j = \sum_{n=1}^\infty \prob\{x_n = j, n_{\rm ret} \ge n\}.\]

(Note that this formula is consistent with our convention a_i = 1: the event \{x_n = i, n_{\rm ret} \ge n\} occurs exactly when n_{\rm ret} = n, so for j = i the sum is \sum_{n=1}^\infty \prob\{n_{\rm ret} = n\} = 1.)

Break this sum into two pieces

    \[a_j = \prob\{x_1 = j\} + \sum_{n=2}^\infty \prob\{x_n = j, n_{\rm ret} \ge n\}.\]

The first term is just the transition probability P_{ij}. For the second term, break into cases depending on the value k of the chain at time n-1; note that on the event n_{\rm ret} \ge n we must have x_{n-1} \ne i, so only k \ne i contribute:

    \begin{align*}\prob\{x_n = j, n_{\rm ret} \ge n\} &= \sum_{k\ne i} \prob\{x_{n-1} = k, x_n = j, n_{\rm ret} \ge n-1 \} \\&= \sum_{k\ne i} \prob\{x_{n-1} = k, n_{\rm ret} \ge n-1 \} \prob\{x_n = j \mid x_{n-1} = k\} \\&= \sum_{k\ne i} \prob\{x_{n-1} = k, n_{\rm ret} \ge n-1 \} P_{kj}.\end{align*}

Combining these two terms, we get

    \[a_j = P_{ij} + \sum_{n=2}^\infty \sum_{k\ne i} \prob\{x_{n-1} = k, n_{\rm ret} \ge n-1 \} P_{kj}.\]

Relabel the outer sum to go from n=1 to \infty and exchange the order of summation to obtain

    \[a_j = P_{ij} + \sum_{k\ne i} \left(\sum_{n=1}^\infty \prob\{x_n = k, n_{\rm ret} \ge n \}\right) P_{kj}.\]

Recognize the term in the parentheses as a_k. Thus, since a_i = 1, we have

    \[a_j = a_iP_{ij} + \sum_{k\ne i} a_k P_{kj} = \sum_{k=1}^m a_k P_{kj},\]

which is exactly the claim (3) we wanted to show.
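
As a sanity check, the construction can be tested by simulation: estimate each a_j by running excursions from i and counting visits up to and including the return, then normalize and compare against the stationary distribution computed by linear algebra (a sketch; the example chain and names are mine):

    import numpy as np

    rng = np.random.default_rng(0)
    P = np.array([[0.6, 0.3, 0.1],
                  [0.2, 0.5, 0.3],
                  [0.1, 0.4, 0.5]])
    m, i = P.shape[0], 0

    # Estimate a_j: expected visits to j during an excursion from i,
    # counting times 1, ..., n_ret (so a_i = 1 automatically)
    n_excursions = 50_000
    visits = np.zeros(m)
    for _ in range(n_excursions):
        x = i
        while True:
            x = rng.choice(m, p=P[x])
            visits[x] += 1
            if x == i:
                break
    a = visits / n_excursions
    print(a / a.sum())  # Monte Carlo estimate of pi

    # Exact pi via the eigenvector of P^T with eigenvalue 1
    evals, evecs = np.linalg.eig(P.T)
    pi = np.real(evecs[:, np.argmin(np.abs(evals - 1))])
    print(pi / pi.sum())  # for comparison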
