For this summer, I’ve decided to open up another little mini-series on this blog called Markov Musings about the mathematical analysis of Markov chains, jumping off from my previous post on the subject. My main goal in writing this is to learn the material for myself, and I hope that what I produce is useful to others. My main resources are:
- The book Markov Chains and Mixing Times by Levin, Peres, and Wilmer;
- Lecture notes and videos by theoretical computer scientists Sinclair, Oveis Gharan, O’Donnell, and Schulman; and
- These notes by Rob Webber, for a complementary perspective from a scientific computing point of view.
Be warned, these posts will be more mathematical in nature than most of the material on my blog.
In my previous post on Markov chains, we discussed the fundamental theorem of Markov chains. Here is a slightly stronger version:
Theorem (fundamental theorem of Markov chains). A primitive Markov chain on a finite state space has a stationary distribution $\pi$. When initialized from any starting distribution $\rho^{(0)}$, the distributions $\rho^{(n)}$ of the chain at times $n = 1,2,\ldots$ converge at an exponential rate to $\pi$.
My goal in this post will be to provide a proof of this fact using the method of couplings, adapted from the notes of Sinclair and Oveis Gharan. I like this proof because it feels very probabilistic (as opposed to more linear algebraic proofs of the fundamental theorem).
Here, and throughout, we say a matrix or vector is positive if all of its entries are strictly positive. Recall that a Markov chain with transition matrix $P$ is primitive if there exists $n_0 \ge 1$ for which $P^{n_0}$ is positive. For this post, all Markov chains will have state space $\{1,\ldots,m\}$.
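To make this definition concrete, here is a small NumPy sketch (the function name and example matrices are my own, not from any of the references) that tests primitivity by searching for a power $P^{n_0}$ with all entries positive. By Wielandt's theorem, $n_0 \le m^2 - 2m + 2$ always suffices for an $m$-state primitive chain, so the search is finite:

```python
import numpy as np

def is_primitive(P, max_power=None):
    """Check whether a stochastic matrix P satisfies P^n0 > 0 for some n0.

    Wielandt's bound n0 <= m^2 - 2m + 2 makes the search finite for an
    m-state chain.
    """
    m = P.shape[0]
    if max_power is None:
        max_power = m**2 - 2*m + 2
    Q = np.eye(m)
    for _ in range(max_power):
        Q = Q @ P
        if np.all(Q > 0):
            return True
    return False

# A two-state chain that alternates deterministically is NOT primitive
# (its powers oscillate between the flip matrix and the identity)...
flip = np.array([[0.0, 1.0], [1.0, 0.0]])
# ...while a chain that mixes the two states in one step is primitive.
lazy = np.array([[0.5, 0.5], [0.5, 0.5]])
```

The periodic `flip` chain never satisfies $P^{n_0} > 0$, which is exactly the failure mode that the primitivity assumption rules out.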
Total Variation Distance
In order to quantify the rate of Markov chain convergence, we need a way of quantifying the closeness of two probability distributions. This motivates the following definition:
Definition (total variation distance). The total variation distance between probability distributions $\rho$ and $\sigma$ on $\{1,\ldots,m\}$ is the maximum difference between the probability of an event $S$ under $\rho$ and under $\sigma$:

\[\norm{\rho - \sigma}_{\rm TV} = \max_{S \subseteq \{1,\ldots,m\}} |\rho(S) - \sigma(S)| = \frac{1}{2} \sum_{i=1}^m \left| \rho_i - \sigma_i \right|.\]
The total variation distance is always between $0$ and $1$. It is zero only when $\rho$ and $\sigma$ are the same distribution. It is one only when $\rho$ and $\sigma$ have disjoint supports—that is, there is no $i$ for which both $\rho_i > 0$ and $\sigma_i > 0$.
The total variation distance is a very strict way of comparing two probability distributions. Sinclair’s notes provide a vivid example. Suppose that $\rho$ denotes the uniform distribution on all possible ways of shuffling a deck of $52$ cards, and $\sigma$ denotes the uniform distribution on all ways of shuffling $52$ cards with the ace of spades at the top. Then the total variation distance between $\rho$ and $\sigma$ is $1 - 1/52 \approx 0.98$. Thus, despite these distributions seeming quite similar to us, they are nearly as far apart as possible in total variation distance. There are a number of alternative ways of measuring the closeness of probability distributions, some of which are less severe.
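Either formula makes total variation easy to compute. As a sketch of the shuffling example (my own code, with a 4-card deck standing in for 52 cards so the state space stays tiny):

```python
import numpy as np
from itertools import permutations

def tv_distance(rho, sigma):
    """Total variation distance: half the l1 distance between the vectors."""
    return 0.5 * np.abs(np.asarray(rho) - np.asarray(sigma)).sum()

# rho: uniform over all orderings of a 4-card deck...
decks = list(permutations(range(4)))
rho = np.ones(len(decks)) / len(decks)
# ...sigma: uniform over orderings with card 0 ("the ace") on top.
sigma = np.array([1.0 if d[0] == 0 else 0.0 for d in decks])
sigma /= sigma.sum()

tv = tv_distance(rho, sigma)  # equals 1 - 1/4 = 0.75
```

With $4$ cards the distance is $1 - 1/4 = 0.75$; the same computation with $52$ cards gives $1 - 1/52 \approx 0.98$.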
Couplings
Given a probability distribution $\rho$, it can be helpful to work with random variables drawn from $\rho$. Say a random variable $x$ is drawn from the distribution $\rho$, written $x \sim \rho$, if

\[\prob \{x = i\} = \rho_i \quad \text{for $i=1,2,\ldots,m$}.\]
To understand the total variation distance more, we shall need the following definition:
Definition (coupling). Given probability distributions $\rho, \sigma$ on $\{1,\ldots,m\}$, a coupling $\gamma$ is a distribution on pairs $\{1,\ldots,m\} \times \{1,\ldots,m\}$ such that if a pair of random variables $(x,y)$ is drawn from $\gamma$, then $x \sim \rho$ and $y \sim \sigma$. Denote the set of all couplings of $\rho$ and $\sigma$ as $\operatorname{Couplings}(\rho,\sigma)$.
More succinctly, a coupling of $\rho$ and $\sigma$ is a joint distribution with marginals $\rho$ and $\sigma$.
Couplings are related to total variation distance by the following lemma. (A proof is provided in Lemma 4.2 of Oveis Gharan’s notes. The coupling lemma holds in the full generality of probability measures on general spaces, and can be viewed as a special case of the Monge–Kantorovich duality principle of optimal transport; see Theorem 4.13 and Example 4.14 in van Handel’s notes for details.)
Lemma (coupling lemma). Let $\rho$ and $\sigma$ be distributions on $\{1,\ldots,m\}$. Then

\[\norm{\rho - \sigma}_{\rm TV} = \min_{\gamma \in \operatorname{Couplings}(\rho,\sigma)} \prob_{(x,y) \sim \gamma} \{ x \ne y \}.\]

Here, $\prob_{(x,y) \sim \gamma}$ represents the probability for variables $(x,y)$ drawn from the joint distribution $\gamma$.
To see a simple example, suppose $\rho = \sigma$. Then the coupling lemma tells us that there is a coupling $\gamma$ of $\rho$ and itself such that $\prob\{x \ne y\} = 0$. Indeed, such a coupling can be obtained by drawing $x \sim \rho$ and setting $y = x$. This defines a joint distribution $\gamma$ under which $x = y$ with 100% probability.
To unpack the coupling lemma a little more, it really contains two statements:
- For any coupling $(x,y) \sim \gamma$ between $\rho$ and $\sigma$,

  \[\norm{\rho - \sigma}_{\rm TV} \le \prob \{x \ne y \}.\]

- There exists a coupling $\gamma$ between $\rho$ and $\sigma$ such that when $(x,y) \sim \gamma$, then

  \[\norm{\rho - \sigma}_{\rm TV} = \prob \{x \ne y\}.\]
We will need both of these statements in our proof of the fundamental theorem.
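The equality-achieving coupling in the second statement can be constructed explicitly: let the pair agree on the overlap $\min(\rho_i, \sigma_i)$ and otherwise draw the two variables independently from the normalized residuals. A NumPy sketch (the sampler and the example distributions are my own):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_optimal_coupling(rho, sigma, rng):
    """Draw (x, y) with x ~ rho, y ~ sigma and P{x != y} = TV(rho, sigma)."""
    rho, sigma = np.asarray(rho, float), np.asarray(sigma, float)
    overlap = np.minimum(rho, sigma)
    agree_prob = overlap.sum()          # = 1 - TV(rho, sigma)
    if rng.random() < agree_prob:
        i = rng.choice(len(rho), p=overlap / agree_prob)
        return i, i                     # the pair agrees
    # Otherwise draw independently from the (disjoint) excess parts.
    x = rng.choice(len(rho), p=(rho - overlap) / (1 - agree_prob))
    y = rng.choice(len(sigma), p=(sigma - overlap) / (1 - agree_prob))
    return x, y

rho = [0.5, 0.5, 0.0]
sigma = [0.25, 0.25, 0.5]
# Here TV(rho, sigma) = 0.5, so the pair should disagree about half the time.
draws = [sample_optimal_coupling(rho, sigma, rng) for _ in range(20000)]
disagree = sum(x != y for x, y in draws) / len(draws)
```

Since the residuals $\rho - \min(\rho,\sigma)$ and $\sigma - \min(\rho,\sigma)$ have disjoint supports, the pair disagrees exactly when the second branch fires, which happens with probability $\norm{\rho - \sigma}_{\rm TV}$.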
Proof of the Fundamental Theorem
With these ingredients in place, we are now ready to prove the fundamental theorem of Markov chains. First, we will assume there exists a stationary distribution $\pi$; we will provide a proof of this fact at the end of this post.
Suppose we initialize the chain in distribution $\rho^{(0)}$, and let $\rho^{(1)}, \rho^{(2)}, \ldots$ denote the distributions of the chain at times $1, 2, \ldots$. Our goal will be to establish that $\rho^{(n)} \to \pi$ as $n \to \infty$ at an exponential rate.
Distance to Stationarity is Non-Increasing
First, let us establish the more modest claim that the distance $\norm{\rho^{(n)} - \pi}_{\rm TV}$ is non-increasing:

(1)   \[\norm{\rho^{(n+1)} - \pi}_{\rm TV} \le \norm{\rho^{(n)} - \pi}_{\rm TV} \quad \text{for every } n =0,1,2,\ldots. \]
Consider two versions of the chain $x_0, x_1, x_2, \ldots$ and $y_0, y_1, y_2, \ldots$, one initialized in $x_0 \sim \rho^{(0)}$ and the other initialized with $y_0 \sim \pi$. We now apply the coupling lemma to the states $x_n$ and $y_n$ of the chains at time $n$. By the coupling lemma, there exists a coupling of $x_n$ and $y_n$ such that

\[\norm{\rho^{(n)} - \pi}_{\rm TV} = \prob \{x_n\ne y_n\}.\]
Having coupled $x_n$ and $y_n$, we generate $x_{n+1}$ and $y_{n+1}$ according to the following rules:

- If $x_n = y_n$, then draw $x_{n+1}$ according to the transition matrix and set $y_{n+1} = x_{n+1}$.
- If $x_n \ne y_n$, then run the two chains independently to generate $x_{n+1}$ and $y_{n+1}$.
By the way we’ve designed the coupling,

\[\prob\{x_{n+1}\ne y_{n+1}\} \le \prob\{x_n \ne y_n\}.\]

Since $x_{n+1} \sim \rho^{(n+1)}$ and $y_{n+1} \sim \pi$, the first part of the coupling lemma gives

\[\norm{\rho^{(n+1)} - \pi}_{\rm TV} \le \prob\{x_{n+1}\ne y_{n+1}\} \le \prob\{x_n \ne y_n\} = \norm{\rho^{(n)} - \pi}_{\rm TV}.\]
We have established that the distance to stationarity is non-increasing.
This proof already contains the essence of the argument as to why Markov chains mix. We run two versions of the Markov chain, one initialized in an arbitrary distribution $\rho^{(0)}$ and the other initialized in the stationary distribution $\pi$. While the states of the two chains are different, we run the chains independently. When the chains meet, we continue moving the chains together in synchrony. After running for long enough, the two chains are likely to meet, implying the chain has mixed.
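This meet-then-synchronize argument can be simulated directly. A sketch (the 3-state transition matrix below is my own invention, not from the post):

```python
import numpy as np

rng = np.random.default_rng(1)

# A small primitive chain (an assumed example).
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])

def coupled_step(x, y, P, rng):
    """One step of the coupling: move together if equal, else independently."""
    if x == y:
        x = y = rng.choice(3, p=P[x])
    else:
        x = rng.choice(3, p=P[x])
        y = rng.choice(3, p=P[y])
    return x, y

def meeting_time(x0, y0, P, rng, max_steps=10_000):
    """Number of steps until the two coupled chains first agree."""
    x, y = x0, y0
    for n in range(1, max_steps + 1):
        x, y = coupled_step(x, y, P, rng)
        if x == y:
            return n
    return max_steps

times = [meeting_time(0, 2, P, rng) for _ in range(2000)]
```

By the coupling lemma, the fraction of runs in which the chains have not yet met by time $n$ upper-bounds $\norm{\rho^{(n)} - \pi}_{\rm TV}$.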
The All-to-All Case
As another stepping stone to the complete proof, let us prove the fundamental theorem in the special case where there is a strictly positive probability of moving between any two states, i.e., assuming $P > 0$.
Consider the two chains $x_n$ and $y_n$ coupled as in the previous section. We compute the probability $\prob\{x_{n+1} \ne y_{n+1}\}$ more carefully. Since the chains move in synchrony once they meet, write it as

(2)   \[\prob\{x_{n+1} \ne y_{n+1}\} = \prob\{x_{n+1} \ne y_{n+1} \mid x_n \ne y_n\} \, \prob\{x_n \ne y_n\}.\]

To compute the conditional probability, condition on the values $x_n = i_1 \ne i_2 = y_n$ and break into cases for all possible values $j$ for $x_{n+1}$ and $y_{n+1}$ to take. Since the chains move independently while their states disagree,

\[\prob\{x_{n+1} = y_{n+1} \mid x_n = i_1, y_n = i_2\} = \sum_{j=1}^m P_{i_1j} P_{i_2j}.\]

Let $p_{\rm min}$ be the minimum probability of moving between any two states

\[p_{\rm min} \coloneqq \min_{1\le i,j\le m} P_{ij},\]

which is strictly positive by our assumption $P > 0$. The probability of moving from $i_1$ to $j$ is at least $p_{\rm min}$. We conclude the lower bound

\[\prob\{x_{n+1} = y_{n+1} \mid x_n = i_1, y_n = i_2\} \ge p_{\rm min} \sum_{j=1}^m P_{i_2j} = p_{\rm min}.\]

Substituting back in (2), we obtain

\[\prob \{x_{n+1} \ne y_{n+1}\} \le (1 - p_{\rm min})\prob \{x_n \ne y_n\}.\]

Combining with the coupling lemma as in the previous section,

\[\norm{\rho^{(n+1)}-\pi}_{\rm TV} \le \prob \{x_{n+1} \ne y_{n+1}\} \le (1 - p_{\rm min})\prob \{x_n \ne y_n\} = (1 - p_{\rm min}) \norm{\rho^{(n)} - \pi}_{\rm TV}.\]

Iterating this bound over $n$ steps yields

\[\norm{\rho^{(n)} - \pi}_{\rm TV} \le (1 - p_{\rm min})^n \norm{\rho^{(0)} - \pi}_{\rm TV} \le (1 - p_{\rm min})^n.\]

Since $p_{\rm min} > 0$, the distance to stationarity converges to zero at an exponential rate, proving the fundamental theorem in this special case.
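This bound can be checked numerically by computing $\rho^{(n)} = \rho^{(0)} P^n$ directly. A sketch, again with an assumed small positive transition matrix of my own choosing:

```python
import numpy as np

# An assumed all-to-all chain: every entry strictly positive.
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])
p_min = P.min()

# Stationary distribution: left eigenvector of P for eigenvalue 1.
w, V = np.linalg.eig(P.T)
pi = np.real(V[:, np.argmax(np.real(w))])
pi /= pi.sum()

rho = np.array([1.0, 0.0, 0.0])   # point-mass starting distribution
for n in range(1, 21):
    rho = rho @ P
    tv = 0.5 * np.abs(rho - pi).sum()
    # The all-to-all bound: TV distance decays at least as (1 - p_min)^n.
    assert tv <= (1 - p_min)**n + 1e-12
```

In practice the decay is usually much faster than $(1 - p_{\rm min})^n$; the bound is a worst-case guarantee, not a sharp rate.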
The General Case
We’ve now proved the fundamental theorem in the special case when $P > 0$. Fortunately, together with our earlier observation that distance to stationarity is non-increasing, we can upgrade this proof into a proof for the general case.
We have assumed the Markov chain $x_0, x_1, x_2, \ldots$ is primitive, so there exists a time $n_0$ for which $P^{n_0} > 0$. Construct an auxiliary Markov chain $z_0, z_1, z_2, \ldots$ such that one step of the auxiliary chain consists of running $n_0$ steps of the original chain:

\[z_0 = x_0, \:z_1 = x_{n_0}, \:z_2 = x_{2n_0},\ldots.\]

The auxiliary chain has transition matrix $P^{n_0} > 0$, so by the argument of the previous section, it converges to stationarity at an exponential rate. Thus, since the distribution of $z_k$ is $\rho^{(k \cdot n_0)}$, we have

\[\norm{\rho^{(k\cdot n_0)} - \pi}_{\rm TV} \le (1-\delta)^k \norm{\rho^{(0)} - \pi}_{\rm TV} \le (1-\delta)^k \quad \text{for }k=0,1,2,\ldots,\]

where $\delta > 0$ denotes the minimum entry of $P^{n_0}$. Thus, since distance to stationarity is non-increasing, we have

\[\norm{\rho^{(n)} - \pi}_{\rm TV} \le \norm{\rho^{(n_0 \cdot \lfloor n/n_0\rfloor)} - \pi}_{\rm TV} \le (1-\delta)^{\lfloor n/n_0\rfloor} \norm{\rho^{(0)} - \pi}_{\rm TV} \le (1-\delta)^{\lfloor n/n_0\rfloor}.\]

Since $(1-\delta)^{\lfloor n/n_0\rfloor} \to 0$ exponentially, the distribution $\rho^{(n)}$ of the chain at time $n$ converges to stationarity at an exponential rate as $n \to \infty$, proving the fundamental theorem.
Mixing Time
We’ve proven a quantitative version of the fundamental theorem of Markov chains, showing that the total variation distance to stationarity decreases exponentially as a function of time. For algorithmic applications of Markov chains, we also care about the precise rate of convergence, as it dictates how long we need to run the chain. To this end, we define the mixing time:
Definition (mixing time). The mixing time $\tau_{\rm mix}$ of a Markov chain is the number of steps required for the distance to stationarity to be at most $1/(2e)$ when started from a worst-case distribution:

\[\tau_{\rm mix} \coloneqq \min \left\{ n \ge 1 : \max_{\rho^{(0)}} \norm{\rho^{(n)} - \pi}_{\rm TV} \le \frac{1}{2e} \right\}.\]
The mixing time controls the rate of convergence for a Markov chain:
Theorem (mixing time as a convergence rate). For any starting distribution,

\[\norm{\rho^{(n)} - \pi}_{\rm TV} \le e^{-\lfloor n / \tau_{\rm mix}\rfloor}.\]
In particular, for $\rho^{(n)}$ to be within $\varepsilon$ total variation distance of $\pi$, we only need to run the chain for $\tau_{\rm mix} \lceil \log(1/\varepsilon) \rceil$ steps:

Corollary (time to mix to $\varepsilon$-stationarity). If $n \ge \tau_{\rm mix} \lceil \log(1/\varepsilon) \rceil$, then $\norm{\rho^{(n)} - \pi}_{\rm TV} \le \varepsilon$.
These results can be proven using techniques very similar to those in the proof of the fundamental theorem above. See Sinclair’s notes for more details.
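Since total variation distance is convex in $\rho^{(0)}$, the worst-case starting distribution in the definition is always a point mass, so $\tau_{\rm mix}$ can be computed by brute force over the rows of $P^n$. A sketch with an assumed small chain (mine, not from the notes):

```python
import numpy as np

def mixing_time(P, pi, tol=1/(2*np.e), max_steps=10_000):
    """Smallest n with max-over-starts TV(rho^(n), pi) <= tol.

    The max over all rho^(0) is attained at a point mass, i.e. at a row
    of P^n, because total variation is convex in the starting distribution.
    """
    Q = P.copy()
    for n in range(1, max_steps + 1):
        worst = 0.5 * np.abs(Q - pi).sum(axis=1).max()
        if worst <= tol:
            return n
        Q = Q @ P
    raise RuntimeError("chain did not mix within max_steps")

# An assumed 3-state chain with its stationary distribution from eig.
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])
w, V = np.linalg.eig(P.T)
pi = np.real(V[:, np.argmax(np.real(w))])
pi /= pi.sum()
tau = mixing_time(P, pi)
```

For chains too large to write down $P$ explicitly, the mixing time must instead be bounded analytically, which is the subject of the tools in the references above.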
Existence of a Stationary Distribution

To complete the proof of the fundamental theorem, we must show that a primitive Markov chain has a stationary distribution. Fix a state $i$, initialize the chain at $x_0 = i$, and for each state $j$ let $a_j$ be the expected number of times the chain hits $j$ up to and including the time it first returns to $i$. Because the chain is primitive, all of the $a_j$ are finite and positive, and we claim that the stationary distribution is obtained by normalizing:

\[\pi_i = \frac{a_i}{\sum_{j=1}^m a_j}.\]

To prove this, it suffices to verify that the vector $a$ satisfies

\[a^\top P = a^\top. \]

Let $x_0 = i, x_1, x_2, \ldots$ denote the values of the chain and $n_{\rm ret} \coloneqq \min\{n \ge 1 : x_n = i\}$ denote the time at which the chain returns to $i$. For $j \ne i$,

\[a_j = \sum_{n=1}^\infty \prob\{x_n = j, n_{\rm ret} > n\}.\]

Pull out the first term of the sum:

\[a_j = \prob\{x_1 = j\} + \sum_{n=2}^\infty \prob\{x_n = j, n_{\rm ret} > n\}.\]

The first term equals $P_{ij}$. For the second term, break into cases depending on the value of the chain at time $n-1$:

\[a_j = P_{ij} + \sum_{n=2}^\infty \sum_{k\ne i} \prob\{x_{n-1} = k, n_{\rm ret} > n-1 \} P_{kj}.\]

Reindex the inner sum from $n$ to $n-1$ and exchange the order of summation to obtain

\[a_j = P_{ij} + \sum_{k\ne i} \left(\sum_{n=1}^\infty \prob\{x_n = k, n_{\rm ret} > n \}\right) P_{kj}.\]

The parenthesized sum is exactly $a_k$. Thus, since $a_i = 1$ (the chain hits $i$ exactly once during an excursion, at the return time), we have

\[a_j = a_iP_{ij} + \sum_{k\ne i} a_k P_{kj} = \sum_{k=1}^m a_k P_{kj},\]

which is the stationarity condition $a^\top P = a^\top$, as desired.