Big Ideas in Applied Math: The Schur Complement

Given the diversity of applications of mathematics, the field of applied mathematics lacks a universally accepted set of core concepts which most experts would agree all self-proclaimed applied mathematicians should know. Further, much mathematical writing is very carefully written, and many important ideas can be obscured by precisely worded theorems or buried several steps into a long proof.

In this series of blog posts, I hope to share my personal experience with some techniques in applied mathematics which I’ve seen pop up many times. My goal is to isolate a single particularly interesting idea and provide a simple explanation of how it works and why it can be useful. In doing this, I hope to collect my own thoughts on these topics and write an introduction to these ideas of the sort I wish I had when I was first learning this material.

Given my fondness for linear algebra, I felt an appropriate first topic for this series would be the Schur complement. Given matrices A, B, C, and D of sizes n\times n, n\times m, m\times n, and m\times m with A invertible, the Schur complement is defined to be the matrix D - CA^{-1}B.

The Schur complement naturally arises in block Gaussian elimination. In vanilla Gaussian elimination, one begins by using the (1,1) entry of a matrix to “zero out” the entries below it in the first column. Block Gaussian elimination extends this idea by using the n\times n submatrix occupying the top-left portion of a matrix to “zero out” the entries below it in all of the first n columns together. Formally, given the matrix \begin{bmatrix} A & B \\ C & D \end{bmatrix}, one can check by carrying out the multiplication that the following factorization holds:

(1)   \begin{equation*} \begin{bmatrix} A & B \\ C & D \end{bmatrix} = \begin{bmatrix} I_n & 0_{n\times m} \\ CA^{-1} & I_m\end{bmatrix} \begin{bmatrix} A & B \\ 0_{m\times n} & D - CA^{-1}B \end{bmatrix}. \end{equation*}

Here, we let I_j denote an identity matrix of size j\times j and 0_{j\times k} the j\times k zero matrix. We use the notation of block (or partitioned) matrices where, in this case, an (m+n)\times (m+n) matrix is written out as a 2\times 2 “block” matrix whose entries are themselves matrices of the appropriate sizes, such that all matrices occurring in one block row (or column) have the same number of rows (or columns). Two block matrices which are blocked in a compatible way can be multiplied just like two regular matrices, taking care to respect the noncommutativity of matrix multiplication.
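
To make Eq. (1) concrete, here is a minimal NumPy sketch that builds random blocks of the stated sizes and checks the factorization numerically; the sizes n = 4 and m = 3, the random seed, and the diagonal shift used to keep A safely invertible are all arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 3
A = rng.standard_normal((n, n)) + n * np.eye(n)  # shifted so A is safely invertible
B = rng.standard_normal((n, m))
C = rng.standard_normal((m, n))
D = rng.standard_normal((m, m))

S = D - C @ np.linalg.solve(A, B)                # Schur complement D - C A^{-1} B

# Left- and right-hand sides of Eq. (1)
M = np.block([[A, B], [C, D]])
L = np.block([[np.eye(n), np.zeros((n, m))],
              [C @ np.linalg.inv(A), np.eye(m)]])
U = np.block([[A, B],
              [np.zeros((m, n)), S]])

print(np.allclose(M, L @ U))                     # True
```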

The Schur complement also appears naturally in the expression for the inverse of \begin{bmatrix} A & B \\ C & D\end{bmatrix}. One can verify that for a block upper triangular matrix M = \begin{bmatrix} M_{11} & M_{12} \\ 0_{m\times n} & M_{22}\end{bmatrix}, we have the inverse formula

(2)   \begin{equation*} M^{-1} = \begin{bmatrix} M_{11} & M_{12} \\ 0_{m\times n} & M_{22}\end{bmatrix}^{-1} = \begin{bmatrix} M_{11}^{-1} & -M_{11}^{-1} M_{12}M_{22}^{-1} \\ 0_{m\times n} & M_{22}^{-1}\end{bmatrix}. \end{equation*}

(This can be verified by carrying out the block multiplication MM^{-1} for the proposed formula for M^{-1} and verifying that one obtains the identity matrix.) A similar formula holds for block lower triangular matrices. From here, we can deduce a formula for the inverse of \begin{bmatrix} A & B \\ C & D\end{bmatrix}. Let S = D - CA^{-1}B be the Schur complement. Then

(3)   \begin{equation*} \begin{split} \begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1} &= \begin{bmatrix} A & B \\ 0_{m\times n} & S \end{bmatrix}^{-1} \begin{bmatrix} I_n & 0_{n\times m} \\ CA^{-1} & I_m\end{bmatrix}^{-1} \\ &= \begin{bmatrix} A^{-1} & -A^{-1}BS^{-1} \\ 0_{m\times n} & S^{-1} \end{bmatrix} \begin{bmatrix} I_n & 0_{n\times m} \\ -CA^{-1} & I_m\end{bmatrix} \\ &= \begin{bmatrix} A^{-1} + A^{-1}BS^{-1}CA^{-1} & -A^{-1}BS^{-1} \\ -S^{-1}CA^{-1} & S^{-1} \end{bmatrix}. \end{split} \end{equation*}

This remarkable formula gives the inverse of \begin{bmatrix} A & B \\ C & D\end{bmatrix} in terms of A^{-1}, S^{-1}, B, and C. In particular, the (2,2) block of \begin{bmatrix} A & B \\ C & D\end{bmatrix}^{-1} is simply the inverse of the Schur complement.
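
As a quick self-contained sanity check of Eq. (3), one can confirm numerically that the (2,2) block of the inverse is S^{-1} and that the (1,1) block matches the formula above; again, the dimensions and random data are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 3
A = rng.standard_normal((n, n)) + n * np.eye(n)
B = rng.standard_normal((n, m))
C = rng.standard_normal((m, n))
D = rng.standard_normal((m, m))

M = np.block([[A, B], [C, D]])
S = D - C @ np.linalg.solve(A, B)
Ainv, Sinv, Minv = np.linalg.inv(A), np.linalg.inv(S), np.linalg.inv(M)

print(np.allclose(Minv[n:, n:], Sinv))                               # (2,2) block is S^{-1}
print(np.allclose(Minv[:n, :n], Ainv + Ainv @ B @ Sinv @ C @ Ainv))  # (1,1) block of Eq. (3)
```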

Here, we have seen that if one starts with a large matrix and performs block Gaussian elimination, one ends up with a smaller matrix called the Schur complement whose inverse appears in the inverse of the original matrix. Very often, however, it benefits us to run this trick in reverse: we begin with a small matrix, which we recognize to be the Schur complement of a larger matrix. In general, dealing with a larger matrix is more difficult than dealing with a smaller one, but very often this larger matrix will have special properties which allow us to more efficiently compute the inverse of (or solve linear systems with) the original matrix.

One beautiful application of this idea is the Sherman-Morrison-Woodbury matrix identity. Suppose we want to find the inverse of the matrix A - BD^{-1}C. Notice that this is the Schur complement of the matrix \begin{bmatrix} D & C \\ B & A \end{bmatrix}, which is the same as \begin{bmatrix} A & B \\ C & D \end{bmatrix} after reordering its rows and columns. (Specifically, switch the first n rows with the last m rows and do the same with the columns. This defines a permutation matrix P = \begin{bmatrix} 0_{m\times n} & I_m \\ I_n & 0_{n\times m} \end{bmatrix} such that \begin{bmatrix} D & C \\ B & A \end{bmatrix} = P\begin{bmatrix} A & B \\ C & D \end{bmatrix}P^\top.) Alternately, and perhaps more cleanly, one may define two Schur complements of the block matrix M = \begin{bmatrix} A & B \\ C & D \end{bmatrix}: one by “eliminating A”, M/A = D - CA^{-1}B, and the other by “eliminating D”, M/D = A - BD^{-1}C. Following the calculation in Eq. (3), just as the inverse of the Schur complement D - CA^{-1}B appears in the (2,2) block of \begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1}, the inverse of the alternate Schur complement A - BD^{-1}C can be shown to appear in the (1,1) block of \begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1}. Thus, comparing with Eq. (3), we deduce the Sherman-Morrison-Woodbury matrix identity:

(4)   \begin{equation*} (A - BD^{-1}C)^{-1} = A^{-1} + A^{-1}B(D-CA^{-1}B)^{-1}CA^{-1}. \end{equation*}
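
As a quick numerical check of Eq. (4), the following sketch compares both sides of the identity on random blocks; the sizes, seed, and diagonal shifts that keep A and D invertible are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 5, 2
A = rng.standard_normal((n, n)) + n * np.eye(n)
B = rng.standard_normal((n, m))
C = rng.standard_normal((m, n))
D = rng.standard_normal((m, m)) + m * np.eye(m)

Ainv = np.linalg.inv(A)
lhs = np.linalg.inv(A - B @ np.linalg.solve(D, C))                # (A - B D^{-1} C)^{-1}
rhs = Ainv + Ainv @ B @ np.linalg.inv(D - C @ Ainv @ B) @ C @ Ainv

print(np.allclose(lhs, rhs))  # True
```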

To see how this formula can be useful in practice, suppose that we have a fast way of solving the system of linear equations Ax = b. Perhaps A is a simple matrix like a diagonal matrix, or we have already pre-computed an LU factorization for A. Consider the problem of solving the rank-one updated problem (A+uv^\top)x = b. Using the Sherman-Morrison-Woodbury identity with B = u, C = v^\top, and D = -1, we have that

(5)   \begin{equation*} x = (A+uv^\top)^{-1}b = A^{-1}b + A^{-1}u (-1-v^\top A^{-1}u)^{-1}v^\top A^{-1}b. \end{equation*}

Careful observation of this formula shows that we can compute x (that is, solve (A+uv^\top)x = b) by solving only two linear systems with A: Ax_1 = b for x_1 = A^{-1}b and Ax_2 = u for x_2 = A^{-1}u. (Further savings are possible if one has already computed x_1, which may be the case in many applications.)
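
Here is a minimal sketch of this procedure in NumPy. The dense random A is a stand-in for illustration; in practice A would be a matrix for which solves are cheap (diagonal, already factored, etc.).

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
A = rng.standard_normal((n, n)) + n * np.eye(n)  # stand-in for an "easy to solve with" matrix
u = rng.standard_normal(n)
v = rng.standard_normal(n)
b = rng.standard_normal(n)

x1 = np.linalg.solve(A, b)                       # x1 = A^{-1} b
x2 = np.linalg.solve(A, u)                       # x2 = A^{-1} u
x = x1 + x2 * (v @ x1) / (-1.0 - v @ x2)         # Eq. (5)

print(np.allclose((A + np.outer(u, v)) @ x, b))  # True: x solves (A + u v^T) x = b
```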

Here’s another variant of the same idea. Suppose we want to solve the linear system of equations (D + uv^\top)x = b, where D is a diagonal matrix. Then we can immediately write down the lifted system of linear equations

(6)   \begin{equation*} \underbrace{\begin{bmatrix} -1 & v^\top \\ u & D \end{bmatrix}}_{:=M}\begin{bmatrix} y \\ x \end{bmatrix} = \begin{bmatrix} 0 \\ b \end{bmatrix}. \end{equation*}

One can easily see that D+uv^\top is the Schur complement of the matrix M (with respect to the (1,1) block). This system of linear equations is sparse in the sense that most of its entries are zero, and it can be efficiently solved by sparse Gaussian elimination, for which there exists high-quality software. Easy generalizations of this idea can be used to effectively solve many “sparse + low-rank” problems.
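
As an illustration of the lifted approach, here is a sketch using SciPy’s sparse machinery; the dimensions, the random data, and the choice of a sparse direct solver (scipy.sparse.linalg.spsolve) are all arbitrary choices for the example.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(3)
n = 1000
d = rng.uniform(1.0, 2.0, n)       # diagonal of D
u = rng.standard_normal(n)
v = rng.standard_normal(n)
b = rng.standard_normal(n)

# Assemble the sparse bordered matrix M = [[-1, v^T], [u, D]] from Eq. (6)
M = sp.bmat([[sp.csr_matrix([[-1.0]]), sp.csr_matrix(v[None, :])],
             [sp.csr_matrix(u[:, None]), sp.diags(d)]], format="csc")
rhs = np.concatenate(([0.0], b))

yx = spla.spsolve(M, rhs)          # sparse direct solve of the lifted system
x = yx[1:]                         # drop the auxiliary unknown y = v^T x

print(np.allclose(d * x + u * (v @ x), b))  # True: x solves (D + u v^T) x = b
```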

Another example of the power of the Schur complement is in least-squares problems. Consider the problem of minimizing \|Ax - b\|, where A is an m\times n matrix with full column rank and \|\cdot\| is the Euclidean norm of a vector, \|x\|^2 = x^\top x. It is well known that the solution x satisfies the normal equations A^\top A x = A^\top b. However, if the matrix A is even moderately ill-conditioned, the matrix A^\top A will be much more ill-conditioned (its condition number is the square of that of A), leading to a loss of accuracy. It is for this reason that it is preferable to solve the least-squares problem with a QR factorization. However, if a QR factorization isn’t available, we can use the Schur complement trick instead. Notice that A^\top A is the Schur complement of the matrix \begin{bmatrix} -I_{m} & A \\ A^\top & 0_{n\times n} \end{bmatrix} with respect to its (1,1) block. Thus, we can solve the normal equations by instead solving the much better-conditioned system

(7)   \begin{equation*} \begin{bmatrix} -I_{m} & A \\ A^\top & 0_{n\times n} \end{bmatrix} \begin{bmatrix} r \\ x \end{bmatrix} = \begin{bmatrix} b \\ 0_n \end{bmatrix}. \end{equation*}

(More precisely, one should scale the identity in the (1,1) block of this system to be on the order of the size of the entries of A: if one selects a scale s which lies between the largest and smallest singular values of A, for example s = \max_{i,j} |A_{ij}|, and constructs M = \begin{bmatrix} -sI_m & A \\ A^\top & 0_{n\times n}\end{bmatrix}, then one can show that the two-norm condition number of M is no more than twice that of A.)

Not only is this system better-conditioned, but it’s also highly interpretable. Multiplying out the first block row gives the equation -r + Ax = b, which simplifies to r = Ax - b. The unknown r is nothing but the least-squares residual. The second block row gives A^\top r = 0_n, which encodes the condition that the residual is orthogonal to the range of the matrix A. Thus, by lifting the normal equations to a larger system of equations by means of the Schur complement trick, one derives an interpretable way of solving the least-squares problem by solving a linear system of equations, no QR factorization or ill-conditioned normal equations needed.
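
The following sketch solves a small random least-squares problem via the bordered system of Eq. (7), including the scaling s of the (1,1) block discussed above, and compares the result against NumPy’s built-in least-squares solver. The dimensions and random data are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 50, 5
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

s = np.max(np.abs(A))              # scale for the (1,1) block
M = np.block([[-s * np.eye(m), A],
              [A.T, np.zeros((n, n))]])
rhs = np.concatenate((b, np.zeros(n)))

sol = np.linalg.solve(M, rhs)
x = sol[m:]                        # least-squares solution; sol[:m] is the residual scaled by 1/s

print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))  # True
```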

The Schur complement trick continues to have use in areas of more contemporary interest. For example, the Schur complement trick plays a central role in the theory of sequentially semiseparable matrices, which is a precursor to many recent developments in rank-structured linear solvers. I have used the Schur complement trick myself several times in my work on graph-induced rank structures.

Upshot: The Schur complement appears naturally when one does (block) Gaussian elimination on a matrix. One can also run this process in reverse: if one recognizes a matrix expression (involving a product of matrices, potentially added to another matrix) as being the Schur complement of a larger matrix, one can often get considerable dividends by writing this larger matrix down. Examples include a proof of the Sherman-Morrison-Woodbury matrix identity (see Eqs. (3-4)), techniques for solving a low-rank update of a linear system of equations (see Eqs. (5-6)), and a stable way of solving least-squares problems without the need to use QR factorization (see Eq. (7)).
