This post is part of a new series for this blog, Note to Self, where I collect some notes about an idea related to my research. This content may be more technical than most of the content on this blog and of narrower interest. My hope in sharing this is that someone will find it interesting and useful for their own work.
This post is about a fundamental tool of high-dimensional probability, the Hanson–Wright inequality. The Hanson–Wright inequality is a concentration inequality for quadratic forms of random vectors, that is, expressions of the form $x^\top A x$, where $x$ is a random vector and $A$ is a square matrix. Many statements of this inequality in the literature have an unspecified constant $c$; our goal in this post will be to derive a fairly general version of the inequality with explicit constants.
The core object of the Hanson–Wright inequality is a subgaussian random variable. A random variable $Y$ is subgaussian if the probability that it exceeds a threshold $t$ in magnitude decays as

(1)   \[\mathbb{P}\{|Y|\ge t\} \le \mathrm{e}^{-t^2/a} \quad \text{for some $a>0$ and for all sufficiently large $t$.}\]

The name subgaussian is appropriate, as the tail probabilities of a Gaussian random variable with variance $\sigma^2$ exhibit the same square-exponential decrease $\mathrm{e}^{-t^2/2\sigma^2}$.
A (non-obvious) fact is that if $Y$ is subgaussian in the sense (1) and centered ($\mathbb{E} Y = 0$), then $Y$'s cumulant generating function (cgf)

\[\xi_Y(t) := \log \mathbb{E} \exp(tY)\]

is subquadratic: there is a constant $c > 0$ (independent of $Y$ and $a$) for which

(2)   \[\xi_Y(t) \le ca t^2 \quad \text{for all $t\in\mathbb{R}$}.\]

Moreover, a subquadratic cgf (2) also implies the subgaussian tail property (1), with a different parameter $a$.

Since properties (1) and (2) are equivalent (up to a change in the parameter $a$), we are free to fix a version of property (2) as our definition of a (centered) subgaussian random variable.
Definition (subgaussian random variable): A centered random variable $x$ is said to be $v$-subgaussian, or subgaussian with variance proxy $v$, if its cgf is subquadratic:

(3)   \[\xi_{x}(t) \le \frac{1}{2} vt^2 \quad \text{for all $t\in\mathbb{R}$.}\]

For instance, a mean-zero Gaussian random variable $X$ with variance $\sigma^2$ has cgf

(4)   \[\xi_X(t) = \frac{1}{2} \sigma^2 t^2,\]

and is thus subgaussian with variance proxy $v = \sigma^2$ equal to its variance.
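Here is a minimal NumPy sketch (my own illustration, not part of the original argument) comparing a Monte Carlo estimate of the standard Gaussian cgf against (4), together with the exact cgf $\log\cosh t$ of a Rademacher (random $\pm 1$) variable, which satisfies $\log\cosh t \le t^2/2$ and is therefore $1$-subgaussian:

```python
import numpy as np

rng = np.random.default_rng(0)
ts = np.linspace(-2.0, 2.0, 9)

# Monte Carlo estimate of the cgf xi_X(t) = log E exp(tX) for X ~ N(0,1);
# by (4), the exact value is t^2/2.
X = rng.standard_normal(200_000)
cgf_gauss = np.log([np.mean(np.exp(t * X)) for t in ts])

# Exact cgf of a Rademacher variable: log cosh(t) <= t^2/2,
# so a Rademacher variable is subgaussian with variance proxy v = 1.
cgf_rademacher = np.log(np.cosh(ts))

for t, cg, cr in zip(ts, cgf_gauss, cgf_rademacher):
    print(f"t={t:+.1f}  N(0,1): {cg:.4f}  Rademacher: {cr:.4f}  bound t^2/2: {t * t / 2:.4f}")
```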
Here is a statement of the Hanson–Wright inequality as it typically appears with unspecified constants (see Theorem 6.2.1 of Vershynin’s High-Dimensional Probability):
Theorem (Hanson–Wright): Let $x$ be a random vector with independent centered $v$-subgaussian entries and let $A$ be a square matrix. Then

\[\mathbb{P}\left\{\left|x^\top Ax - \mathbb{E} \left[x^\top A x\right]\right|\ge t \right\} \le 2\exp\left(- \frac{c\cdot t^2}{v^2\left\|A\right\|_{\rm F}^2 + v\left\|A\right\|t} \right),\]

where $c > 0$ is a constant (not depending on $x$, $A$, $v$, or $t$).
This type of concentration is exactly the same as that provided by Bernstein's inequality (which I discussed in my post on concentration inequalities). In particular, for small deviations $t$, the tail probabilities decay at a subgaussian rate with variance proxy on the order of $v^2\left\|A\right\|_{\rm F}^2$:

\[\mathbb{P}\left\{\left|x^\top Ax - \mathbb{E}\left[x^\top Ax\right]\right|\ge t \right\} \stackrel{\text{small $t$}}{\lessapprox} 2\exp\left(- \frac{c\cdot t^2}{v^2\left\|A\right\|_{\rm F}^2} \right).\]

For large deviations $t$, this switches to subexponential tail probabilities with decay rate on the order of $1/(v\|A\|)$:

\[\mathbb{P}\left\{\left|x^\top Ax - \mathbb{E}\left[x^\top Ax\right]\right|\ge t \right\} \stackrel{\text{large $t$}}{\lessapprox} 2\exp\left(- \frac{c\cdot t}{v\|A\|} \right).\]

Mediating these two regimes are the size of the matrix $A$, as measured by its Frobenius and spectral norms, and the degree of subgaussianity of the entries of $x$, measured by the variance proxy $v$.
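To locate the crossover between the two regimes, equate the two exponents: $t^2/(v^2\|A\|_{\rm F}^2) = t/(v\|A\|)$ exactly when $t = v\|A\|_{\rm F}^2/\|A\|$. Below this threshold the subgaussian term dominates; above it, the subexponential term takes over.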
Diagonal-Free Hanson–Wright
Now we come to a first version of the Hanson–Wright inequality with explicit constants, beginning with a matrix that is diagonal-free, that is, has all zeros on its diagonal. I obtained this version of the inequality myself, though I am quite sure that this version or an improvement thereof appears somewhere in the literature.
Theorem (Hanson–Wright, explicit constants, diagonal-free): Let $x$ be a random vector with independent centered $v$-subgaussian entries and let $A$ be a diagonal-free square matrix. Then we have the cgf bound

\[\xi_{x^\top Ax}(t) \le \frac{16v^2\left\|A\right\|_{\rm F}^2\, t^2}{2(1-4v\left\|A\right\|t)} \quad \text{for $0\le t < \frac{1}{4v\|A\|}$}.\]

As a consequence, we have the concentration bound

\[\mathbb{P} \{ x^\top A x \ge t \} \le \exp\left( -\frac{t^2/2}{16v^2 \left\|A\right\|_{\rm F}^2+4v\left\|A\right\|t} \right).\]
Let us begin proving this result. Our proof will follow the same steps as Vershynin’s proof in High-Dimensional Probability (which in turn is adapted from an article by Rudelson and Vershynin), but taking care to get explicit constants. Unfortunately, proving all of the relevant tools from first principles would easily triple the length of this post, so I make frequent use of results from the literature.
We begin with the decoupling bound (Theorem 6.1.1 in Vershynin's High-Dimensional Probability), which allows us to replace one copy of $x$ with an independent copy $\tilde{x}$ at the cost of a factor of four:

(5)   \[\xi_{x^\top Ax}(t) \le \xi_{\tilde{x}^\top Ax}(4t).\]
We seek to compare the bilinear form $\tilde{x}^\top Ax$ to the Gaussian bilinear form $\tilde{g}^\top Ag$, where $g$ and $\tilde{g}$ are independent standard Gaussian vectors. We begin with the following cgf bound for the Gaussian quadratic form $g^\top Ag$:

\[\xi_{g^\top Ag}(t) \le \frac{\left\|A\right\|_{\rm F}^2 \, t^2}{1-2\|A\|\, t}.\]

This equation is the result of Example 2.12 in Boucheron, Lugosi, and Massart's Concentration Inequalities. By applying this result to the Hermitian dilation of $A$ in $A$'s place, one obtains a similar result for the decoupled bilinear form $\tilde{g}^\top Ag$:

(6)   \[\xi_{\tilde{g}^\top Ag}(t) \le \frac{\left\|A\right\|_{\rm F}^2 \, t^2}{2(1-\|A\|\, t)}.\]
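To spell out this step: the Hermitian dilation is $\mathcal{H}(A) = \begin{bmatrix} 0 & A \\ A^\top & 0 \end{bmatrix}$, and with $h = (\tilde{g},g)$ a standard Gaussian vector we have $h^\top \mathcal{H}(A)h = 2\tilde{g}^\top Ag$, $\|\mathcal{H}(A)\|_{\rm F}^2 = 2\|A\|_{\rm F}^2$, and $\|\mathcal{H}(A)\| = \|A\|$. Therefore

\[\xi_{\tilde{g}^\top Ag}(t) = \xi_{h^\top \mathcal{H}(A)h}(t/2) \le \frac{2\|A\|_{\rm F}^2\,(t/2)^2}{1-2\|A\|(t/2)} = \frac{\left\|A\right\|_{\rm F}^2 \, t^2}{2(1-\|A\|\, t)}.\]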
We now seek to compare $\tilde{x}^\top Ax$ to $\tilde{g}^\top Ag$. To do this, we first evaluate the cgf of $\tilde{x}^\top Ax$ only over the randomness in $\tilde{x}$. Since we're only taking an expectation over the random vector $\tilde{x}$, we can apply the subquadratic cgf condition (3) entrywise to obtain

(7)   \[\log \mathbb{E}_{\tilde{x}} \exp(t \, \tilde{x}^\top Ax) = \sum_{i=1}^n \log \mathbb{E}_{\tilde{x}} \exp(t \,\tilde{x}_i (Ax)_i) \le \frac{1}{2} v \left(\sum_{i=1}^n (Ax)_i^2\right)t^2 \le \frac{1}{2} v\left\|Ax\right\|^2 \, t^2.\]

Now we perform a similar computation for the quantity $\tilde{g}^\top Ax$, in which $\tilde{x}$ has been replaced by the Gaussian vector $\tilde{g}$:

\[\log \mathbb{E}_{\tilde{g}} \exp((\sqrt{v} t) \, \tilde{g}^\top Ax) = \frac{1}{2} v \left\|Ax\right\|^2 \, t^2.\]

We stress that this is an equality, since the cgf of a Gaussian random variable is given by (4). Thus we can substitute the left-hand side of the above display into the right-hand side of (7), yielding

(8)   \[\log \mathbb{E}_{\tilde{x}} \exp(t \, \tilde{x}^\top Ax) \le \log \mathbb{E}_{\tilde{g}} \exp((\sqrt{v} t) \, \tilde{g}^\top Ax).\]
We now perform this same trick again using the randomness in $x$:

(9)   \[\log \mathbb{E}_{\tilde{g},x} \exp((\sqrt{v} t) \, \tilde{g}^\top Ax) \le \log \mathbb{E}_{\tilde{g}} \exp \left(\frac{1}{2} v^2 \left\|A^\top \tilde{g}\right\|^2t^2\right) = \log \mathbb{E}_{\tilde{g},g} \exp(v t \, \tilde{g}^\top Ag).\]

Packaging up (8) and (9) gives

(10)   \[\xi_{\tilde{x}^\top Ax}(t)\le \xi_{\tilde{g}^\top Ag}(vt).\]
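In detail, take expectations over $x$ in (8) and chain with (9):

\[\xi_{\tilde{x}^\top Ax}(t) = \log \mathbb{E}_{x}\mathbb{E}_{\tilde{x}} \exp(t \, \tilde{x}^\top Ax) \le \log \mathbb{E}_{\tilde{g},x} \exp((\sqrt{v} t) \, \tilde{g}^\top Ax) \le \log \mathbb{E}_{\tilde{g},g} \exp(v t \, \tilde{g}^\top Ag) = \xi_{\tilde{g}^\top Ag}(vt).\]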
Combining all these results (5), (6), and (10), we obtain

\[\xi_{x^\top Ax}(t) \le \xi_{\tilde{x}^\top Ax}(4t) \le \xi_{\tilde{g}^\top Ag}(4vt) \le \frac{16v^2\left\|A\right\|_{\rm F}^2\, t^2}{2(1-4v\left\|A\right\|t)}.\]

This cgf bound implies the desired probability bound as a consequence of the following fact (see Boucheron, Lugosi, and Massart's Concentration Inequalities, page 29 and Exercise 2.8):

Fact (Bernstein concentration from Bernstein cgf bound): Suppose that a random variable $X$ satisfies the cgf bound

\[\xi_X(t) \le \frac{vt^2}{2(1-ct)}\]

for $0 \le t < 1/c$. Then

\[\mathbb{P} \left\{ X\ge t \right\} \le \exp\left( -\frac{t^2/2}{v+ct} \right).\]
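For intuition, here is the one-line Chernoff computation behind this fact: choosing $s = t/(v+ct)$, which lies in $[0,1/c)$, we have $1-cs = v/(v+ct)$, and hence

\[\mathbb{P}\{X \ge t\} \le \exp(\xi_X(s) - st) \le \exp\left(\frac{vs^2}{2(1-cs)} - st\right) = \exp\left(\frac{t^2}{2(v+ct)} - \frac{t^2}{v+ct}\right) = \exp\left(-\frac{t^2/2}{v+ct}\right).\]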
General Hanson–Wright
Now, here’s a more general result (with worse constants), which permits the matrix $A$ to have a nonzero diagonal.
Theorem (Hanson–Wright, explicit constants): Let $x$ be a random vector with independent centered $v$-subgaussian entries and let $A$ be an arbitrary square matrix. Then we have the cgf bound

\[\xi_{x^\top Ax-\mathbb{E} [x^\top A x]}(t) \le \frac{40v^2\left\|A\right\|_{\rm F}^2\, t^2}{2(1-8v\left\|A\right\|t)} \quad \text{for $0\le t < \frac{1}{8v\|A\|}$}.\]

As a consequence, we have the concentration bound

\[\mathbb{P} \{ x^\top A x-\mathbb{E} [x^\top A x] \ge t \} \le \exp\left( -\frac{t^2/2}{40v^2 \left\|A\right\|_{\rm F}^2+8v\left\|A\right\|t} \right).\]
Decompose the matrix $A = D + F$ into its diagonal portion $D$ and its off-diagonal portion $F$. For any two random variables $X$ and $Y$ (possibly highly dependent), we can bound the cgf of their sum using the following “union bound”:

(11)   \begin{align*} \xi_{X+Y}(t) &= \log \mathbb{E} \left[\exp(tX)\exp(tY)\right] \\&\le \log \left(\left[\mathbb{E} \exp(2tX)\right]^{1/2}\left[\mathbb{E}\exp(2tY)\right]^{1/2}\right) \\&=\frac{1}{2} \xi_X(2t) + \frac{1}{2}\xi_Y(2t). \end{align*}

The two equalities are the definition of the cumulant generating function, and the inequality is Cauchy–Schwarz.
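As a sanity check on (11), take $Y = X$ for a standard Gaussian $X$: the left-hand side is $\xi_{2X}(t) = 2t^2$, while the right-hand side is $\frac{1}{2}\xi_X(2t) + \frac{1}{2}\xi_X(2t) = 2t^2$, so the bound holds with equality for perfectly dependent summands.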
Using the “union bound” (11), it is sufficient to obtain bounds for the cgfs of the diagonal and off-diagonal parts $x^\top Dx - \mathbb{E}[x^\top Ax]$ and $x^\top Fx$. We begin with the diagonal part. We compute

(12)   \begin{align*}\xi_{x^\top D x - \mathbb{E}[x^\top Ax]}(t) &= \log \mathbb{E} \exp\left(t \sum_{i=1}^n A_{ii}(x_i^2 - \mathbb{E}[x_i^2]) \right) \\ &= \sum_{i=1}^n \log \mathbb{E} \exp\left((t A_{ii})\cdot(x_i^2 - \mathbb{E}[x_i^2]) \right). \end{align*}

For the cgf of $x_i^2 - \mathbb{E}[x_i^2]$, we use the following bound, taken from Appendix B of the following paper:

\[\log \mathbb{E} \exp\left(t(x_i^2 - \mathbb{E}[x_i^2]) \right) \le \frac{8v^2t^2}{1-2v|t|}.\]

Substituting this result into (12) gives

(13)   \[\xi_{x^\top D x - \mathbb{E}[x^\top Ax]}(t) \le \sum_{i=1}^n \frac{8v^2|A_{ii}|^2t^2}{1-2v|A_{ii}|t} \le \frac{8v^2\|A\|_{\rm F}^2t^2}{1-2v\|A\|t}\quad \text{for $t>0$}.\]

For the second inequality, we used the facts that $\sum_{i=1}^n |A_{ii}|^2 \le \|A\|_{\rm F}^2$ and $\max_{1 \le i \le n} |A_{ii}| \le \|A\|$.
We now look at the off-diagonal part $x^\top Fx$. We use a version of the decoupling bound (5) in which we compare $x^\top Fx$ to $\tilde{x}^\top Ax$, where we’ve both replaced one copy of $x$ with an independent copy $\tilde{x}$ and reinstated the diagonal of $A$ (see Remark 6.1.3 in Vershynin’s High-Dimensional Probability):

\[\xi_{x^\top F x}(t) \le \xi_{\tilde{x}^\top Ax}(4t).\]

We can now just repeat the rest of the argument for the diagonal-free Hanson–Wright inequality, yielding the same conclusion:

(14)   \[\xi_{x^\top Fx}(t) \le \frac{16v^2\left\|A\right\|_{\rm F}^2\, t^2}{2(1-4v\left\|A\right\|t)}.\]
Combining (11), (13), and (14), we obtain

\begin{align*}\xi_{x^\top Ax-\mathbb{E} [x^\top A x]}(t) &\le \frac{1}{2} \xi_{x^\top D x - \mathbb{E}[x^\top Ax]}(2t) + \frac{1}{2} \xi_{x^\top Fx}(2t) \\&\le \frac{8v^2\|A\|_{\rm F}^2t^2}{2(1-4v\|A\|t)} + \frac{32v^2\left\|A\right\|_{\rm F}^2\, t^2}{2(1-8v\left\|A\right\|t)} \\&\le \frac{40v^2\left\|A\right\|_{\rm F}^2\, t^2}{2(1-8v\left\|A\right\|t)}.\end{align*}

As above, this cgf bound implies the desired probability bound via the Bernstein concentration fact.
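As a final numerical sanity check (my own sketch; the matrix, dimensions, and sample counts below are arbitrary choices), we can compare the empirical upper tail of $x^\top Ax - \mathbb{E}[x^\top Ax]$ for Rademacher entries, which are $1$-subgaussian, against the explicit bound of the theorem:

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 50, 20_000
A = rng.standard_normal((n, n)) / n   # an arbitrary fixed square matrix
v = 1.0                               # Rademacher entries are 1-subgaussian

# Sample the centered quadratic form x^T A x - E[x^T A x] for Rademacher x.
# For Rademacher entries, E[x^T A x] = sum_i A_ii = tr(A).
X = rng.choice([-1.0, 1.0], size=(reps, n))
quad = np.einsum("ri,ij,rj->r", X, A, X) - np.trace(A)

fro = np.linalg.norm(A, "fro")   # ||A||_F
spec = np.linalg.norm(A, 2)      # ||A|| (spectral norm)
for t in [0.5, 1.0, 2.0]:
    empirical = np.mean(quad >= t)
    bound = np.exp(-(t**2 / 2) / (40 * v**2 * fro**2 + 8 * v * spec * t))
    print(f"t={t}: empirical tail {empirical:.4f} <= bound {bound:.4f}")
```

(The explicit bound is quite loose for a generic $A$; the point is only that the inequality is easy to test end to end.)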