Let $g = (g_1, \ldots, g_n) \in \mathbb{R}^n$ be a standard Gaussian vector—that is, a vector populated by independent standard normal random variables. What is the expected length $\mathbb{E}\|g\|$ of $g$? (Here, and throughout, $\|\cdot\|$ denotes the Euclidean norm of a vector.) The length of $g$ is the square root of the sum of $n$ independent squared standard normal random variables

$$\|g\| = \sqrt{g_1^2 + g_2^2 + \cdots + g_n^2},$$

which is known as a $\chi$ random variable with $n$ degrees of freedom. (Not to be confused with a $\chi^2$ random variable!) Its mean value is given by the rather unpleasant formula

$$\mathbb{E}\|g\| = \sqrt{2}\, \frac{\Gamma((n+1)/2)}{\Gamma(n/2)},$$

where $\Gamma(\cdot)$ is the gamma function. If you are familiar with the definition of the gamma function, the derivation of this formula is not too hard—it follows from a change of variables to $n$-dimensional spherical coordinates.
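As a quick sanity check, we can compare this formula against a Monte Carlo estimate (a small sketch; the helper names are mine):

```python
import math
import random

def chi_mean(n):
    # Exact mean of a chi random variable with n degrees of freedom:
    # E||g|| = sqrt(2) * Gamma((n+1)/2) / Gamma(n/2)
    return math.sqrt(2) * math.gamma((n + 1) / 2) / math.gamma(n / 2)

def chi_mean_mc(n, trials=100_000, seed=0):
    # Monte Carlo estimate of E||g|| for a standard Gaussian vector in R^n
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += math.sqrt(sum(rng.gauss(0, 1) ** 2 for _ in range(n)))
    return total / trials

print(chi_mean(10))     # exact value, about 3.084
print(chi_mean_mc(10))  # Monte Carlo estimate; should agree to about two decimal places
```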
This formula can be difficult to interpret and use. Fortunately, we have the rather nice bounds

(1) $$\frac{n}{\sqrt{n+1}} \le \mathbb{E}\|g\| \le \sqrt{n}.$$

This result appears, for example, on page 11 of this paper. These bounds show that, for large $n$, $\mathbb{E}\|g\|$ is quite close to $\sqrt{n}$. The authors of the paper remark that this inequality can be proved by induction. I had difficulty reproducing the inductive argument for myself. Fortunately, I found a different proof which I thought was very nice, so I thought I would share it here.
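The bounds (1) are easy to verify numerically (a quick sketch; `chi_mean` is my name for the exact formula above):

```python
import math

def chi_mean(n):
    # E||g|| = sqrt(2) * Gamma((n+1)/2) / Gamma(n/2)
    return math.sqrt(2) * math.gamma((n + 1) / 2) / math.gamma(n / 2)

# Check n/sqrt(n+1) <= E||g|| <= sqrt(n) for a range of dimensions.
# (math.gamma overflows for very large arguments, so we stop at n = 300.)
for n in range(1, 301):
    assert n / math.sqrt(n + 1) <= chi_mean(n) <= math.sqrt(n)

# The ratio to sqrt(n) tends to 1:
print(chi_mean(100) / math.sqrt(100))  # about 0.9975
```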
Our core tool will be Wendel’s inequality (see (7) in Wendel’s original paper): For $x > 0$ and $0 < s < 1$, we have

(2) $$\left( \frac{x}{x+s} \right)^{1-s} \le \frac{\Gamma(x+s)}{x^s\, \Gamma(x)} \le 1.$$
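A quick numerical spot-check of (2) over a few values of $x$ and $s$ (helper name mine):

```python
import math

def wendel_holds(x, s):
    # Check (x/(x+s))**(1-s) <= Gamma(x+s) / (x**s * Gamma(x)) <= 1
    middle = math.gamma(x + s) / (x ** s * math.gamma(x))
    return (x / (x + s)) ** (1 - s) <= middle <= 1

assert all(wendel_holds(x, s)
           for x in (0.5, 1.0, 3.7, 50.0)
           for s in (0.1, 0.5, 0.9))
print("Wendel's inequality holds on all test points")
```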
Let us first use Wendel’s inequality to prove (1). Indeed, invoke Wendel’s inequality with $x = n/2$ and $s = 1/2$ and multiply by $\sqrt{n} = \sqrt{2} \cdot (n/2)^{1/2}$ to obtain

$$\sqrt{n} \left( \frac{n/2}{(n+1)/2} \right)^{1/2} \le \sqrt{2}\, \frac{\Gamma((n+1)/2)}{\Gamma(n/2)} \le \sqrt{n},$$

which simplifies directly to (1).
Now, let’s prove Wendel’s inequality (2). The key property for us will be the strict log-convexity of the gamma function: For real numbers $x, y > 0$ and $0 < \theta < 1$,

(3) $$\Gamma(\theta x + (1 - \theta) y) \le \Gamma(x)^{\theta}\, \Gamma(y)^{1-\theta}.$$
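Property (3) can itself be spot-checked numerically; working on the log scale with `math.lgamma` avoids overflow for large arguments (helper name mine):

```python
import math

def log_convex(x, y, theta):
    # Check Gamma(theta*x + (1-theta)*y) <= Gamma(x)**theta * Gamma(y)**(1-theta)
    # on the log scale, with a tiny tolerance for floating-point error.
    lhs = math.lgamma(theta * x + (1 - theta) * y)
    rhs = theta * math.lgamma(x) + (1 - theta) * math.lgamma(y)
    return lhs <= rhs + 1e-12

assert all(log_convex(x, y, t)
           for x in (0.3, 2.0, 10.0)
           for y in (0.7, 5.0, 400.0)
           for t in (0.25, 0.5, 0.75))
print("log-convexity holds on all test points")
```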
We take this property as established and use it to prove Wendel’s inequality. First, use the log-convexity property (3) with $\theta = 1 - s$ applied to the points $x$ and $x + 1$ (so that $\theta x + (1 - \theta)(x + 1) = x + s$) to obtain

$$\Gamma(x+s) \le \Gamma(x)^{1-s}\, \Gamma(x+1)^{s}.$$

Divide by $\Gamma(x)$ and use the property that $\Gamma(x+1) = x\, \Gamma(x)$ to conclude

(4) $$\frac{\Gamma(x+s)}{x^s\, \Gamma(x)} \le 1.$$
This proves the upper bound in Wendel’s inequality (2). To prove the lower bound, invoke the upper bound (4) with $x + s$ in place of $x$ and $1 - s$ in place of $s$ to obtain

$$\Gamma(x+1) \le (x+s)^{1-s}\, \Gamma(x+s).$$

Using $\Gamma(x+1) = x\, \Gamma(x)$ again and dividing by $x^s\, \Gamma(x)\, (x+s)^{1-s}$ yields

$$\left( \frac{x}{x+s} \right)^{1-s} \le \frac{\Gamma(x+s)}{x^s\, \Gamma(x)},$$

finishing the proof of Wendel’s inequality.
Notes. The upper bound in (1) can be proven directly by Lyapunov’s inequality: $\mathbb{E}\|g\| \le (\mathbb{E}\|g\|^2)^{1/2} = \sqrt{n}$, where we use the fact that $\|g\|^2$ is the sum of $n$ random variables $g_i^2$ with mean one. The weaker lower bound $\mathbb{E}\|g\| \ge \sqrt{n-1}$ follows from a weaker version of Wendel’s inequality, Gautschi’s inequality.
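The bound $\sqrt{n-1}$ is indeed weaker than the lower bound in (1): squaring shows $n/\sqrt{n+1} \ge \sqrt{n-1}$, since $n^2 \ge n^2 - 1$. A numerical check of this ordering (reusing the exact formula under my helper name `chi_mean`):

```python
import math

def chi_mean(n):
    # E||g|| = sqrt(2) * Gamma((n+1)/2) / Gamma(n/2)
    return math.sqrt(2) * math.gamma((n + 1) / 2) / math.gamma(n / 2)

for n in range(1, 301):
    # The Wendel bound n/sqrt(n+1) sits between sqrt(n-1) and E||g||.
    assert math.sqrt(n - 1) <= n / math.sqrt(n + 1) <= chi_mean(n)
```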
After the initial publication of this post, Sam Buchanan mentioned another proof of the lower bound $\mathbb{E}\|g\| \ge \sqrt{n-1}$ using the Gaussian Poincaré inequality. This inequality states that, for a (sufficiently nice, say Lipschitz) function $f : \mathbb{R}^n \to \mathbb{R}$,

$$\operatorname{Var}(f(g)) \le \mathbb{E}\, \|\nabla f(g)\|^2.$$

To prove the lower bound, set $f(x) = \|x\|$, which has gradient $\nabla f(x) = x / \|x\|$ of unit norm. Thus,

$$\mathbb{E}\|g\|^2 - (\mathbb{E}\|g\|)^2 = \operatorname{Var}(\|g\|) \le \mathbb{E} \left\| \frac{g}{\|g\|} \right\|^2 = 1.$$

Rearrange to obtain $\mathbb{E}\|g\| \ge (\mathbb{E}\|g\|^2 - 1)^{1/2} = \sqrt{n-1}$.
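A Monte Carlo sketch of the variance bound $\operatorname{Var}(\|g\|) \le 1$ (sample size, seed, and helper name are my own choices):

```python
import math
import random
import statistics

def norm_samples(n, trials=50_000, seed=0):
    # Monte Carlo samples of ||g|| for a standard Gaussian vector in R^n
    rng = random.Random(seed)
    return [math.sqrt(sum(rng.gauss(0, 1) ** 2 for _ in range(n)))
            for _ in range(trials)]

samples = norm_samples(10)
print(statistics.pvariance(samples))  # close to 0.49, comfortably below 1
print(statistics.mean(samples))       # close to 3.08, above sqrt(10 - 1) = 3
```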