Let $g \in \mathbb{R}^n$ be a standard Gaussian vector—that is, a vector populated by independent standard normal random variables. What is the expected length $\mathbb{E}\,\|g\|$ of $g$? (Here, and throughout, $\|\cdot\|$ denotes the Euclidean norm of a vector.) The length of $g$ is the square root of the sum of squares of $n$ independent standard normal random variables, and its expected value is given by the exact formula
\[ \mathbb{E}\,\|g\| = \sqrt{2}\,\frac{\Gamma((n+1)/2)}{\Gamma(n/2)}, \]
where $\Gamma(\cdot)$ is the gamma function. If you are familiar with the definition of the gamma function, the derivation of this formula is not too hard—it follows from a change of variables to $n$-dimensional spherical coordinates.
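As a numerical sanity check (my own addition, not part of the derivation), the code below evaluates $\sqrt{2}\,\Gamma((n+1)/2)/\Gamma(n/2)$ and compares it with a Monte Carlo estimate of $\mathbb{E}\,\|g\|$; the helper name `expected_norm` is mine:

```python
import math
import random

def expected_norm(n):
    """sqrt(2) * Gamma((n+1)/2) / Gamma(n/2): the exact value of E||g||."""
    return math.sqrt(2) * math.gamma((n + 1) / 2) / math.gamma(n / 2)

# Monte Carlo estimate of E||g|| for n = 3
random.seed(0)
n, trials = 3, 100_000
estimate = sum(
    math.sqrt(sum(random.gauss(0.0, 1.0) ** 2 for _ in range(n)))
    for _ in range(trials)
) / trials

print(expected_norm(n))  # exact value: 2*sqrt(2/pi) ≈ 1.59577 for n = 3
print(estimate)          # should agree to about two decimal places
```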
This formula can be difficult to interpret and use. Fortunately, we have the rather nice bounds
\[ \frac{n}{\sqrt{n+1}} \le \mathbb{E}\,\|g\| \le \sqrt{n}. \tag{1} \]
This result appears, for example, on page 11 of this paper. These bounds show that, for large $n$, $\mathbb{E}\,\|g\|$ is quite close to $\sqrt{n}$. The authors of the paper remark that this inequality can be proved by induction. I had difficulty reproducing the inductive argument for myself. Fortunately, I found a different proof which I thought was very nice, so I thought I would share it here.
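Before the proof, here is a quick numerical check of the bounds (1) (a sketch of mine, not from the paper); `math.lgamma` keeps the evaluation stable even for large $n$, where `math.gamma` would overflow:

```python
import math

def expected_norm(n):
    # sqrt(2) * Gamma((n+1)/2) / Gamma(n/2), computed in log-space
    # so that large n does not overflow the gamma function
    return math.sqrt(2) * math.exp(math.lgamma((n + 1) / 2) - math.lgamma(n / 2))

for n in [1, 2, 3, 10, 100, 10_000]:
    lower, upper = n / math.sqrt(n + 1), math.sqrt(n)
    assert lower <= expected_norm(n) <= upper
    print(f"n={n}: {lower:.6f} <= {expected_norm(n):.6f} <= {upper:.6f}")
```

Note how quickly the two sides pinch together: already at $n = 100$ they agree to about two decimal places.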
Let us first use Wendel’s inequality to prove (1). Wendel’s inequality states that, for any $x > 0$ and $0 \le s \le 1$,
\[ \left(\frac{x}{x+s}\right)^{1-s} \le \frac{\Gamma(x+s)}{x^{s}\,\Gamma(x)} \le 1. \tag{2} \]
Indeed, invoke Wendel’s inequality (2) with $x = n/2$ and $s = 1/2$ and multiply by $\sqrt{2}\,(n/2)^{1/2} = \sqrt{n}$ to obtain
\[ \sqrt{n}\left(\frac{n}{n+1}\right)^{1/2} \le \sqrt{2}\,\frac{\Gamma((n+1)/2)}{\Gamma(n/2)} = \mathbb{E}\,\|g\| \le \sqrt{n}, \]
which simplifies directly to (1).
Now, let’s prove Wendel’s inequality (2). The key property for us will be the log-convexity of the gamma function: for real numbers $x, y > 0$ and $0 \le \theta \le 1$,
\[ \Gamma\big(\theta x + (1-\theta)y\big) \le \Gamma(x)^{\theta}\,\Gamma(y)^{1-\theta}. \tag{3} \]
We take this property as established and use it to prove Wendel’s inequality. First, use the log-convexity property (3) with $\theta = 1-s$ and the points $x$ and $y = x+1$ to obtain
\[ \Gamma(x+s) = \Gamma\big((1-s)\,x + s\,(x+1)\big) \le \Gamma(x)^{1-s}\,\Gamma(x+1)^{s}. \]
Divide by $\Gamma(x)$ and use the property that $\Gamma(x+1) = x\,\Gamma(x)$ to conclude
\[ \Gamma(x+s) \le x^{s}\,\Gamma(x). \tag{4} \]
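This step can be spot-checked numerically (an illustration of mine, not part of the argument): working in log-space, the interpolation between $\Gamma(x)$ and $\Gamma(x+1)$ and the resulting upper bound $\Gamma(x+s) \le x^s\,\Gamma(x)$ read as follows:

```python
import math

for x in [0.5, 1.0, 2.0, 7.3]:
    for s in [0.1, 0.25, 0.5, 0.9]:
        # log-convexity (3): Gamma((1-s)x + s(x+1)) <= Gamma(x)^(1-s) * Gamma(x+1)^s
        assert math.lgamma(x + s) <= (1 - s) * math.lgamma(x) + s * math.lgamma(x + 1)
        # upper bound (4): Gamma(x+s) <= x^s * Gamma(x)
        assert math.lgamma(x + s) <= s * math.log(x) + math.lgamma(x)
print("upper bound (4) verified on sample points")
```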
This proves the upper bound in Wendel’s inequality (2). To prove the lower bound, invoke the upper bound (4) with $x+s$ in place of $x$ and $1-s$ in place of $s$ to obtain
\[ \Gamma(x+1) = \Gamma\big((x+s) + (1-s)\big) \le (x+s)^{1-s}\,\Gamma(x+s). \]
Multiplying by $(x+s)^{s-1}$, dividing by $x^{s}\,\Gamma(x)$, and using again the property $\Gamma(x+1) = x\,\Gamma(x)$ yields
\[ \left(\frac{x}{x+s}\right)^{1-s} \le \frac{\Gamma(x+s)}{x^{s}\,\Gamma(x)}, \]
finishing the proof of Wendel’s inequality.
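As a final sanity check on the result just proved (a numerical sketch of mine, not part of the proof), we can confirm both sides of Wendel’s inequality (2) on a grid of values:

```python
import math

def wendel_ratio(x, s):
    # Gamma(x+s) / (x^s * Gamma(x)), the middle term of (2), in log-space
    return math.exp(math.lgamma(x + s) - s * math.log(x) - math.lgamma(x))

for x in [0.5, 1.0, 3.0, 25.0]:
    for s in [0.1, 0.5, 0.9]:
        r = wendel_ratio(x, s)
        assert (x / (x + s)) ** (1 - s) <= r <= 1.0
print("Wendel's inequality (2) verified on sample points")
```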
Notes. The upper bound in (1) can be proven directly by Lyapunov’s inequality: $\mathbb{E}\,\|g\| \le \big(\mathbb{E}\,\|g\|^{2}\big)^{1/2} = \sqrt{n}$, where we use the fact that $\|g\|^{2} = g_1^2 + \cdots + g_n^2$ is the sum of $n$ random variables with mean one. The weaker lower bound $\mathbb{E}\,\|g\| \ge n/\sqrt{n+2}$ follows from a weaker version of Wendel’s inequality, Gautschi’s inequality.
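For completeness, Gautschi’s inequality states that $x^{1-s} \le \Gamma(x+1)/\Gamma(x+s) \le (x+1)^{1-s}$ for $x > 0$ and $0 < s < 1$; substituting $x = n/2$ and $s = 1/2$ as before yields the weaker lower bound $\mathbb{E}\,\|g\| \ge n/\sqrt{n+2}$. A numerical spot-check of both claims (my own illustration):

```python
import math

def gautschi_holds(x, s):
    # Gautschi: x^(1-s) <= Gamma(x+1)/Gamma(x+s) <= (x+1)^(1-s) for 0 < s < 1
    ratio = math.exp(math.lgamma(x + 1) - math.lgamma(x + s))
    return x ** (1 - s) <= ratio <= (x + 1) ** (1 - s)

for x in [0.5, 1.5, 8.0, 50.0]:
    for s in [0.1, 0.5, 0.9]:
        assert gautschi_holds(x, s)

# The resulting weaker lower bound on E||g||: n / sqrt(n + 2)
for n in [1, 5, 50]:
    enorm = math.sqrt(2) * math.exp(math.lgamma((n + 1) / 2) - math.lgamma(n / 2))
    assert n / math.sqrt(n + 2) <= enorm
print("Gautschi's inequality and the weaker lower bound verified")
```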
A lower bound of $\sqrt{n-1}$ can also be obtained from the Gaussian Poincaré inequality. To prove this lower bound, set $f(x) = \|x\|$, which has gradient $\nabla f(x) = x/\|x\|$ of unit Euclidean norm. Thus, the Gaussian Poincaré inequality gives
\[ \operatorname{Var}(\|g\|) \le \mathbb{E}\,\|\nabla f(g)\|^{2} = 1. \]
Rearrange to obtain $\mathbb{E}\,\|g\| = \big(\mathbb{E}\,\|g\|^{2} - \operatorname{Var}(\|g\|)\big)^{1/2} \ge \sqrt{n-1}$.
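To double-check this variance argument numerically (again a sketch of mine), we can compute $\operatorname{Var}(\|g\|) = \mathbb{E}\,\|g\|^{2} - (\mathbb{E}\,\|g\|)^{2} = n - (\mathbb{E}\,\|g\|)^{2}$ exactly from the gamma-function formula, and confirm that it never exceeds one, which forces $\mathbb{E}\,\|g\| \ge \sqrt{n-1}$:

```python
import math

def expected_norm(n):
    # sqrt(2) * Gamma((n+1)/2) / Gamma(n/2), in log-space for stability
    return math.sqrt(2) * math.exp(math.lgamma((n + 1) / 2) - math.lgamma(n / 2))

for n in [1, 2, 3, 10, 100, 1000]:
    var = n - expected_norm(n) ** 2      # Var||g|| = E||g||^2 - (E||g||)^2
    assert 0 <= var <= 1                 # the variance bound from the note above
    assert expected_norm(n) >= math.sqrt(n - 1)
print("variance bound and lower bound sqrt(n-1) verified")
```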