Published on December 29, 2017

Let \(X\) be a one-dimensional normal variable with mean 0 and standard deviation \(\sigma\). What is the limit of \(P(X>a|\sigma)\) when \(\sigma\) tends to infinity?

We can compute the probability \(P(X>a|\sigma)\) as an integral of the normal density function:

\[ P(X>a|\sigma)=\int_{a}^{+\infty}\frac{1}{\sqrt{2\pi}\sigma} e^{-x^2/2\sigma^2}dx. \]

As \(\sigma\) goes to infinity, \(\frac{1}{\sqrt{2\pi}\sigma}\) goes to 0, and \(e^{-x^2/2\sigma^2}\) goes to 1. Their product, therefore, goes to 0, and so

\[ \lim_{\sigma\to+\infty}P(X>a|\sigma)= \int_{a}^{+\infty}\lim_{\sigma\to+\infty} \frac{1}{\sqrt{2\pi}\sigma} e^{-x^2/2\sigma^2}dx =0. \]
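The pointwise claim itself is easy to confirm numerically. A minimal sketch (the density is the one from the integral above):

```python
from math import sqrt, pi, exp

def normal_density(x, sigma):
    """Density of a zero-mean normal variable with standard deviation sigma."""
    return exp(-x ** 2 / (2 * sigma ** 2)) / (sqrt(2 * pi) * sigma)

# At any fixed point, the density value shrinks toward 0 as sigma grows:
values = [normal_density(1.0, sigma) for sigma in (1, 10, 100, 1000)]
```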

But this does not make sense, does it? As the standard deviation increases, the normal distribution becomes more spread out, and the probability that \(X\) will exceed a fixed threshold \(a\) should increase, not fall to 0.
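We can see this numerically. For a zero-mean normal variable, \(P(X>a|\sigma)=\frac12\operatorname{erfc}\left(\frac{a}{\sigma\sqrt2}\right)\), so the tail probability is one call to `math.erfc`:

```python
from math import erfc, sqrt

def tail_prob(a, sigma):
    """P(X > a) for X ~ Normal(0, sigma^2)."""
    return 0.5 * erfc(a / (sigma * sqrt(2)))

# The tail probability grows with sigma; it does not fall to 0:
probs = [tail_prob(1.0, sigma) for sigma in (1, 10, 100, 1000)]
```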

Indeed, the same argument could be used to “show” that the probability \(P(X\leq a|\sigma)\) also tends to 0 as \(\sigma\) increases. So where does all the probability go?

The problem in our reasoning is swapping the integral and the limit operations, i.e. equating

\[ \lim_{\sigma\to+\infty}\int_{a}^{+\infty} \frac{1}{\sqrt{2\pi}\sigma} e^{-x^2/2\sigma^2}dx \]

to

\[ \int_{a}^{+\infty}\lim_{\sigma\to+\infty} \frac{1}{\sqrt{2\pi}\sigma} e^{-x^2/2\sigma^2}dx. \]

But why exactly this is a problem requires a little digging.

We might suspect that the reason swapping the integral and the limit didn’t work is that we are integrating over an infinite interval \([a,+\infty)\); i.e. that our integral is *improper*.

One reason to think so is that we would be right to conclude that \[\lim_{\sigma\to+\infty} P(a\leq X\leq b|\sigma) = 0,\] where the integration happens over a finite interval \([a,b]\).

The improperness of the integral does play a role, as we shall see below; but by itself, properness is neither necessary nor sufficient to swap the integral and the limit.

A. Ya. Dorogovtsev in his mathematical analysis textbook gives the following example (section 13.2.3):

\[ f(x,y)=\frac1y\left(1-x^{1/y}\right)x^{1/y}, \]

where \(x\in[1/2,1]\) and \(y\in(0,1]\).

For any given \(y\), \(f(\cdot,y):[1/2,1]\to\mathbb{R}\) is a continuous function on a closed interval, so \(\int_{1/2}^1 f(x,y)dx\) is proper. And yet, it can be shown that

\[ \forall x\in[1/2,1]\; \lim_{y\to 0+} f(x,y) = 0 \] while \[ \lim_{y\to 0+}\int_{1/2}^1 f(x,y)dx = 1/2. \]
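Both claims are easy to check numerically; a midpoint Riemann sum is enough, although the integrand is a narrow spike near \(x=1\), so the grid has to be reasonably fine:

```python
from math import fsum

def f(x, y):
    """Dorogovtsev's example: (1/y) * (1 - x**(1/y)) * x**(1/y)."""
    t = x ** (1.0 / y)
    return (1.0 - t) * t / y

# Pointwise: at a fixed x in [1/2, 1), f(x, y) vanishes as y -> 0+.
pointwise = f(0.9, 0.001)          # astronomically small

# But the integral over [1/2, 1] does not vanish (midpoint Riemann sum):
def integral(y, n=200_000):
    a, b = 0.5, 1.0
    h = (b - a) / n
    return fsum(f(a + (i + 0.5) * h, y) * h for i in range(n))

approx = integral(0.01)            # already close to 1/2
```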

Let’s look at the graph of \(f(x,y)\) for different values of \(y\) to see what’s going on.

On the one hand, at any given \(x\), \(f(x,y)\) tends to 0. (Look at how the function value changes for a fixed \(x\), say, \(x=0.8\) or \(x=0.9\). It may be less obvious that the same is true for \(x=0.99\); you’ll have to trust that the trend continues or verify it analytically.)

On the other hand, the function as a whole does not seem to converge to 0. This is formalized by the notion of *uniform convergence*.

We say that \(g(x,y)\) converges *uniformly* to \(g(x)\) as \(y\to y_0\) iff the maximum vertical distance between the graphs of \(g(x,y)\) and \(g(x)\), i.e. \(\sup_x |g(x,y)-g(x)|\), goes to 0.

Uniform convergence on \([a,b]\) is sufficient (although not necessary) to replace \(\lim_{y\to y_0}\int_a^b g(x,y)dx\) with \(\int_a^b \lim_{y\to y_0}g(x,y)dx\).

Clearly, our \(f(x,y)\) converges to 0 non-uniformly: not only does \(\sup_x |f(x,y)|\) not tend to 0, it tends to infinity. Non-uniform convergence allows a traveling wave. Such a wave can contribute to the integral while escaping the point-wise convergence analysis (the function converges at every point, but only after the wave has passed).
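The height of the wave can be computed explicitly: substituting \(t=x^{1/y}\), \(f\) becomes \(\frac1y t(1-t)\), which is maximized at \(t=1/2\), i.e. at \(x=(1/2)^y\), where it equals \(1/(4y)\). A grid search agrees:

```python
def f(x, y):
    t = x ** (1.0 / y)
    return (1.0 - t) * t / y

def sup_f(y, n=50_000):
    """Grid approximation of sup over [1/2, 1] of |f(., y)|."""
    h = 0.5 / n
    return max(f(0.5 + i * h, y) for i in range(n + 1))

# The peak height 1/(4y) grows without bound as y -> 0+:
peaks = [sup_f(y) for y in (0.1, 0.05, 0.01)]
```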

Let’s revisit our first example, the limit \[ \lim_{\sigma\to+\infty}P(X>a|\sigma) = \lim_{\sigma\to+\infty}\int_{a}^{+\infty} \frac{1}{\sqrt{2\pi}\sigma} e^{-x^2/2\sigma^2}dx. \]

Here again there is a wave traveling to the right (and a symmetric one traveling to the left, not shown here).

Because the wave travels with a finite speed, we may suspect non-uniform convergence: no matter how large \(\sigma\) is, there are always distant enough points at which the density will temporarily increase as \(\sigma\) continues to increase.

And yet, the height of the wave diminishes, and \(\frac{1}{\sqrt{2\pi}\sigma} e^{-x^2/2\sigma^2}\) does converge to 0 uniformly:

\[ \sup_x \left|\frac{1}{\sqrt{2\pi}\sigma} e^{-x^2/2\sigma^2}\right|= \frac{1}{\sqrt{2\pi}\sigma}\to 0\quad (\sigma\to+\infty). \]
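(The supremum is attained at \(x=0\), where the exponential factor equals 1. A quick check that this uniform distance shrinks:)

```python
from math import sqrt, pi, exp

def density(x, sigma):
    """Density of a zero-mean normal variable with standard deviation sigma."""
    return exp(-x ** 2 / (2 * sigma ** 2)) / (sqrt(2 * pi) * sigma)

# sup over x is attained at x = 0 and equals 1/(sqrt(2*pi)*sigma):
sups = [density(0.0, sigma) for sigma in (1, 10, 100)]
```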

The reason why this is not enough to bring the limit under the integral is that here we are dealing with an improper integral. Improper integrals are themselves limits; recall that the improper integral is defined as

\[ \int_{a}^{+\infty} \frac{1}{\sqrt{2\pi}\sigma} e^{-x^2/2\sigma^2}dx = \lim_{b\to+\infty}\int_{a}^{b} \frac{1}{\sqrt{2\pi}\sigma} e^{-x^2/2\sigma^2}dx. \]

Before we can bring \(\lim_{\sigma\to+\infty}\) under \(\int_a^b\), we need to sneak it past \(\lim_{b\to+\infty}\) first. For that, the limit \[\lim_{b\to+\infty}\int_{a}^{b} \frac{1}{\sqrt{2\pi}\sigma} e^{-x^2/2\sigma^2}dx\] itself needs to converge uniformly as a function of \(\sigma\); i.e. \[\sup_{\sigma} \left|\int_{b}^{+\infty} \frac{1}{\sqrt{2\pi}\sigma} e^{-x^2/2\sigma^2}dx\right|\to 0\quad (b\to+\infty).\]

Look, this is the same kind of quantity that we started with: because the integral is a positive and increasing function of \(\sigma\),

\[ \begin{split} \sup_{\sigma} \left|\int_{b}^{+\infty} \frac{1}{\sqrt{2\pi}\sigma} e^{-x^2/2\sigma^2}dx\right|&= \lim_{\sigma\to+\infty}\int_{b}^{+\infty} \frac{1}{\sqrt{2\pi}\sigma} e^{-x^2/2\sigma^2}dx\\&= \lim_{\sigma\to+\infty}P(X>b|\sigma). \end{split} \]

If we assume that \(\forall b\:\lim_{\sigma\to+\infty}P(X>b|\sigma)=0\), then the integral converges uniformly, we can bring the limit under the integral, and prove that indeed, \(\forall a\:\lim_{\sigma\to+\infty}P(X>a|\sigma)=0\).

But the limit being zero is exactly what we set out to prove, so there is no independent reason to believe it. This explains why our calculation yielding 0 was wrong: the integral does not converge uniformly.
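Numerically, the non-uniformity is plain to see. Using the closed form \(P(X>b|\sigma)=\frac12\operatorname{erfc}\left(\frac{b}{\sigma\sqrt2}\right)\):

```python
from math import erfc, sqrt

def tail(b, sigma):
    """P(X > b) for X ~ Normal(0, sigma^2)."""
    return 0.5 * erfc(b / (sigma * sqrt(2)))

# However large b is, a large enough sigma keeps the tail mass near 1/2,
# so sup over sigma of the tail beyond b does not vanish as b grows:
sups = [tail(b, 1e9) for b in (1.0, 100.0, 10_000.0)]
```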

The actual calculation of \(\lim_{\sigma\to+\infty}P(X>a|\sigma)\) is simple: because \(\forall a>0\:\lim_{\sigma\to+\infty}P(-a\leq X\leq a|\sigma)=0\) (thanks to uniform convergence!), all the probability mass goes to infinity, and because of the symmetry, each tail gets half of it; so

\[\lim_{\sigma\to+\infty}P(X>a|\sigma)=1/2.\]
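As a final sanity check: for large \(\sigma\), the mass in a fixed central interval is negligible, and each tail holds about half of the total.

```python
from math import erfc, sqrt

def tail(a, sigma):
    """P(X > a) for X ~ Normal(0, sigma^2)."""
    return 0.5 * erfc(a / (sigma * sqrt(2)))

sigma = 1e6
middle = 1.0 - 2.0 * tail(3.0, sigma)   # P(-3 <= X <= 3): essentially 0
half = tail(3.0, sigma)                 # essentially 1/2
```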

But I thought it would be instructive to investigate why the naive calculation did not work, so here we are.