Asymptotic distribution


In mathematics and statistics, an asymptotic distribution is a probability distribution that is in a sense the "limiting" distribution of a sequence of distributions. One of the main uses of the idea of an asymptotic distribution is in providing approximations to the cumulative distribution functions of statistical estimators.

Definition


A sequence of distributions corresponds to a sequence of random variables Zi for i = 1, 2, .... In the simplest case, an asymptotic distribution exists if the probability distribution of Zi converges to a probability distribution (the asymptotic distribution) as i increases: see convergence in distribution. A special case of an asymptotic distribution arises when the sequence of random variables approaches zero, that is, Zi → 0 as i approaches infinity. Here the asymptotic distribution is a degenerate distribution, corresponding to the value zero.
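As a concrete illustration of this degenerate case, the following minimal simulation sketch (Python with NumPy assumed; the Exponential(1) population, sample sizes, and replication count are illustrative choices, not part of the article) takes Zn to be the centred sample mean, which tends to zero, so its distribution collapses to a point mass at zero.

```python
import numpy as np

rng = np.random.default_rng(0)

# Z_n = (sample mean of n Exponential(1) draws) - 1 tends to 0 as n grows,
# so its asymptotic distribution is degenerate at zero: the spread of Z_n
# collapses even though each Z_n is still a random variable.
for n in (10, 100, 1000, 10000):
    z_n = rng.exponential(scale=1.0, size=(2000, n)).mean(axis=1) - 1.0
    print(f"n = {n:5d}   mean of Z_n = {z_n.mean():+.4f}   std of Z_n = {z_n.std():.4f}")
```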

However, the most usual sense in which the term asymptotic distribution is used arises where the random variables Zi are modified by two sequences of non-random values. Thus if

$$\frac{Z_i - a_i}{b_i}$$

converges in distribution to a non-degenerate distribution for two sequences {ai} and {bi}, then Zi is said to have that distribution as its asymptotic distribution. If the distribution function of the asymptotic distribution is F then, for large n, the following approximations hold:

$$\Pr\!\left(\frac{Z_n - a_n}{b_n} \le x\right) \approx F(x),$$
$$\Pr(Z_n \le z) \approx F\!\left(\frac{z - a_n}{b_n}\right).$$

If an asymptotic distribution exists, it is not necessarily true that any one outcome of the sequence of random variables is a convergent sequence of numbers. It is the sequence of probability distributions that converges.
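A minimal numerical sketch of the second approximation above, assuming Python with NumPy and SciPy: here Zn is taken to be the sample mean of Exponential(1) variables, with an = 1, bn = 1/sqrt(n), and F the standard normal CDF (these particular choices are illustrative, not prescribed by the article).

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

n = 200                             # sample size (illustrative choice)
a_n, b_n = 1.0, 1.0 / np.sqrt(n)    # centring and scaling sequences for Exp(1)
z = 1.1                             # point at which to compare the two probabilities

# Monte Carlo estimate of P(Z_n <= z), where Z_n is the sample mean
sample_means = rng.exponential(scale=1.0, size=(100_000, n)).mean(axis=1)
empirical = (sample_means <= z).mean()

# Asymptotic approximation F((z - a_n) / b_n) with F the standard normal CDF
approx = norm.cdf((z - a_n) / b_n)

print(f"empirical P(Z_n <= z) = {empirical:.4f}")
print(f"asymptotic approx     = {approx:.4f}")
```

For a sample size this large the two numbers are close; rerunning with a much smaller n shows the approximation degrading, which motivates the finite-sample caveats in the next section.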

Central limit theorem


Perhaps the most common distribution to arise as an asymptotic distribution is the normal distribution. In particular, the central limit theorem provides an example where the asymptotic distribution is the normal distribution.

Central limit theorem
Suppose $\{X_1, X_2, \ldots\}$ is a sequence of i.i.d. random variables with $\mathrm{E}[X_i] = \mu$ and $\operatorname{Var}[X_i] = \sigma^2 < \infty$. Let $S_n$ be the average of $\{X_1, \ldots, X_n\}$. Then as $n$ approaches infinity, the random variables $\sqrt{n}(S_n - \mu)$ converge in distribution to a normal $\mathcal{N}(0, \sigma^2)$:[1]

$$\sqrt{n}\,\bigl(S_n - \mu\bigr) \xrightarrow{d} \mathcal{N}(0, \sigma^2).$$

The central limit theorem gives only an asymptotic distribution. As an approximation for a finite number of observations, it is reasonable only near the peak of the normal distribution; a very large number of observations is needed before the approximation becomes accurate in the tails.
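The following sketch illustrates this point numerically (Python with NumPy and SciPy assumed; the Exponential(1) population, the sample size n = 30, and the thresholds are arbitrary illustrative choices). It compares tail probabilities of the standardized sample mean with those of the limiting normal distribution; agreement is good near the centre and deteriorates, in relative terms, further out in the tail.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

n = 30                    # modest sample size from a skewed population
mu, sigma = 1.0, 1.0      # mean and standard deviation of Exponential(1)

# Standardized sample means sqrt(n) * (S_n - mu) / sigma
sample_means = rng.exponential(scale=1.0, size=(200_000, n)).mean(axis=1)
standardized = np.sqrt(n) * (sample_means - mu) / sigma

# Compare P(standardized mean > t) with the normal tail probability
for t in (0.5, 1.0, 2.0, 3.0):
    empirical = (standardized > t).mean()
    normal_tail = 1.0 - norm.cdf(t)
    print(f"t = {t:.1f}   empirical = {empirical:.4f}   normal limit = {normal_tail:.4f}")
```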

Local asymptotic normality


Local asymptotic normality is a generalization of the central limit theorem. It is a property of a sequence of statistical models which allows the sequence to be asymptotically approximated by a normal location model, after a rescaling of the parameter. An important example in which local asymptotic normality holds is independent and identically distributed sampling from a regular parametric model; in that case the property reduces to the central limit theorem.
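As a sketch of how the property is usually written down (the notation below is standard in the literature but is an addition here, not a quotation from the sources cited in this article), local asymptotic normality at a parameter value θ asserts an expansion of the local log-likelihood ratio of the form

$$\log \frac{dP^{\,n}_{\theta + h/\sqrt{n}}}{dP^{\,n}_{\theta}} = h^{\top} \Delta_{n,\theta} - \tfrac{1}{2}\, h^{\top} I_{\theta}\, h + o_{P^{\,n}_{\theta}}(1), \qquad \Delta_{n,\theta} \xrightarrow{d} \mathcal{N}(0, I_{\theta}).$$

In the i.i.d. case, Δn,θ is the normalized score and Iθ the Fisher information, and the displayed convergence is the central limit theorem applied to the score, which is the sense in which the i.i.d. example reduces to the central limit theorem.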

Barndorff-Nielsen and Cox provide a direct definition of asymptotic normality.[2]


References

1. Billingsley, Patrick (1995). Probability and Measure (Third ed.). John Wiley & Sons. p. 357. ISBN 0-471-00710-2.
2. Barndorff-Nielsen, O. E.; Cox, D. R. (1989). Asymptotic Techniques for Use in Statistics. Chapman and Hall. ISBN 0-412-31400-2.