Central Limit Theorem


One property of the normal distribution is that the sum of $n$ independent, normally distributed random variables is itself normally distributed. This property holds for every value of $n$. If the random variables are not normally distributed, the property no longer holds exactly, but it remains approximately correct for large $n$.

Let $X_1, \dots, X_n$ be independently and identically distributed random variables with $E(X_i) = \mu$ and $\operatorname{Var}(X_i) = \sigma^2$ for $i = 1, \dots, n$. Then the sum of these random variables is, for large $n$, approximately normally distributed:

$$E\Big(\sum_{i=1}^n X_i\Big) = n\mu, \qquad \operatorname{Var}\Big(\sum_{i=1}^n X_i\Big) = n\sigma^2, \qquad \sum_{i=1}^n X_i \;\overset{\text{approx.}}{\sim}\; N(n\mu,\, n\sigma^2),$$

where $\overset{\text{approx.}}{\sim}$ means "approximately distributed as" for large $n$. Likewise, the mean of these random variables is, for large $n$, approximately normally distributed:

$$E(\bar{X}) = \mu, \qquad \operatorname{Var}(\bar{X}) = \frac{\sigma^2}{n}, \qquad \bar{X} = \frac{1}{n}\sum_{i=1}^n X_i \;\overset{\text{approx.}}{\sim}\; N\Big(\mu,\, \frac{\sigma^2}{n}\Big).$$

This result requires that no single random variable is responsible for most of the variance. The distribution $N(n\mu, n\sigma^2)$ depends on the number of summands $n$, and as $n \to \infty$ it would have infinite mean and infinite variance. The meaning of the theorem can be stated more clearly using standardized sums of random variables.
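The following short simulation is a minimal sketch of this statement, not part of the original example: the exponential summands, the sample size and the number of replications are arbitrary choices made for illustration. It checks that the simulated sums and means have the stated mean and variance, and that the sums behave approximately like draws from a normal distribution.

```python
# Sketch: sums and means of i.i.d. non-normal random variables are approximately normal.
# The Exp(1) summands and the values of n and reps are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

n = 50                 # number of summands per sample
reps = 100_000         # number of simulated samples
mu, sigma2 = 1.0, 1.0  # mean and variance of the Exp(1) distribution

# Each row is one sample X_1, ..., X_n.
samples = rng.exponential(scale=1.0, size=(reps, n))
sums = samples.sum(axis=1)     # Y = X_1 + ... + X_n
means = samples.mean(axis=1)   # X-bar

print("sum:  empirical mean %.2f  vs  n*mu      = %.2f" % (sums.mean(), n * mu))
print("sum:  empirical var  %.2f  vs  n*sigma^2 = %.2f" % (sums.var(), n * sigma2))
print("mean: empirical mean %.3f  vs  mu        = %.3f" % (means.mean(), mu))
print("mean: empirical var  %.4f  vs  sigma^2/n = %.4f" % (means.var(), sigma2 / n))

# For an approximately normal distribution, roughly 68% and 95% of the sums
# should lie within one and two standard deviations of n*mu, respectively.
sd = np.sqrt(n * sigma2)
for k in (1, 2):
    frac = np.mean(np.abs(sums - n * mu) <= k * sd)
    print("P(|Y - n*mu| <= %d sd) ~ %.3f" % (k, frac))
```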

Central Limit Theorem

Let $X_1, X_2, \dots$ be independent and identically distributed random variables with $E(X_i) = \mu$ and $\operatorname{Var}(X_i) = \sigma^2 > 0$. Then the distribution function of the standardized sum

$$Z_n = \frac{\sum_{i=1}^n X_i - n\mu}{\sigma\sqrt{n}}$$

converges as $n \to \infty$ to the standard normal distribution function $\Phi(z)$. In other words, the standardized random variable $Z_n$ is approximately distributed as a standard normal distribution: $Z_n \;\overset{\text{approx.}}{\sim}\; N(0, 1)$.

In this example we illustrate the principle of the Central Limit Theorem. Consider continuous random variables $X_1, X_2, \dots$ that are independently and identically uniformly distributed on an interval $[a, b]$, with expected value and variance

$$E(X_i) = \frac{a + b}{2}, \qquad \operatorname{Var}(X_i) = \frac{(b - a)^2}{12}.$$

Consider the sequence of sums of these variables, where the index of each sum denotes the number of observations in the sample:

$$Y_n = X_1 + X_2 + \dots + X_n.$$

For small values of $n$ we obtain, for example, $Y_2 = X_1 + X_2$ and $Y_3 = X_1 + X_2 + X_3$, together with their densities. All these densities are plotted in the following figure, which also contains a plot of the normal density for comparison:

(Figure: densities of the sums $Y_n$ for increasing $n$, plotted together with a normal density for comparison.)
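A brief simulation sketch can make the same comparison numerically. The interval $[0, 1]$ and the values of $n$ used below are assumptions chosen for illustration; the Kolmogorov–Smirnov distance simply measures how far the distribution of each standardized sum is from a standard normal distribution.

```python
# Sketch of the comparison shown in the figure: standardized sums of uniform
# random variables against the standard normal distribution.
# The interval [0, 1] and the sample sizes n are assumptions for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

a, b = 0.0, 1.0
mu = (a + b) / 2                # mean of U(a, b)
sigma = (b - a) / np.sqrt(12)   # standard deviation of U(a, b)
reps = 200_000

for n in (1, 2, 5, 30):
    x = rng.uniform(a, b, size=(reps, n))
    z = (x.sum(axis=1) - n * mu) / (sigma * np.sqrt(n))  # standardized sum Z_n
    # Kolmogorov-Smirnov distance to N(0, 1): smaller means closer to normal.
    ks = stats.kstest(z, "norm").statistic
    print(f"n = {n:2d}: KS distance to N(0,1) = {ks:.3f}")
```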

The convergence of these distributions towards a normal density can be seen clearly in the figure: as the number of observations increases, the distribution of the sum becomes more and more similar to a normal distribution, and for larger $n$ there is hardly any visible difference. The Central Limit Theorem (in the Lindeberg–Lévy form stated above) is the main reason why the normal distribution is used so commonly. Its practical usefulness derives from the fact that sums and means of independent, identically distributed random variables are approximately normally distributed once the sample size is sufficiently large. The theorem becomes particularly important when deriving the sampling distributions of statistics. Convergence towards the normal distribution is very quick if the distribution of the random variables is symmetric; if the distribution is not symmetric, the convergence is much slower. The Central Limit Theorem has many generalizations (e.g. the Lyapunov CLT for independent but not identically distributed random variables). Furthermore, there are also limit theorems that describe convergence towards other kinds of distributions.
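As a rough check of the remark on symmetry, the following sketch compares symmetric (uniform) and skewed (exponential) summands; both distributions are assumptions made purely for illustration. The sample skewness of the sums should shrink towards 0, the skewness of a normal distribution, and it does so more slowly for the skewed summands.

```python
# Sketch: convergence is quicker for symmetric summands than for skewed ones.
# Uniform (symmetric) and exponential (skewed) summands are assumed here
# purely for illustration; a normal distribution has skewness 0.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
reps = 200_000

def skewness_of_sum(draw, n):
    """Sample skewness of the sum of n i.i.d. draws (0 for a normal law)."""
    x = draw((reps, n)).sum(axis=1)
    return stats.skew(x)

for n in (1, 5, 30, 100):
    sym = skewness_of_sum(lambda size: rng.uniform(0, 1, size), n)
    skw = skewness_of_sum(lambda size: rng.exponential(1.0, size), n)
    print(f"n = {n:3d}: skewness uniform = {sym:+.3f}   exponential = {skw:+.3f}")
```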