
# Berry–Esseen theorem

*Article Id: WHEBN0000508012 · Title: Berry–Esseen theorem · Author: World Heritage Encyclopedia · Language: English · Publisher: World Heritage Encyclopedia*


In probability theory, the central limit theorem states that, under certain circumstances, the probability distribution of the scaled mean of a random sample converges to a normal distribution as the sample size increases to infinity. Under stronger assumptions, the Berry–Esseen theorem, or Berry–Esseen inequality, gives a more quantitative result: it specifies the rate at which this convergence takes place by bounding the maximal error of approximation between the normal distribution and the true distribution of the scaled sample mean. The approximation is measured by the Kolmogorov–Smirnov distance. In the case of independent samples, the convergence rate is n−1/2, where n is the sample size, and the constant is estimated in terms of the third absolute normalized moment.

## Contents

• Statement of the theorem
  • Identically distributed summands
  • Non-identically distributed summands
• References

## Statement of the theorem

Statements of the theorem vary, as it was independently discovered by two mathematicians, Andrew C. Berry (in 1941) and Carl-Gustav Esseen (1942), who then, along with other authors, refined it repeatedly over subsequent decades.

### Identically distributed summands

One version, sacrificing generality somewhat for the sake of clarity, is the following:

There exists a positive constant C such that if X1, X2, ..., are i.i.d. random variables with E(X1) = 0, E(X1²) = σ² > 0, and E(|X1|³) = ρ < ∞, and if we define
Y_n = {X_1 + X_2 + \cdots + X_n \over n}
the sample mean, with Fn the cumulative distribution function of
{Y_n \sqrt{n} \over {\sigma}},
and Φ the cumulative distribution function of the standard normal distribution, then for all x and n,
\left|F_n(x) - \Phi(x)\right| \le {C \rho \over \sigma^3\,\sqrt{n}}.\ \ \ \ (1)

(Figure: illustration of the difference in cumulative distribution functions alluded to in the theorem.)

That is: given a sequence of independent and identically distributed random variables, each having mean zero and positive variance, if additionally the third absolute moment is finite, then the cumulative distribution functions of the standardized sample mean and the standard normal distribution differ (vertically, on a graph) by no more than the specified amount. Note that the approximation error for all n (and hence the rate of convergence for sufficiently large n) is bounded by the order of n−1/2.
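As a numeric illustration (not part of the original article), the bound in (1) can be checked by Monte Carlo simulation. The sketch below assumes Rademacher summands (X = ±1 with equal probability, so σ² = 1 and ρ = E|X|³ = 1) and uses the constant C = 0.4748 quoted later in the article; the function names are invented for this example.

```python
import math
import random

def phi(x):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def empirical_kolmogorov_distance(n, trials=20000, seed=0):
    """Monte Carlo estimate of sup_x |F_n(x) - Phi(x)| for standardized
    sums of n Rademacher variables (+1/-1 with probability 1/2 each)."""
    rng = random.Random(seed)
    sums = sorted(
        sum(rng.choice((-1.0, 1.0)) for _ in range(n)) / math.sqrt(n)
        for _ in range(trials)
    )
    # The sup of |empirical CDF - Phi| is attained just before or just
    # after a jump of the empirical CDF, so check both sides of each point.
    dist = 0.0
    for i, s in enumerate(sums):
        dist = max(dist, abs((i + 1) / trials - phi(s)), abs(i / trials - phi(s)))
    return dist

n = 100
sigma, rho, C = 1.0, 1.0, 0.4748      # Rademacher: sigma^2 = 1, E|X|^3 = 1
bound = C * rho / (sigma ** 3 * math.sqrt(n))
print(f"Berry-Esseen bound: {bound:.4f}")   # 0.0475
print(f"observed distance:  {empirical_kolmogorov_distance(n):.4f}")
```

The observed distance is a noisy estimate (accuracy of order 1/√trials), but for this lattice distribution it typically lands not far below the theoretical bound, showing the n−1/2 rate is essentially sharp.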

Calculated values of the constant C have decreased markedly over the years, from the original value of 7.59 by Esseen (1942), to 0.7882 by van Beek (1972), then 0.7655 by Shiganov (1986), then 0.7056 by Shevtsova (2007), then 0.7005 by Shevtsova (2008), then 0.5894 by Tyurin (2009), then 0.5129 by Korolev & Shevtsova (2009), then 0.4785 by Tyurin (2010). A detailed review can be found in Korolev & Shevtsova (2009) and Korolev & Shevtsova (2010). The best estimate as of 2012, C < 0.4748, follows from the inequality

\sup_{x\in\mathbb R}\left|F_n(x) - \Phi(x)\right| \le {0.33554 (\rho+0.415\sigma^3)\over \sigma^3\,\sqrt{n}},

due to Shevtsova (2011), since σ³ ≤ ρ and 0.33554 · 1.415 < 0.4748. However, if ρ ≥ 1.286σ³, then the estimate

\sup_{x\in\mathbb R}\left|F_n(x) - \Phi(x)\right| \le {0.3328 (\rho+0.429\sigma^3)\over \sigma^3\,\sqrt{n}},

which is also proved in Shevtsova (2011), gives an even tighter upper estimate.
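The trade-off between the two estimates is easy to see numerically. The short Python sketch below (illustrative only; the coefficients are copied verbatim from the two inequalities above, and the function name is invented) evaluates both right-hand sides:

```python
import math

def shevtsova_bounds(rho, sigma, n):
    """The two Shevtsova (2011) upper estimates on sup_x |F_n(x) - Phi(x)|."""
    b1 = 0.33554 * (rho + 0.415 * sigma ** 3) / (sigma ** 3 * math.sqrt(n))
    b2 = 0.3328 * (rho + 0.429 * sigma ** 3) / (sigma ** 3 * math.sqrt(n))
    return b1, b2

# With rho = sigma^3 (the smallest ratio allowed by Lyapunov's inequality),
# the first bound is tighter:
print(shevtsova_bounds(rho=1.0, sigma=1.0, n=100))
# With rho >= 1.286 sigma^3 the second bound takes over:
print(shevtsova_bounds(rho=1.5, sigma=1.0, n=100))
```

Setting the two right-hand sides equal recovers the crossover ratio ρ/σ³ ≈ 1.286 stated above.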

Esseen (1956) proved that the constant also satisfies the lower bound

C\geq\frac{\sqrt{10}+3}{6\sqrt{2\pi}} \approx 0.40973 \approx \frac{1}{\sqrt{2\pi}} + 0.01079 .

### Non-identically distributed summands

Let X1, X2, ..., be independent random variables with E(Xi) = 0, E(Xi²) = σi² > 0, and E(|Xi|³) = ρi < ∞. Also, let
S_n = {X_1 + X_2 + \cdots + X_n \over \sqrt{\sigma_1^2+\sigma_2^2+\cdots+\sigma_n^2} }
be the normalized n-th partial sum. Denote by Fn the cdf of Sn, and by Φ the cdf of the standard normal distribution. For the sake of convenience denote
\vec{\sigma}=(\sigma_1,\ldots,\sigma_n),\ \vec{\rho}=(\rho_1,\ldots,\rho_n).
In 1941, Andrew C. Berry proved that for all n there exists an absolute constant C1 such that
\sup_{x\in\mathbb R}\left|F_n(x) - \Phi(x)\right| \le C_1\cdot\psi_1,\ \ \ \ (2)
where
\psi_1=\psi_1\big(\vec{\sigma},\vec{\rho}\big)=\Big({\textstyle\sum\limits_{i=1}^n\sigma_i^2}\Big)^{-1/2}\cdot\max_{1\le i\le n}\frac{\rho_i}{\sigma_i^2}.
Independently, in 1942, Carl-Gustav Esseen proved that for all n there exists an absolute constant C0 such that
\sup_{x\in\mathbb R}\left|F_n(x) - \Phi(x)\right| \le C_0\cdot\psi_0, \ \ \ \ (3)
where
\psi_0=\psi_0\big(\vec{\sigma},\vec{\rho}\big)=\Big({\textstyle\sum\limits_{i=1}^n\sigma_i^2}\Big)^{-3/2}\cdot\sum\limits_{i=1}^n\rho_i.

It is easy to check that ψ0 ≤ ψ1. For this reason, inequality (3) is conventionally called the Berry–Esseen inequality, and the quantity ψ0 is called the Lyapunov fraction of the third order. Moreover, in the case where the summands X1, ..., Xn have identical distributions,

\psi_0=\psi_1=\frac{\rho_1}{\sigma_1^3\sqrt{n}},

and thus the bounds stated by inequalities (1), (2) and (3) coincide.
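Both relations are straightforward to verify numerically. The sketch below (illustrative; the function names are not from the literature) computes ψ0 and ψ1 directly from their definitions and confirms that they coincide in the identically distributed case:

```python
import math

def psi0(sigmas, rhos):
    """Esseen's functional psi_0: the Lyapunov fraction of the third order."""
    return sum(rhos) / sum(s ** 2 for s in sigmas) ** 1.5

def psi1(sigmas, rhos):
    """Berry's functional psi_1: worst-case ratio rho_i / sigma_i^2,
    divided by the total standard deviation."""
    b = math.sqrt(sum(s ** 2 for s in sigmas))
    return max(r / s ** 2 for s, r in zip(sigmas, rhos)) / b

# Non-identically distributed summands: psi_0 <= psi_1 always holds.
# (Each rho_i >= sigma_i^3 by Lyapunov's inequality, so these are feasible.)
sigmas = [1.0, 2.0, 0.5]
rhos = [1.2, 9.0, 0.2]
assert psi0(sigmas, rhos) <= psi1(sigmas, rhos)

# Identically distributed summands: both reduce to rho_1 / (sigma_1^3 sqrt(n)).
n, s, r = 50, 1.3, 2.5
assert math.isclose(psi0([s] * n, [r] * n), r / (s ** 3 * math.sqrt(n)))
assert math.isclose(psi1([s] * n, [r] * n), r / (s ** 3 * math.sqrt(n)))
```

The inequality ψ0 ≤ ψ1 follows by bounding each ρi by σi² · max(ρj/σj²) in the numerator of ψ0.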

The lower bound established by Esseen (1956) obviously remains valid for C0:

C_0\geq\frac{\sqrt{10}+3}{6\sqrt{2\pi}} = 0.4097\ldots.

The upper bounds for C0 were subsequently lowered from Esseen's original estimate of 7.59 (1942) to (mentioning only the more recent results) 0.9051 due to Zolotarev (1967), 0.7975 due to van Beek (1972), 0.7915 due to Shiganov (1986), and 0.6379 and 0.5606 due to Tyurin (2009) and Tyurin (2010), respectively. As of 2011 the best estimate is 0.5600, obtained by Shevtsova (2010).