Bennett's Inequality

In probability theory, Bennett's inequality provides an upper bound on the probability that the sum of independent random variables deviates from its expected value by more than any specified amount. Bennett's inequality was proved by George Bennett of the University of New South Wales in 1962.


Statement

Let X_1, \dots, X_n be independent random variables with finite variance. Further assume X_i \leq a almost surely for all i, and define S_n = \sum_{i=1}^n \left[X_i - \operatorname{E}(X_i)\right] and \sigma^2 = \sum_{i=1}^n \operatorname{E}(X_i - \operatorname{E} X_i)^2. Then for any t \geq 0,

:\Pr\left( S_n > t \right) \leq \exp\left( - \frac{\sigma^2}{a^2} h\left(\frac{at}{\sigma^2} \right)\right),

where h(u) = (1+u)\log(1+u) - u and log denotes the natural logarithm.


Generalizations and comparisons to other bounds

For generalizations, see Freedman (1975) for a martingale version of Bennett's inequality and Fan, Grama and Liu (2012) for its improvement.

Hoeffding's inequality only assumes the summands are bounded almost surely, while Bennett's inequality offers some improvement when the variances of the summands are small compared to their almost sure bounds. However, Hoeffding's inequality entails sub-Gaussian tails, whereas in general Bennett's inequality has Poissonian tails. Bennett's inequality is most similar to the Bernstein inequalities, the first of which also gives concentration in terms of the variance and the almost sure bound on the individual terms. Bennett's inequality is stronger than this bound, but more complicated to compute. In both inequalities, unlike some other inequalities or limit theorems, there is no requirement that the component variables have identical or similar distributions.
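The small-variance regime can be illustrated numerically. The sketch below uses assumed parameters (n summands bounded in [0, 1], each with variance 0.01) and compares Bennett's bound with Hoeffding's bound \exp(-2t^2/n) and the first Bernstein bound \exp(-t^2/(2\sigma^2 + 2at/3)):

```python
import math

def h(u):
    """h(u) = (1 + u) log(1 + u) - u from Bennett's inequality."""
    return (1 + u) * math.log(1 + u) - u

# Assumed parameters: n summands in [0, 1] (so a = 1), each with small
# variance 0.01, so total variance sigma2 = n * 0.01; tail threshold t.
n, a, t = 1000, 1.0, 50.0
sigma2 = n * 0.01

bennett   = math.exp(-(sigma2 / a**2) * h(a * t / sigma2))
bernstein = math.exp(-t**2 / (2 * sigma2 + 2 * a * t / 3))
hoeffding = math.exp(-2 * t**2 / n)   # uses only boundedness in [0, 1]

# With small variances, Bennett is at least as strong as Bernstein,
# and both improve substantially on Hoeffding.
print(bennett <= bernstein <= hoeffding)  # True
```

The ordering bennett <= bernstein reflects the elementary inequality h(u) \geq u^2/(2 + 2u/3), by which Bennett's inequality implies the first Bernstein inequality.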


Example

Suppose that each X_i is an independent binary random variable with success probability p. Then Bennett's inequality says that:

:\Pr\left( \sum_{i=1}^n X_i > pn + t \right) \leq \exp\left( - np\, h\left(\frac{t}{np}\right)\right).

For t \geq 10 np, h\left(\frac{t}{np}\right) \geq \frac{t}{2np} \log \frac{t}{np}, so

:\Pr\left( \sum_{i=1}^n X_i > pn + t \right) \leq \left(\frac{np}{t}\right)^{t/2}

for t \geq 10 np. By contrast, Hoeffding's inequality gives a bound of \exp(-2 t^2/n) and the first Bernstein inequality gives a bound of \exp\left(-\frac{t^2}{2np + 2t/3}\right). For t \gg np, Hoeffding's inequality gives \exp(-\Theta(t^2/n)), Bernstein gives \exp(-\Theta(t)), and Bennett gives \exp(-\Theta(t \log \tfrac{t}{np})).
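This example can be checked against the exact binomial tail. The sketch below uses assumed parameters n = 100, p = 0.01 and t = 10np, and verifies that the exact tail probability sits below Bennett's bound, which in turn sits below the simplified bound (np/t)^{t/2}:

```python
import math

def h(u):
    """h(u) = (1 + u) log(1 + u) - u from Bennett's inequality."""
    return (1 + u) * math.log(1 + u) - u

# Assumed parameters: n Bernoulli(p) trials, deviation t = 10 * n * p.
n, p = 100, 0.01
t = 10.0                                     # here n * p = 1, so t = 10np

bennett = math.exp(-n * p * h(t / (n * p)))
simplified = (n * p / t) ** (t / 2)          # valid since t >= 10 * n * p

# Exact tail Pr(sum > pn + t) for Binomial(n, p).
k0 = math.floor(n * p + t) + 1               # smallest integer k with k > pn + t
exact = sum(math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k0, n + 1))

print(exact <= bennett <= simplified)  # True
```

Here the simplified bound is (1/10)^5 = 10^{-5}, while Bennett's bound is roughly 8 \times 10^{-8} and the exact tail is smaller still, illustrating how much slack each simplification introduces.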


See also

* Concentration inequality – a summary of tail bounds on random variables.


References

* Bennett, G. (1962). "Probability Inequalities for the Sum of Independent Random Variables". Journal of the American Statistical Association. 57 (297): 33–45.
* Freedman, D. A. (1975). "On Tail Probabilities for Martingales". The Annals of Probability. 3 (1): 100–118.
* Fan, X.; Grama, I.; Liu, Q. (2012). "Hoeffding's inequality for supermartingales". Stochastic Processes and their Applications. 122 (10): 3545–3559.