In probability theory, Hoeffding's inequality provides an upper bound on the probability that the sum of bounded independent random variables deviates from its expected value by more than a certain amount. Hoeffding's inequality was proven by Wassily Hoeffding in 1963.

Hoeffding's inequality is a special case of the Azuma–Hoeffding inequality and McDiarmid's inequality. It is similar to the Chernoff bound, but tends to be less sharp, in particular when the variance of the random variables is small. It is similar to, but incomparable with, one of Bernstein's inequalities.


Statement

Let X_1, \ldots, X_n be independent random variables such that a_i \leq X_i \leq b_i almost surely. Consider the sum of these random variables,

:S_n = X_1 + \cdots + X_n.

Then Hoeffding's theorem states that, for all t > 0,

:\begin{align}
\operatorname{P}\left(S_n - \mathrm{E}\left[S_n\right] \geq t\right) &\leq \exp\left(-\frac{2t^2}{\sum_{i=1}^n (b_i - a_i)^2}\right) \\
\operatorname{P}\left(\left|S_n - \mathrm{E}\left[S_n\right]\right| \geq t\right) &\leq 2\exp\left(-\frac{2t^2}{\sum_{i=1}^n (b_i - a_i)^2}\right)
\end{align}

Here \mathrm{E}\left[S_n\right] is the expected value of S_n.

Note that the inequalities also hold when the X_i have been obtained using sampling without replacement; in this case the random variables are not independent anymore. A proof of this statement can be found in Hoeffding's paper. For slightly better bounds in the case of sampling without replacement, see, for instance, the paper by Serfling (1974).
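The bound is easy to check numerically. The sketch below is illustrative only: it assumes each X_i is uniform on [a_i, b_i] (so the boundedness condition holds), and compares an empirical estimate of the two-sided tail with the Hoeffding bound.

<syntaxhighlight lang="python">
import numpy as np

# Compare the empirical two-sided tail P(|S_n - E[S_n]| >= t) with the
# Hoeffding bound 2 * exp(-2 t^2 / sum_i (b_i - a_i)^2), for X_i ~ Uniform[a_i, b_i].
rng = np.random.default_rng(0)

n = 50
a = np.zeros(n)
b = rng.uniform(1.0, 3.0, size=n)        # arbitrary upper bounds b_i (assumed for the demo)
t = 10.0

trials = 200_000
X = rng.uniform(a, b, size=(trials, n))  # each row: one sample of (X_1, ..., X_n)
S = X.sum(axis=1)
ES = ((a + b) / 2).sum()                 # E[S_n] for uniform X_i

empirical = np.mean(np.abs(S - ES) >= t)
bound = 2 * np.exp(-2 * t**2 / np.sum((b - a) ** 2))

print(f"empirical tail ~ {empirical:.2e}, Hoeffding bound = {bound:.2e}")
</syntaxhighlight>

As expected, the bound holds but is typically loose compared with the simulated tail probability.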


Generalization

Let Y_1, \dots, Y_n be independent observations such that \operatorname{E}(Y_i) = 0 and a_i \le Y_i \le b_i. Let \epsilon > 0. Then, for any t > 0,

:P\left( \sum_{i=1}^n Y_i \ge \epsilon \right) \le \exp\left( -t\epsilon + \sum_{i=1}^n t^2 (b_i - a_i)^2 / 8 \right).
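Since this holds for every t > 0, the right-hand side can be minimized over t. As a short optimization sketch: the exponent -t\epsilon + t^2 \sum_{i=1}^n (b_i - a_i)^2 / 8 is minimized at t = 4\epsilon / \sum_{i=1}^n (b_i - a_i)^2, which gives

:P\left( \sum_{i=1}^n Y_i \ge \epsilon \right) \le \exp\left( -\frac{2\epsilon^2}{\sum_{i=1}^n (b_i - a_i)^2} \right),

i.e. the one-sided bound of the Statement section applied to Y_i = X_i - \mathrm{E}\left[X_i\right].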


Special Case: Bernoulli RVs

Suppose a_i = 0 and b_i = 1 for all ''i''. This can occur when the ''X_i'' are independent Bernoulli random variables, though they need not be identically distributed. Then we get the inequality

:\begin{align}
\operatorname{P}\left(S_n - \mathrm{E}\left[S_n\right] \geq t\right) &\leq \exp(-2t^2/n) \\
\operatorname{P}\left(\left|S_n - \mathrm{E}\left[S_n\right]\right| \geq t\right) &\leq 2\exp(-2t^2/n)
\end{align}

or, equivalently,

:\begin{align}
\operatorname{P}\left(\left(S_n - \mathrm{E}\left[S_n\right]\right)/n \geq t\right) &\leq \exp(-2nt^2) \\
\operatorname{P}\left(\left|\left(S_n - \mathrm{E}\left[S_n\right]\right)/n\right| \geq t\right) &\leq 2\exp(-2nt^2)
\end{align}

for all t \geq 0. This is a version of the additive Chernoff bound which is more general, since it allows for random variables that take values between zero and one, but also weaker, since the Chernoff bound gives a better tail bound when the random variables have small variance.
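For example, for n = 100 tosses of a fair coin (so \mathrm{E}\left[S_n\right] = 50), taking t = 20 gives

:\operatorname{P}\left(\left|S_n - 50\right| \geq 20\right) \leq 2\exp(-2 \cdot 20^2 / 100) = 2e^{-8} \approx 6.7 \times 10^{-4}.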


General case of random variables bounded from above

Hoeffding's inequality can be extended to the case of random variables that are only bounded from above. Let X_1, \ldots, X_n be independent random variables such that \mathrm{E}X_i = 0 and X_i \leq b_i almost surely. Denote

:C_i^2 = \begin{cases} \mathrm{E}X_i^2, & \text{if } \mathrm{E}X_i^2 \geq b_i^2, \\ \dfrac{1}{4}\left(b_i + \dfrac{\mathrm{E}X_i^2}{b_i}\right)^2, & \text{otherwise}. \end{cases}

Hoeffding's inequality for random variables bounded from above states that for all t \geq 0,

:\mathrm{P}\left( \left|\sum_{i=1}^n X_i\right| \geq t \right) \leq 2\exp\left( -\frac{t^2}{2\sum_{i=1}^n C_i^2} \right).

In particular, if \mathrm{E}X_i^2 \geq b_i^2 for all i, then for all t \geq 0,

:\mathrm{P}\left( \left|\sum_{i=1}^n X_i\right| \geq t \right) \leq 2\exp\left( -\frac{t^2}{2\sum_{i=1}^n \mathrm{E}X_i^2} \right).
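The piecewise definition of C_i^2 is mechanical to evaluate. The short helper below is an illustrative sketch (the function name and inputs are our own, not from the original text); it computes the constants C_i^2 and the resulting two-sided bound from \mathrm{E}X_i^2 and b_i.

<syntaxhighlight lang="python">
import numpy as np

def hoeffding_upper_bounded(EX2, b, t):
    """Bound P(|sum_i X_i| >= t) <= 2 * exp(-t^2 / (2 * sum_i C_i^2)),
    for centred X_i with E[X_i^2] = EX2[i] and X_i <= b[i] almost surely."""
    EX2 = np.asarray(EX2, dtype=float)
    b = np.asarray(b, dtype=float)
    # Piecewise definition of C_i^2 from the display above.
    C2 = np.where(EX2 >= b**2, EX2, 0.25 * (b + EX2 / b) ** 2)
    return 2.0 * np.exp(-t**2 / (2.0 * C2.sum()))

# Example: 100 centred variables with E[X_i^2] = 1 and upper bound b_i = 2.
print(hoeffding_upper_bounded(EX2=np.ones(100), b=2 * np.ones(100), t=30.0))
</syntaxhighlight>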


General case of sub-Gaussian random variables

The proof of Hoeffding's inequality can be generalized to any sub-Gaussian distribution. Recall that a random variable X is called sub-Gaussian if

:\mathrm{P}(|X| \geq t) \leq 2e^{-ct^2},

for some c > 0. For any bounded variable X, \mathrm{P}(|X| \geq t) = 0 \leq 2e^{-ct^2} for t > T, for some sufficiently large T. Then 2e^{-cT^2} \leq 2e^{-ct^2} for all t \leq T, so taking c = \log(2)/T^2 yields

:\mathrm{P}(|X| \geq t) \leq 1 \leq 2e^{-cT^2} \leq 2e^{-ct^2},

for t \leq T. So every bounded variable is sub-Gaussian.

For a random variable X, the following norm is finite if and only if X is sub-Gaussian:

:\Vert X \Vert_{\psi_2} := \inf\left\{c \geq 0 : \mathrm{E}\left( e^{X^2/c^2} \right) \leq 2\right\}.

Then let X_1, \ldots, X_n be independent sub-Gaussian random variables. The general version of Hoeffding's inequality states that

:\mathrm{P}\left( \left|\sum_{i=1}^n (X_i - \mathrm{E}\left[X_i\right])\right| \geq t \right) \leq 2\exp\left( -\frac{ct^2}{\sum_{i=1}^n \Vert X_i \Vert^2_{\psi_2}} \right),

where ''c'' > 0 is an absolute constant.

Equivalently, sub-Gaussian distributions can be characterized by a variance proxy, defined as follows. If there exists some s^2 such that \mathrm{E}\left[ e^{(X - \mathrm{E}[X])t} \right] \leq e^{\frac{s^2 t^2}{2}} for all t, then s^2 is called a ''variance proxy'', and the smallest such s^2 is called the ''optimal variance proxy'', denoted by \Vert X \Vert_{\mathrm{vp}}^2. In this form, Hoeffding's inequality states

:\mathrm{P}\left( \left|\sum_{i=1}^n (X_i - \mathrm{E}\left[X_i\right])\right| \geq t \right) \leq 2\exp\left( -\frac{t^2}{2\sum_{i=1}^n \Vert X_i \Vert^2_{\mathrm{vp}}} \right).
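To connect this back to the bounded case: by Hoeffding's lemma, a random variable with a \leq X \leq b almost surely satisfies

:\mathrm{E}\left[ e^{(X - \mathrm{E}[X])t} \right] \leq e^{\frac{(b-a)^2 t^2}{8}} \quad \text{for all } t,

so s^2 = (b-a)^2/4 is a variance proxy for X, and hence \Vert X \Vert_{\mathrm{vp}}^2 \leq (b-a)^2/4. Substituting this into the last display recovers the bound of the Statement section.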


Proof

The proof of Hoeffding's inequality follows the standard argument for concentration inequalities such as Chernoff bounds, with Hoeffding's lemma supplying the bound on the moment generating function. This proof easily generalizes to the case of sub-Gaussian distributions with a variance proxy.
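A sketch of that argument, included here for completeness: by Hoeffding's lemma, \mathrm{E}\left[ e^{s(X_i - \mathrm{E}[X_i])} \right] \leq e^{s^2 (b_i - a_i)^2 / 8} for every s \in \mathbb{R}. For any s > 0, Markov's inequality applied to e^{s(S_n - \mathrm{E}[S_n])}, together with independence, gives

:\operatorname{P}\left(S_n - \mathrm{E}\left[S_n\right] \geq t\right) \leq e^{-st}\, \mathrm{E}\left[ e^{s(S_n - \mathrm{E}[S_n])} \right] = e^{-st} \prod_{i=1}^n \mathrm{E}\left[ e^{s(X_i - \mathrm{E}[X_i])} \right] \leq \exp\left( -st + \frac{s^2}{8} \sum_{i=1}^n (b_i - a_i)^2 \right).

Minimizing over s > 0 (the minimum is attained at s = 4t / \sum_{i=1}^n (b_i - a_i)^2) yields the one-sided bound of the Statement section; the two-sided bound follows by applying the same argument to -X_i and taking a union bound.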


Usage


Confidence intervals

Hoeffding's inequality can be used to derive confidence intervals. We consider a coin that shows heads with probability p and tails with probability 1 - p. We toss the coin n times, generating samples X_1, \ldots, X_n (which are i.i.d. Bernoulli random variables). The expected number of times the coin comes up heads is pn. Furthermore, the probability that the coin comes up heads at least k times can be exactly quantified by the following expression:

:\operatorname{P}(H(n) \geq k) = \sum_{i=k}^n \binom{n}{i} p^i (1-p)^{n-i},

where H(n) is the number of heads in n coin tosses.

When k = (p + \varepsilon)n for some \varepsilon > 0, Hoeffding's inequality bounds this probability by a term that is exponentially small in \varepsilon^2 n:

:\operatorname{P}(H(n) - pn > \varepsilon n) \leq \exp\left(-2\varepsilon^2 n\right).

Since this bound holds on both sides of the mean, Hoeffding's inequality implies that the number of heads that we see is concentrated around its mean, with exponentially small tail:

:\operatorname{P}\left(|H(n) - pn| > \varepsilon n\right) \leq 2\exp\left(-2\varepsilon^2 n\right).

Thinking of \overline{X} = \frac{1}{n}H(n) as the "observed" mean, this probability can be interpreted as the level of significance \alpha (probability of making an error) for a confidence interval around p of size 2\varepsilon:

:\alpha = \operatorname{P}\left(\overline{X} \notin [p - \varepsilon, p + \varepsilon]\right) \leq 2e^{-2\varepsilon^2 n}.

Solving the above for n gives the number of samples needed to keep the error probability at most \alpha:

:n \geq \frac{\log(2/\alpha)}{2\varepsilon^2}.

Therefore, we require at least \textstyle \frac{\log(2/\alpha)}{2\varepsilon^2} samples to acquire a \textstyle (1-\alpha)-confidence interval \textstyle p \pm \varepsilon.

Hence, the cost of acquiring the confidence interval grows only logarithmically in 1/\alpha (the confidence level) but quadratically in 1/\varepsilon (the precision). Note that there are more efficient methods of estimating a confidence interval.
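As an illustration of the sample-size formula (a minimal sketch; the function name is ours), the snippet below computes the n guaranteed by Hoeffding's inequality for a given \alpha and \varepsilon.

<syntaxhighlight lang="python">
import math

def hoeffding_sample_size(alpha: float, eps: float) -> int:
    """Smallest integer n with n >= log(2/alpha) / (2 * eps^2),
    sufficient for a (1 - alpha)-confidence interval of half-width eps."""
    return math.ceil(math.log(2.0 / alpha) / (2.0 * eps ** 2))

# Example: a 95% confidence interval of half-width 0.01 for the heads probability.
print(hoeffding_sample_size(alpha=0.05, eps=0.01))   # 18445
</syntaxhighlight>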


See also

* Concentration inequality – a summary of tail bounds on random variables
* Hoeffding's lemma
* Bernstein inequalities (probability theory)

