Kolmogorov's inequality

In probability theory, Kolmogorov's inequality is a so-called "maximal inequality" that gives a bound on the probability that the partial sums of a finite collection of independent random variables exceed some specified bound.


Statement of the inequality

Let ''X''<sub>1</sub>, ..., ''X''<sub>''n''</sub> : Ω → R be independent random variables defined on a common probability space (Ω, ''F'', Pr), with expected value E[''X''<sub>''k''</sub>] = 0 and variance Var[''X''<sub>''k''</sub>] < +∞ for ''k'' = 1, ..., ''n''. Then, for each λ > 0,

:\Pr\left( \max_{1 \leq k \leq n} |S_k| \geq \lambda \right) \leq \frac{1}{\lambda^2} \operatorname{Var}[S_n] \equiv \frac{1}{\lambda^2} \sum_{k=1}^n \operatorname{Var}[X_k] = \frac{1}{\lambda^2} \sum_{k=1}^n \text{E}[X_k^2],

where ''S''<sub>''k''</sub> = ''X''<sub>1</sub> + ... + ''X''<sub>''k''</sub>. The convenience of this result is that we can bound the worst-case deviation of a random walk at any intermediate point in time using its variance at the end of the time interval.
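To make the bound concrete, here is a minimal Monte Carlo sketch (not part of the original article; the function name kolmogorov_check and all parameter values are illustrative) for the simple ±1 random walk, where E[X_k] = 0 and Var[X_k] = 1, so that Var[S_n] = n:

import random

def kolmogorov_check(n=100, lam=25.0, trials=20000, seed=0):
    """Estimate Pr(max_k |S_k| >= lam) for a +/-1 random walk and
    compare it with the Kolmogorov bound Var[S_n] / lam^2 = n / lam^2."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        s = 0
        running_max = 0
        for _ in range(n):
            s += rng.choice((-1, 1))        # one independent step X_k
            running_max = max(running_max, abs(s))
        if running_max >= lam:
            hits += 1
    empirical = hits / trials
    bound = n / lam ** 2                    # Var[S_n] / lambda^2
    return empirical, bound

p, b = kolmogorov_check()
print(f"Pr(max |S_k| >= 25) ~= {p:.4f}  <=  bound {b:.4f}")

With these illustrative parameters the bound is n/λ² = 100/625 = 0.16, and the empirical frequency should come out well below it; the inequality is generally not tight.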


Proof

The following argument employs discrete martingales. As argued in the discussion of Doob's martingale inequality, the sequence S_1, S_2, \dots, S_n is a martingale. Define (Z_i)_{i=0}^n as follows. Let Z_0 = 0, and

:Z_{i+1} = \left\{ \begin{array}{ll} S_{i+1} & \text{ if } \displaystyle \max_{1 \leq j \leq i} |S_j| < \lambda \\ Z_i & \text{ otherwise} \end{array} \right.

for all i. Then (Z_i)_{i=0}^n is also a martingale.

For any martingale M_i with M_0 = 0, we have

:\begin{align} \sum_{i=1}^n \text{E}\left[ (M_i - M_{i-1})^2 \right] &= \sum_{i=1}^n \text{E}\left[ M_i^2 - 2 M_i M_{i-1} + M_{i-1}^2 \right] \\ &= \sum_{i=1}^n \text{E}\left[ M_i^2 - 2 (M_{i-1} + M_i - M_{i-1}) M_{i-1} + M_{i-1}^2 \right] \\ &= \sum_{i=1}^n \left( \text{E}\left[ M_i^2 - M_{i-1}^2 \right] - 2\,\text{E}\left[ M_{i-1} (M_i - M_{i-1}) \right] \right) \\ &= \text{E}[M_n^2] - \text{E}[M_0^2] = \text{E}[M_n^2], \end{align}

where the last step uses the fact that \text{E}\left[ M_{i-1} (M_i - M_{i-1}) \right] = 0 by the martingale property, after which the remaining sum telescopes.

Applying this result to the martingale (S_i)_{i=0}^n, we have

:\begin{align} \text{Pr}\left( \max_{1 \leq i \leq n} |S_i| \geq \lambda \right) &= \text{Pr}\left( |Z_n| \geq \lambda \right) \\ &\leq \frac{1}{\lambda^2} \text{E}[Z_n^2] = \frac{1}{\lambda^2} \sum_{i=1}^n \text{E}\left[ (Z_i - Z_{i-1})^2 \right] \\ &\leq \frac{1}{\lambda^2} \sum_{i=1}^n \text{E}\left[ (S_i - S_{i-1})^2 \right] = \frac{1}{\lambda^2} \text{E}[S_n^2] = \frac{1}{\lambda^2} \text{Var}[S_n], \end{align}

where the first inequality follows by Chebyshev's inequality, and the second holds because each increment Z_i - Z_{i-1} is either S_i - S_{i-1} or 0. This inequality was generalized by Hájek and Rényi in 1955.
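The stopping construction above can be illustrated with a short, hedged simulation (the helper stopped_walk is hypothetical and not from the article): Z_i follows S_i until the running maximum of |S_j| first reaches λ and is frozen from then on, so the events {max_i |S_i| ≥ λ} and {|Z_n| ≥ λ} coincide path by path.

import random

def stopped_walk(n=100, lam=10.0, seed=0):
    """Run one +/-1 random walk together with the stopped sequence Z
    from the proof. Returns (max_i |S_i|, |Z_n|)."""
    rng = random.Random(seed)
    s = 0
    z = 0
    frozen = False
    max_abs_s = 0
    for _ in range(n):
        s += rng.choice((-1, 1))
        max_abs_s = max(max_abs_s, abs(s))
        if not frozen:
            z = s                   # Z_{i+1} = S_{i+1} while max_j |S_j| < lam
            if abs(s) >= lam:
                frozen = True       # afterwards Z_{i+1} = Z_i
    return max_abs_s, abs(z)

# The two events agree on every sample path:
for seed in range(1000):
    m, zn = stopped_walk(seed=seed)
    assert (m >= 10.0) == (zn >= 10.0)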


See also

* Chebyshev's inequality
* Etemadi's inequality
* Landau–Kolmogorov inequality
* Markov's inequality
* Bernstein inequalities (probability theory)



