In probability theory, Kolmogorov's two-series theorem is a result about the convergence of random series. It follows from Kolmogorov's inequality and is used in one proof of the strong law of large numbers.
Statement of the theorem
Let $\left(X_n\right)_{n \in \mathbb{N}}$ be independent random variables with expected values $\mathrm{E}[X_n] = \mu_n$ and variances $\mathrm{Var}(X_n) = \sigma_n^2$, such that $\sum_{n=1}^{\infty} \mu_n$ converges in $\mathbb{R}$ and $\sum_{n=1}^{\infty} \sigma_n^2$ converges in $\mathbb{R}$. Then $\sum_{n=1}^{\infty} X_n$ converges in $\mathbb{R}$ almost surely.
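As a concrete illustration (an example added here, not part of the original article), consider the random-sign harmonic series $X_n = \varepsilon_n / n$, where the $\varepsilon_n$ are i.i.d. fair random signs. Then $\mu_n = 0$ and $\sigma_n^2 = 1/n^2$, so both series in the theorem converge, and the theorem predicts that the partial sums converge almost surely. A minimal simulation sketch:

```python
# Illustration (hypothetical example, not from the article): the random-sign
# harmonic series X_n = eps_n / n with eps_n = +/-1 fair coin flips.
# E[X_n] = 0 and Var(X_n) = 1/n^2, so the two-series theorem applies and the
# partial sums should settle down as n grows.
import random

def partial_sums(seed, terms=100_000):
    """Return the sequence of partial sums S_1, ..., S_terms for one sample path."""
    rng = random.Random(seed)
    s = 0.0
    sums = []
    for n in range(1, terms + 1):
        s += rng.choice((-1.0, 1.0)) / n
        sums.append(s)
    return sums

for seed in (1, 2, 3):
    sums = partial_sums(seed)
    # Once n is large, the partial sums barely move: the spread of the
    # second half of the trajectory is tiny compared with early fluctuations.
    tail_spread = max(sums[50_000:]) - min(sums[50_000:])
    print(f"seed {seed}: S_100000 = {sums[-1]:+.4f}, tail spread = {tail_spread:.4f}")
```

Each sample path converges to a different limit; what the theorem guarantees is that (almost) every path has *some* limit.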
Proof
Assume WLOG that $\mu_n = 0$. Set $S_N = \sum_{n=1}^{N} X_n$, and we will see that $\limsup_{N\to\infty} S_N - \liminf_{N\to\infty} S_N = 0$ with probability 1.

For every $m \in \mathbb{N}$,
$$\limsup_{N\to\infty} S_N - \liminf_{N\to\infty} S_N \le 2 \max_{k \in \mathbb{N}} \left| \sum_{i=1}^{k} X_{m+i} \right|.$$

Thus, for every $m \in \mathbb{N}$ and $\epsilon > 0$,
$$\begin{aligned}
\mathbb{P}\left( \limsup_{N\to\infty} S_N - \liminf_{N\to\infty} S_N \ge \epsilon \right)
&\le \mathbb{P}\left( 2 \max_{k \in \mathbb{N}} \left| \sum_{i=1}^{k} X_{m+i} \right| \ge \epsilon \right) \\
&= \lim_{M\to\infty} \mathbb{P}\left( 2 \max_{1 \le k \le M} \left| \sum_{i=1}^{k} X_{m+i} \right| \ge \epsilon \right) \\
&\le \lim_{M\to\infty} \frac{4}{\epsilon^2} \sum_{i=m+1}^{m+M} \sigma_i^2 = \frac{4}{\epsilon^2} \sum_{i=m+1}^{\infty} \sigma_i^2,
\end{aligned}$$
where the second inequality is due to Kolmogorov's inequality.

By the assumption that $\sum_{n=1}^{\infty} \sigma_n^2$ converges, the last term tends to 0 as $m \to \infty$, for every $\epsilon > 0$. Hence $\limsup_{N\to\infty} S_N - \liminf_{N\to\infty} S_N = 0$ with probability 1, so the partial sums $S_N$ converge almost surely.
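The closing step of the proof can be made concrete numerically. A small sketch, reusing the hypothetical variances $\sigma_i^2 = 1/i^2$ from the random-sign harmonic example (an assumption of this illustration, not of the article): for fixed $\epsilon$, the bound $\frac{4}{\epsilon^2} \sum_{i>m} \sigma_i^2$ shrinks toward 0 as $m$ grows.

```python
# Numerical sketch of the proof's tail bound, assuming example variances
# sigma_i^2 = 1/i^2 (a hypothetical choice, for illustration only).

def tail_bound(m, eps, cutoff=10**6):
    """Approximate the proof's bound (4/eps^2) * sum_{i > m} sigma_i^2,
    truncating the infinite series at `cutoff` terms."""
    tail = sum(1.0 / i**2 for i in range(m + 1, cutoff + 1))
    return 4.0 / eps**2 * tail

# For fixed eps, the bound on P(limsup S_N - liminf S_N >= eps) tends to 0
# as m grows -- exactly how the proof concludes.
for m in (10, 100, 1000, 10000):
    print(f"m = {m:5d}: bound = {tail_bound(m, eps=0.1):.6f}")
```

Since $\sum_{i>m} 1/i^2 \approx 1/m$, the printed bound decays roughly like $40/m$ for $\epsilon = 0.1$; the early bounds exceed 1 and are vacuous, but the tail bounds are not.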
References
{{reflist}}
* Durrett, Rick. ''Probability: Theory and Examples.'' Duxbury Advanced Series, Third Edition, Thomson Brooks/Cole, 2005, Section 1.8, pp. 60–69.
* M. Loève, ''Probability Theory'', Princeton Univ. Press (1963), Sect. 16.3.
* W. Feller, ''An Introduction to Probability Theory and Its Applications'', Vol. 2, Wiley (1971), Sect. IX.9.