Kolmogorov's Two-series Theorem

In probability theory, Kolmogorov's two-series theorem is a result about the convergence of random series. It follows from Kolmogorov's inequality and is used in one proof of the strong law of large numbers.


Statement of the theorem

Let \left( X_n \right)_{n=1}^{\infty} be independent random variables with expected values \mathbf{E}\left[ X_n \right] = \mu_n and variances \mathbf{Var}\left( X_n \right) = \sigma_n^2, such that \sum_{n=1}^{\infty} \mu_n converges in ℝ and \sum_{n=1}^{\infty} \sigma_n^2 converges in ℝ. Then \sum_{n=1}^{\infty} X_n converges in ℝ almost surely.
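
As an illustrative sketch (not part of the original article), take X_n = \epsilon_n / n with independent Rademacher signs \epsilon_n = \pm 1: then \mu_n = 0 and \sigma_n^2 = 1/n^2, both series converge, and the theorem gives almost sure convergence of the random harmonic series. The following Python snippet, assuming NumPy is available, simulates a few sample paths of the partial sums; the seed, the number of paths, and the truncation level are arbitrary choices.

    import numpy as np

    # Illustrative sketch (assumed example, not from the article): the random
    # harmonic series X_n = eps_n / n with independent signs eps_n = +/-1 has
    # mu_n = 0 and sigma_n^2 = 1/n^2, so both series in the theorem converge
    # and sum_n X_n converges almost surely.

    rng = np.random.default_rng(0)      # arbitrary seed
    N = 100_000                         # truncation level (arbitrary)
    n = np.arange(1, N + 1)

    for path in range(5):
        eps = rng.choice([-1.0, 1.0], size=N)        # Rademacher signs
        partial_sums = np.cumsum(eps / n)            # S_k = sum_{n<=k} eps_n / n
        tail_fluct = np.ptp(partial_sums[-N // 10:]) # spread over the last 10% of terms
        print(f"path {path}: S_N = {partial_sums[-1]:+.6f}, "
              f"late fluctuation = {tail_fluct:.2e}")

Each simulated path settles near its own (path-dependent) limit, with the late fluctuations controlled by the remaining variance \sum_{n > 0.9 N} 1/n^2, as the theorem predicts.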


Proof

Assume without loss of generality that \mu_n = 0. Set S_N = \sum_{n=1}^{N} X_n; we will see that \limsup_{N} S_N - \liminf_{N} S_N = 0 with probability 1.

For every m \in \mathbb{N},

\limsup_{N \to \infty} S_N - \liminf_{N \to \infty} S_N = \limsup_{N \to \infty} \left( S_N - S_m \right) - \liminf_{N \to \infty} \left( S_N - S_m \right) \leq 2 \max_{k \in \mathbb{N}} \left| \sum_{i=1}^{k} X_{m+i} \right|.

Thus, for every m \in \mathbb{N} and \epsilon > 0,

\begin{align}
\mathbb{P} \left( \limsup_{N \to \infty} \left( S_N - S_m \right) - \liminf_{N \to \infty} \left( S_N - S_m \right) \geq \epsilon \right)
&\leq \mathbb{P} \left( 2 \max_{k \in \mathbb{N}} \left| \sum_{i=1}^{k} X_{m+i} \right| \geq \epsilon \right) \\
&= \mathbb{P} \left( \max_{k \in \mathbb{N}} \left| \sum_{i=1}^{k} X_{m+i} \right| \geq \frac{\epsilon}{2} \right) \\
&\leq \limsup_{N \to \infty} 4\epsilon^{-2} \sum_{i=m+1}^{m+N} \sigma_i^2 \\
&= 4\epsilon^{-2} \lim_{N \to \infty} \sum_{i=m+1}^{m+N} \sigma_i^2,
\end{align}

where the second inequality is due to Kolmogorov's inequality.

By the assumption that \sum_{n=1}^{\infty} \sigma_n^2 converges, the last term tends to 0 as m \to \infty, for every \epsilon > 0. Since the probability on the left-hand side does not depend on m, it is therefore zero for every \epsilon > 0, so \limsup_{N} S_N - \liminf_{N} S_N = 0 with probability 1, and the partial sums S_N converge almost surely.
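
As a hedged sanity check of the key estimate in the proof (again an assumed example, not from the article), one can compare a Monte Carlo estimate of \mathbb{P}\left( \max_{k} \left| \sum_{i=1}^{k} X_{m+i} \right| \geq \epsilon/2 \right) with the Kolmogorov-inequality bound 4\epsilon^{-2} \sum_{i>m} \sigma_i^2 for the Rademacher example X_i = \epsilon_i / i. The values of m, \epsilon, the truncation K, and the number of trials below are arbitrary choices.

    import numpy as np

    # Monte Carlo sketch (assumed example): for X_i = eps_i / i with Rademacher
    # signs, estimate P( max_{k <= K} | X_{m+1} + ... + X_{m+k} | >= eps/2 )
    # and compare it with the bound 4 * eps^{-2} * sum_{i=m+1}^{m+K} sigma_i^2
    # from Kolmogorov's inequality, where sigma_i^2 = 1/i^2.

    rng = np.random.default_rng(1)
    m, K, eps, trials = 1_000, 20_000, 0.2, 1_000   # arbitrary parameters

    idx = np.arange(m + 1, m + K + 1)
    exceed = 0
    for _ in range(trials):
        signs = rng.choice([-1.0, 1.0], size=K)
        tail_sums = np.cumsum(signs / idx)          # partial sums of the tail series
        if np.max(np.abs(tail_sums)) >= eps / 2:
            exceed += 1

    bound = 4 / eps**2 * np.sum(1.0 / idx**2)       # truncated tail of sum sigma_i^2
    print(f"empirical probability ~= {exceed / trials:.4f}")
    print(f"Kolmogorov bound      ~= {bound:.4f}")

The empirical frequency should come out well below the bound, consistent with Kolmogorov's inequality being a valid, though typically loose, upper estimate.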


References

* Durrett, Rick. ''Probability: Theory and Examples''. Duxbury Advanced Series, Third Edition, Thomson Brooks/Cole, 2005, Section 1.8, pp. 60–69.
* M. Loève, ''Probability Theory'', Princeton Univ. Press (1963), Sect. 16.3.
* W. Feller, ''An Introduction to Probability Theory and Its Applications'', Vol. 2, Wiley (1971), Sect. IX.9.