Multidimensional Chebyshev's Inequality

In probability theory, the multidimensional Chebyshev's inequality is a generalization of Chebyshev's inequality, which puts a bound on the probability of the event that a random variable differs from its expected value by more than a specified amount.

Let X be an N-dimensional random vector with expected value \mu = \operatorname{E}[X] and covariance matrix

: V = \operatorname{E}\left[ (X - \mu)(X - \mu)^T \right].

If V is a positive-definite matrix, then for any real number t > 0:

: \Pr\left( \sqrt{(X - \mu)^T V^{-1} (X - \mu)} > t \right) \le \frac{N}{t^2}.
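The bound can be illustrated numerically. The following is a minimal Monte Carlo sketch, assuming NumPy; the dimension N, threshold t, mean, and covariance matrix below are illustrative choices, and the multivariate normal is just one convenient distribution with the required mean and covariance (the inequality is distribution-free given those two moments).

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

N = 3            # dimension of the random vector (illustrative choice)
t = 2.0          # threshold in the inequality (illustrative choice)
n_samples = 100_000

# Hypothetical mean and a positive-definite covariance matrix for the demo.
mu = np.array([1.0, -2.0, 0.5])
A = rng.normal(size=(N, N))
V = A @ A.T + N * np.eye(N)   # A A^T + N I is symmetric positive-definite

# Sample a multivariate normal with this mean and covariance.
X = rng.multivariate_normal(mu, V, size=n_samples)

# Mahalanobis distance sqrt((x - mu)^T V^{-1} (x - mu)) for each sample.
diff = X - mu
V_inv = np.linalg.inv(V)
mahalanobis = np.sqrt(np.einsum("ij,jk,ik->i", diff, V_inv, diff))

empirical = np.mean(mahalanobis > t)
print(f"empirical Pr = {empirical:.4f}  <=  N/t^2 = {N / t**2:.4f}")
</syntaxhighlight>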


Proof

Since V is positive-definite, so is V^{-1}. Define the random variable

: y = (X - \mu)^T V^{-1} (X - \mu).

Since y is non-negative, Markov's inequality holds:

: \Pr\left( \sqrt{(X - \mu)^T V^{-1} (X - \mu)} > t \right) = \Pr\left( \sqrt{y} > t \right) = \Pr\left( y > t^2 \right) \le \frac{\operatorname{E}[y]}{t^2}.

Finally, since the scalar y equals its own trace and the trace is invariant under cyclic permutations,

:\begin{align}
\operatorname{E}[y] &= \operatorname{E}\left[ (X - \mu)^T V^{-1} (X - \mu) \right] \\
&= \operatorname{E}\left[ \operatorname{trace}\left( V^{-1} (X - \mu)(X - \mu)^T \right) \right] \\
&= \operatorname{trace}\left( V^{-1} \operatorname{E}\left[ (X - \mu)(X - \mu)^T \right] \right) \\
&= \operatorname{trace}\left( V^{-1} V \right) = N.
\end{align}

Combining the two displays gives the stated bound N/t^2.
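The trace identity \operatorname{E}[y] = N can also be checked empirically. A minimal sketch, assuming NumPy, with an arbitrary positive-definite covariance matrix as an illustrative choice:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)

N = 4
A = rng.normal(size=(N, N))
V = A @ A.T + np.eye(N)     # positive-definite covariance (illustrative)
mu = np.zeros(N)

X = rng.multivariate_normal(mu, V, size=200_000)
diff = X - mu
y = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(V), diff)

# E[y] = trace(V^{-1} V) = trace(I_N) = N, independent of the choice of V.
print(f"sample mean of y = {y.mean():.3f}   (theory: N = {N})")
</syntaxhighlight>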


Infinite dimensions

There is a straightforward extension of the vector version of Chebyshev's inequality to infinite-dimensional settings. Let X be a random variable which takes values in a Fréchet space \mathcal{X} (equipped with seminorms \| \cdot \|_\alpha). This includes most common settings of vector-valued random variables, e.g., when \mathcal{X} is a Banach space (equipped with a single norm), a Hilbert space, or the finite-dimensional setting as described above.

Suppose that X is of "strong order two", meaning that

: \operatorname{E}\left( \| X \|_\alpha^2 \right) < \infty

for every seminorm \| \cdot \|_\alpha. This is a generalization of the requirement that X have finite variance, and is necessary for this strong form of Chebyshev's inequality in infinite dimensions. The terminology "strong order two" is due to Vakhania.

Let \mu \in \mathcal{X} be the Pettis integral of X (i.e., the vector generalization of the mean), and let

: \sigma_\alpha := \sqrt{ \operatorname{E} \| X - \mu \|_\alpha^2 }

be the standard deviation with respect to the seminorm \| \cdot \|_\alpha. In this setting we can state the following:

: General version of Chebyshev's inequality. \forall k > 0: \quad \Pr\left( \| X - \mu \|_\alpha \ge k \sigma_\alpha \right) \le \frac{1}{k^2}.

Proof. The proof is straightforward, and essentially the same as the finitary version. If \sigma_\alpha = 0, then X is constant (and equal to \mu) almost surely, so the inequality is trivial.

If \sigma_\alpha > 0, then on the event \{ \| X - \mu \|_\alpha \ge k \sigma_\alpha \} we have \| X - \mu \|_\alpha > 0, so we may safely divide by \| X - \mu \|_\alpha^2 there. The crucial trick in Chebyshev's inequality is to recognize that 1 = \tfrac{\| X - \mu \|_\alpha^2}{\| X - \mu \|_\alpha^2} on that event. The following calculations complete the proof:

:\begin{align}
\Pr\left( \| X - \mu \|_\alpha \ge k \sigma_\alpha \right) &= \int_\Omega \mathbf{1}_{\| X - \mu \|_\alpha \ge k \sigma_\alpha} \, \mathrm{d}\Pr \\
&= \int_\Omega \left( \frac{\| X - \mu \|_\alpha^2}{\| X - \mu \|_\alpha^2} \right) \cdot \mathbf{1}_{\| X - \mu \|_\alpha \ge k \sigma_\alpha} \, \mathrm{d}\Pr \\
&\le \int_\Omega \left( \frac{\| X - \mu \|_\alpha^2}{(k \sigma_\alpha)^2} \right) \cdot \mathbf{1}_{\| X - \mu \|_\alpha \ge k \sigma_\alpha} \, \mathrm{d}\Pr \\
&\le \frac{1}{k^2 \sigma_\alpha^2} \int_\Omega \| X - \mu \|_\alpha^2 \, \mathrm{d}\Pr && \left( \mathbf{1}_{\| X - \mu \|_\alpha \ge k \sigma_\alpha} \le 1 \right) \\
&= \frac{1}{k^2 \sigma_\alpha^2} \left( \operatorname{E} \| X - \mu \|_\alpha^2 \right) \\
&= \frac{1}{k^2 \sigma_\alpha^2} \, \sigma_\alpha^2 \\
&= \frac{1}{k^2}.
\end{align}
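As an illustration of the Hilbert-space case, one can approximate an \ell^2-valued random variable by truncating a random series and check the bound empirically. A minimal sketch, assuming NumPy; the coefficient sequence, truncation level, and k are illustrative choices, and the truncation only approximates the infinite-dimensional setting:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)

# Model a Hilbert-space-valued random variable by a truncated random series:
# X = sum_j c_j Z_j e_j with independent standard normal Z_j and orthonormal
# e_j, so ||X||^2 = sum_j (c_j Z_j)^2. All choices here are illustrative.
n_modes = 50
n_samples = 100_000
k = 2.0

c = 1.0 / np.arange(1, n_modes + 1)      # square-summable coefficients
coeffs = c * rng.normal(size=(n_samples, n_modes))   # coefficients of X - mu

norms = np.linalg.norm(coeffs, axis=1)   # ||X - mu|| in the Hilbert norm
sigma = np.sqrt(np.mean(norms**2))       # estimate of sigma_alpha

empirical = np.mean(norms >= k * sigma)
print(f"empirical Pr = {empirical:.4f}  <=  1/k^2 = {1 / k**2:.4f}")
</syntaxhighlight>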


References

Vakhania, Nikolai Nikolaevich. Probability Distributions on Linear Spaces. New York: North Holland, 1981.