Gauss' Inequality
In probability theory, Gauss's inequality (or the Gauss inequality) gives an upper bound on the probability that a unimodal random variable lies more than any given distance from its mode. Let ''X'' be a unimodal random variable with mode ''m'', and let ''τ''2 be the expected value of (''X'' − ''m'')2. (''τ''2 can also be expressed as (''μ'' − ''m'')2 + ''σ''2, where ''μ'' and ''σ'' are the mean and standard deviation of ''X''.) Then for any positive value of ''k'', : \Pr(|X - m| > k) \leq \begin{cases} \left( \frac{2\tau}{3k} \right)^2 & \text{if } k \geq \frac{2\tau}{\sqrt{3}} \\ 1 - \frac{k}{\tau\sqrt{3}} & \text{if } 0 \leq k \leq \frac{2\tau}{\sqrt{3}}. \end{cases} The theorem was first proved by Carl Friedrich Gauss in 1823. Extensions to higher-order moments Winkler in 1866 extended Gauss' inequality to ''r''th moments (Winkler A. (1866) Math.-Natur. Kl. Akad. Wiss. Wien, Zweite Abt. 53, 6–41) where ''r'' > 0 and the distribution is unimodal with a mode ...
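As an illustrative sketch (not part of the original article), the two-case bound can be evaluated directly; the function name gauss_bound and the example values of ''k'' and ''τ'' below are our own choices.

import math

def gauss_bound(k, tau):
    """Gauss's inequality: upper bound on Pr(|X - m| > k) for a unimodal X
    with mode m and tau^2 = E[(X - m)^2]."""
    if k <= 0 or tau <= 0:
        raise ValueError("k and tau must be positive")
    if k >= 2 * tau / math.sqrt(3):
        return (2 * tau / (3 * k)) ** 2
    return 1 - k / (tau * math.sqrt(3))

# For tau = 1 and k = 2 the bound is (2/6)^2 = 1/9, roughly 0.111, compared with
# Chebyshev's bound of sigma^2/k^2 = 0.25 when the mean coincides with the mode.
print(gauss_bound(2, 1))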



Gaussian Correlation Inequality
The Gaussian correlation inequality (GCI), formerly known as the Gaussian correlation conjecture (GCC), is a mathematical theorem in the fields of mathematical statistics and convex geometry. The statement The Gaussian correlation inequality states: Let \mu be an ''n''-dimensional Gaussian probability measure on \mathbb{R}^n , i.e. \mu is a multivariate normal distribution, centered at the origin. Then for all convex sets E,F \subset \mathbb{R}^n that are symmetric about the origin, : \mu(E \cap F) \geq \mu(E) \cdot \mu(F). As a simple example for ''n''=2, one can think of darts being thrown at a board, with their landing spots in the plane distributed according to a 2-variable normal distribution centered at the origin. (This is a reasonable assumption for any given darts player, with different players being described by different normal distributions.) If we now consider a circle and a rectangle in the plane, both centered at the origin, then the proportion of the darts landing i ...
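A minimal Monte Carlo sketch (not from the article) can illustrate the inequality for the darts example; the circle radius and rectangle half-widths below are arbitrary choices, and NumPy is assumed to be available.

import numpy as np

rng = np.random.default_rng(0)
pts = rng.standard_normal((100_000, 2))  # darts ~ standard 2D Gaussian, centered at the origin

# Two convex sets symmetric about the origin (illustrative sizes):
in_circle = (pts ** 2).sum(axis=1) <= 1.5 ** 2                       # disk of radius 1.5
in_rect = (np.abs(pts[:, 0]) <= 2.0) & (np.abs(pts[:, 1]) <= 0.5)    # axis-aligned rectangle

p_both = np.mean(in_circle & in_rect)
p_circle, p_rect = np.mean(in_circle), np.mean(in_rect)
print(p_both, ">=", p_circle * p_rect)  # GCI predicts the left side is at least the right side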


Gaussian Isoperimetric Inequality
In mathematics, the Gaussian isoperimetric inequality, proved by Boris Tsirelson and Vladimir Sudakov, and later independently by Christer Borell, states that among all sets of given Gaussian measure in the ''n''-dimensional Euclidean space, half-spaces have the minimal Gaussian boundary measure. Mathematical formulation Let \scriptstyle A be a measurable subset of \scriptstyle\mathbf{R}^n endowed with the standard Gaussian measure \gamma^n with the density e^{-\|x\|^2/2}/(2\pi)^{n/2}. Denote by : A_\varepsilon = \left\{ x \in \mathbf{R}^n : \operatorname{dist}(x, A) \leq \varepsilon \right\} the ε-extension of ''A''. Then the ''Gaussian isoperimetric inequality'' states that : \liminf_{\varepsilon \to 0^+} \varepsilon^{-1} \left\{ \gamma^n(A_\varepsilon) - \gamma^n(A) \right\} \geq \varphi(\Phi^{-1}(\gamma^n(A))), where : \varphi(t) = \frac{e^{-t^2/2}}{\sqrt{2\pi}}, \quad \Phi(t) = \int_{-\infty}^t \varphi(s)\, ds. Proofs and generalizations The original proofs by Sudakov, Tsirelson and Borell were based on Paul Lévy's spherical isoperimetric inequality. Sergey Bobkov proved a functional generalization of the Gaussian isoperimetric inequality, from a cert ...
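As a rough numerical sketch (not from the article, and assuming SciPy for the normal density and quantile), the right-hand side φ(Φ⁻¹(p)) can be computed and checked against a half-space, for which the bound is attained.

import math
from scipy.stats import norm

def isoperimetric_profile(p):
    """phi(Phi^{-1}(p)): the minimal Gaussian boundary measure among sets of Gaussian measure p."""
    return norm.pdf(norm.ppf(p))

# For the half-space A = {x : x_1 <= a}, gamma^n(A) = Phi(a) and the boundary
# measure equals phi(a), so the inequality holds with equality:
a = 0.7
print(isoperimetric_profile(norm.cdf(a)))
print(math.exp(-a ** 2 / 2) / math.sqrt(2 * math.pi))  # phi(a), same value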



Probability Theory
Probability theory is the branch of mathematics concerned with probability. Although there are several different probability interpretations, probability theory treats the concept in a rigorous mathematical manner by expressing it through a set of axioms. Typically these axioms formalise probability in terms of a probability space, which assigns a measure taking values between 0 and 1, termed the probability measure, to a set of outcomes called the sample space. Any specified subset of the sample space is called an event. Central subjects in probability theory include discrete and continuous random variables, probability distributions, and stochastic processes (which provide mathematical abstractions of non-deterministic or uncertain processes or measured quantities that may either be single occurrences or evolve over time in a random fashion). Although it is not possible to perfectly predict random events, much can be said about their behavior. Two major results in probability ...
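As a small illustration (ours, not from the article), a finite probability space can be written down explicitly: a sample space, a probability measure summing to 1, and events as subsets of the sample space.

# A discrete probability space for a fair six-sided die:
sample_space = {1, 2, 3, 4, 5, 6}
prob = {outcome: 1 / 6 for outcome in sample_space}  # the probability measure

def P(event):
    """Probability of an event, i.e. a subset of the sample space."""
    return sum(prob[o] for o in event)

print(P({2, 4, 6}))      # the event "the roll is even" has probability 0.5
print(P(sample_space))   # the whole sample space has measure 1, as the axioms require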



Unimodal
In mathematics, unimodality means possessing a unique mode. More generally, unimodality means there is only a single highest value, somehow defined, of some mathematical object. Unimodal probability distribution In statistics, a unimodal probability distribution or unimodal distribution is a probability distribution which has a single peak. The term "mode" in this context refers to any peak of the distribution, not just to the strict definition of mode which is usual in statistics. If there is a single mode, the distribution function is called "unimodal". If it has more modes it is "bimodal" (2), "trimodal" (3), etc., or in general, "multimodal". Figure 1 illustrates normal distributions, which are unimodal. Other examples of unimodal distributions include Cauchy distribution, Student's ''t''-distribution, chi-squared distribution and exponential distribution. Among discrete distributions, the binomial distribution and Poisson distribution can be seen as unimodal, though ...
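A quick sketch (not from the article) of checking unimodality of a discrete distribution by counting interior peaks of its probability mass function; the binomial parameters below are arbitrary.

from math import comb

def count_peaks(values):
    """Count interior local maxima of a sequence of pmf values."""
    peaks = 0
    for i in range(1, len(values) - 1):
        if values[i - 1] < values[i] >= values[i + 1]:
            peaks += 1
    return peaks

# The binomial(n=20, p=0.3) pmf rises to a single peak (near k = 6) and then falls:
n, p = 20, 0.3
pmf = [comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(n + 1)]
print(count_peaks(pmf))  # -> 1, so the distribution is unimodal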



Random Variable
A random variable (also called random quantity, aleatory variable, or stochastic variable) is a mathematical formalization of a quantity or object which depends on random events. It is a mapping or a function from possible outcomes (e.g., the possible upper sides of a flipped coin such as heads H and tails T) in a sample space (e.g., the set \{H, T\}) to a measurable space, often the real numbers (e.g., \{-1, 1\}, in which 1 corresponds to H and −1 corresponds to T). Informally, randomness typically represents some fundamental element of chance, such as in the roll of a die; it may also represent uncertainty, such as measurement error. However, the interpretation of probability is philosophically complicated, and even in specific cases is not always straightforward. The purely mathematical analysis of random variables is independent of such interpretational difficulties, and can be based upon a rigorous axiomatic setup. In the formal mathematical language of measure theory, a random var ...
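A minimal sketch (ours) of the coin example: the random variable is simply a function from the sample space {H, T} to the real numbers {−1, 1}.

import random

sample_space = ["H", "T"]
X = {"H": 1, "T": -1}  # the random variable as a mapping from outcomes to reals

outcome = random.choice(sample_space)  # one realization of the coin flip
print(outcome, X[outcome])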



Mode (statistics)
The mode is the value that appears most often in a set of data values. If ''X'' is a discrete random variable, the mode is the value ''x'' (i.e., ''X'' = ''x'') at which the probability mass function takes its maximum value. In other words, it is the value that is most likely to be sampled. Like the statistical mean and median, the mode is a way of expressing, in a (usually) single number, important information about a random variable or a population. The numerical value of the mode is the same as that of the mean and median in a normal distribution, and it may be very different in highly skewed distributions. The mode is not necessarily unique to a given discrete distribution, since the probability mass function may take the same maximum value at several points ''x''1, ''x''2, etc. The most extreme case occurs in uniform distributions, where all values occur equally frequently. When the probability density function of a continuous distribution has multiple local maxima it is common to refer to all of the local ...
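As a small illustration (not from the article), the mode of a data sample is its most frequent value; the sample below is made up.

from collections import Counter

data = [2, 3, 3, 5, 3, 2, 7, 3, 5]
mode, freq = Counter(data).most_common(1)[0]  # the value that appears most often
print(mode, freq)  # -> 3, which appears 4 times

# If several values tie for the highest count, most_common() reports only one of
# them, whereas the mode of a distribution need not be unique.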



Expected Value
In probability theory, the expected value (also called expectation, expectancy, mathematical expectation, mean, average, or first moment) is a generalization of the weighted average. Informally, the expected value is the arithmetic mean of a large number of independently selected outcomes of a random variable. The expected value of a random variable with a finite number of outcomes is a weighted average of all possible outcomes. In the case of a continuum of possible outcomes, the expectation is defined by integration. In the axiomatic foundation for probability provided by measure theory, the expectation is given by Lebesgue integration. The expected value of a random variable ''X'' is often denoted by E(''X''), E[''X''], or E''X'', with E also often stylized as \mathrm{E} or \mathbb{E}. History The idea of the expected value originated in the middle of the 17th century from the study of the so-called problem of points, which seeks to divide the stakes ''in a fair way'' between two players, who have to end th ...
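A minimal sketch (ours) of the finite-outcome case: the expectation is the probability-weighted average of the outcomes, illustrated here for a fair die.

# Expected value of a discrete random variable with finitely many outcomes:
outcomes = [1, 2, 3, 4, 5, 6]
probs = [1 / 6] * 6  # a fair six-sided die

expected = sum(x * p for x, p in zip(outcomes, probs))
print(expected)  # -> 3.5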



Standard Deviation
In statistics, the standard deviation is a measure of the amount of variation or dispersion of a set of values. A low standard deviation indicates that the values tend to be close to the mean (also called the expected value) of the set, while a high standard deviation indicates that the values are spread out over a wider range. Standard deviation may be abbreviated SD, and is most commonly represented in mathematical texts and equations by the lower case Greek letter σ (sigma), for the population standard deviation, or the Latin letter '' s'', for the sample standard deviation. The standard deviation of a random variable, sample, statistical population, data set, or probability distribution is the square root of its variance. It is algebraically simpler, though in practice less robust, than the average absolute deviation. A useful property of the standard deviation is that, unlike the variance, it is expressed in the same unit as the data. The standard deviation of a popu ...
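As a short illustration (not from the article), Python's standard library exposes both the population and the sample standard deviation; the data below are a made-up example.

import statistics

values = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]

sigma = statistics.pstdev(values)  # population standard deviation (divides by n)
s = statistics.stdev(values)       # sample standard deviation (divides by n - 1)
print(sigma, s)                    # -> 2.0 and about 2.14 for this data set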



Carl Friedrich Gauss
Johann Carl Friedrich Gauss (German: Gauß; Latin: Carolus Fridericus Gauss; 30 April 1777 – 23 February 1855) was a German mathematician and physicist who made significant contributions to many fields in mathematics and science. Sometimes referred to as the ''Princeps mathematicorum'' (Latin for 'the foremost of mathematicians') and "the greatest mathematician since antiquity", Gauss had an exceptional influence in many fields of mathematics and science, and he is ranked among history's most influential mathematicians. Biography Early years Johann Carl Friedrich Gauss was born on 30 April 1777 in Brunswick (Braunschweig), in the Duchy of Brunswick-Wolfenbüttel (now part of Lower Saxony, Germany), to poor, working-class parents. His mother was illiterate and never recorded the date of his birth, remembering only that he had been born on a Wednesday, eight days before the Feast of the Ascension (which occurs 39 days after Easter). Ga ...






Chebyshev's Inequality
In probability theory, Chebyshev's inequality (also called the Bienaymé–Chebyshev inequality) guarantees that, for a wide class of probability distributions, no more than a certain fraction of values can be more than a certain distance from the mean. Specifically, no more than 1/''k''2 of the distribution's values can be ''k'' or more standard deviations away from the mean (or equivalently, at least 1 − 1/''k''2 of the distribution's values are less than ''k'' standard deviations away from the mean). In statistics, the rule is often called Chebyshev's theorem, referring to the range of standard deviations around the mean. The inequality has great utility because it can be applied to any probability distribution in which the mean and variance are defined. For example, it can be used to prove the weak law of large numbers. Its practical usage is similar to the 68–95–99.7 rule, which applies only to normal distributions. Chebyshev's inequality is more general, stating th ...
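A quick empirical sketch (ours, using an arbitrary exponential sample) of the 1/''k''2 bound: the observed fraction of values at least ''k'' standard deviations from the mean should never exceed it.

import random
import statistics

random.seed(1)
sample = [random.expovariate(1.0) for _ in range(100_000)]
mu, sigma = statistics.fmean(sample), statistics.pstdev(sample)

k = 2.0
tail = sum(abs(x - mu) >= k * sigma for x in sample) / len(sample)
print(tail, "<=", 1 / k ** 2)  # empirical tail fraction vs Chebyshev's bound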