Cantor Distribution
The Cantor distribution is the probability distribution whose cumulative distribution function is the Cantor function. This distribution has neither a probability density function nor a probability mass function, since although its cumulative distribution function is a continuous function, the distribution is not absolutely continuous with respect to Lebesgue measure, nor does it have any point-masses. It is thus neither a discrete nor an absolutely continuous probability distribution, nor is it a mixture of these. Rather it is an example of a singular distribution. Its cumulative distribution function is continuous everywhere but horizontal almost everywhere, so it is sometimes referred to as the Devil's staircase, although that term has a more general meaning.

Characterization
The support of the Cantor distribution is the Cantor set, itself the intersection of the (countably infinitely many) sets
\begin{align}
C_0 &= [0,1] \\
C_1 &= [0,1/3] \cup [2/3,1] \\
C_2 &= [0,1/9] \cup [2/9,1/3] \cup [2/3,7/9] \cup [8/9,1] \\
&\;\;\vdots
\end{align} ...
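As a rough illustration of both facts, the sketch below (Python; the function names are only illustrative) draws from the Cantor distribution by choosing each ternary digit to be 0 or 2 at random, and evaluates its CDF, the Cantor function, by the standard ternary-to-binary digit map.

```python
import random

def sample_cantor(n_digits=40):
    # Each base-3 digit is 0 or 2 with probability 1/2, so the sample lies
    # in the Cantor set (truncated here to n_digits ternary digits).
    x = 0.0
    for k in range(1, n_digits + 1):
        x += (2 * random.randint(0, 1)) / 3 ** k
    return x

def cantor_cdf(x, n_digits=40):
    # Evaluate the Cantor function (the CDF of the Cantor distribution):
    # read ternary digits of x until the first digit 1, map digits 0/2 to
    # binary digits 0/1, and read the result as a binary expansion.
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    result, frac = 0.0, x
    for k in range(1, n_digits + 1):
        frac *= 3
        digit = int(frac)
        frac -= digit
        if digit == 1:                  # x fell in a removed middle third
            return result + 1 / 2 ** k
        result += (digit // 2) / 2 ** k
    return result

# the CDF evaluated at a Cantor-distributed sample is Uniform(0, 1),
# so these values should look like plain uniform draws
print([round(cantor_cdf(sample_cantor()), 4) for _ in range(5)])
```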


Support (mathematics)
In mathematics, the support of a real-valued function f is the subset of the function domain containing the elements which are not mapped to zero. If the domain of f is a topological space, then the support of f is instead defined as the smallest closed set containing all points not mapped to zero. This concept is used very widely in mathematical analysis.

Formulation
Suppose that f : X \to \R is a real-valued function whose domain is an arbitrary set X. The support of f, written \operatorname{supp}(f), is the set of points in X where f is non-zero:
\operatorname{supp}(f) = \{x \in X : f(x) \neq 0\}.
The support of f is the smallest subset of X with the property that f is zero on the subset's complement. If f(x) = 0 for all but a finite number of points x \in X, then f is said to have finite support. If the set X has an additional structure (for example, a topology), then the support of f is defined in an analogous way as the smallest subset of X of an appropriate type such that f vanishes in an appropriate sense on its complement. ...
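A minimal sketch of the set-theoretic definition, assuming a finite domain so the support can be enumerated directly (names are illustrative; the topological closure step is omitted):

```python
def support(f, domain):
    # Set-theoretic support on a finite domain: supp(f) = {x in X : f(x) != 0}.
    # (The closure used for functions on topological spaces is not modeled here.)
    return {x for x in domain if f(x) != 0}

# example: a function on {-3, ..., 3} that is zero everywhere except at +/-1,
# i.e. a function with finite support
f = lambda x: x if abs(x) == 1 else 0
print(support(f, range(-3, 4)))   # {1, -1}
```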


Cumulant
In probability theory and statistics, the cumulants of a probability distribution are a set of quantities that provide an alternative to the moments of the distribution. Any two probability distributions whose moments are identical will have identical cumulants as well, and vice versa. The first cumulant is the mean, the second cumulant is the variance, and the third cumulant is the same as the third central moment. But fourth and higher-order cumulants are not equal to central moments. In some cases theoretical treatments of problems in terms of cumulants are simpler than those using moments. In particular, when two or more random variables are statistically independent, the n-th-order cumulant of their sum is equal to the sum of their n-th-order cumulants. As well, the third and higher-order cumulants of a normal distribution are zero, and it is the only distribution with this property. Just as for moments, where joint moments are used for collections of random variab ...
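A small numerical sketch of the low-order identities and of additivity under independence, using sample estimates in place of exact cumulants (the test distributions and names below are only illustrative choices):

```python
import numpy as np

def first_three_cumulants(x):
    # For orders 1-3 the cumulants coincide with the mean, the variance,
    # and the third central moment (this breaks down at order 4).
    m = x.mean()
    return m, ((x - m) ** 2).mean(), ((x - m) ** 3).mean()

rng = np.random.default_rng(0)
x = rng.exponential(2.0, size=1_000_000)
y = rng.gamma(3.0, 1.0, size=1_000_000)

# additivity under independence: each cumulant of X + Y is (up to sampling
# error) the sum of the corresponding cumulants of X and of Y
for cx, cy, cs in zip(first_three_cumulants(x),
                      first_three_cumulants(y),
                      first_three_cumulants(x + y)):
    print(round(cx + cy, 2), "vs", round(cs, 2))
```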



Bernoulli Number
In mathematics, the Bernoulli numbers are a sequence of rational numbers which occur frequently in analysis. The Bernoulli numbers appear in (and can be defined by) the Taylor series expansions of the tangent and hyperbolic tangent functions, in Faulhaber's formula for the sum of m-th powers of the first n positive integers, in the Euler–Maclaurin formula, and in expressions for certain values of the Riemann zeta function. The values of the first 20 Bernoulli numbers are given in the adjacent table. Two conventions are used in the literature, denoted here by B^{-}_n and B^{+}_n; they differ only for n = 1, where B^{-}_1 = -1/2 and B^{+}_1 = +1/2. For every odd n > 1, B_n = 0. For every even n > 0, B_n is negative if n is divisible by 4 and positive otherwise. The Bernoulli numbers are special values of the Bernoulli polynomials B_n(x), with B^{-}_n = B_n(0) and B^{+}_n = B_n(1). The Bernoulli numbers were discovered around the same time by the Swiss mathematician Jacob Bernoulli, after whom they are named, and indepe ...
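As a small aside, the numbers can be generated exactly from the standard recurrence sum_{k=0}^{n} C(n+1, k) B_k = 0; the sketch below (Python, illustrative naming) uses it in the B^{-} convention:

```python
from fractions import Fraction
from math import comb

def bernoulli(n_max):
    # Bernoulli numbers B_0 .. B_n_max in the B^- convention (B_1 = -1/2),
    # from the recurrence  sum_{k=0}^{n} C(n+1, k) * B_k = 0.
    b = [Fraction(1)]
    for n in range(1, n_max + 1):
        s = sum(comb(n + 1, k) * b[k] for k in range(n))
        b.append(-s / (n + 1))
    return b

print([str(bk) for bk in bernoulli(8)])
# ['1', '-1/2', '1/6', '0', '-1/30', '0', '1/42', '0', '-1/30']
```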






Central Moment
In probability theory and statistics, a central moment is a moment of a probability distribution of a random variable about the random variable's mean; that is, it is the expected value of a specified integer power of the deviation of the random variable from the mean. The various moments form one set of values by which the properties of a probability distribution can be usefully characterized. Central moments are used in preference to ordinary moments, computed in terms of deviations from the mean instead of from zero, because the higher-order central moments relate only to the spread and shape of the distribution, rather than also to its location. Sets of central moments can be defined for both univariate and multivariate distributions.

Univariate moments
The nth moment about the mean (or nth central moment) of a real-valued random variable X is the quantity \mu_n := \operatorname{E}[(X - \operatorname{E}[X])^n] ...
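A quick sample-based illustration of the definition (the names and the normal test distribution are illustrative choices):

```python
import numpy as np

def central_moment(x, n):
    # sample version of the n-th central moment  E[(X - E[X])**n]
    x = np.asarray(x, dtype=float)
    return ((x - x.mean()) ** n).mean()

rng = np.random.default_rng(1)
x = rng.normal(loc=5.0, scale=2.0, size=1_000_000)
print(central_moment(x, 1))   # ~ 0   (the first central moment is always 0)
print(central_moment(x, 2))   # ~ 4   (the variance, scale**2)
print(central_moment(x, 3))   # ~ 0   (a symmetric distribution has no skew)
```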



Variance
In probability theory and statistics, variance is the expectation of the squared deviation of a random variable from its population mean or sample mean. Variance is a measure of dispersion, meaning it is a measure of how far a set of numbers is spread out from their average value. Variance has a central role in statistics, where some ideas that use it include descriptive statistics, statistical inference, hypothesis testing, goodness of fit, and Monte Carlo sampling. Variance is an important tool in the sciences, where statistical analysis of data is common. The variance is the square of the standard deviation, the second central moment of a distribution, and the covariance of the random variable with itself, and it is often represented by \sigma^2, s^2, \operatorname{Var}(X), V(X), or \mathbb{V}(X). An advantage of variance as a measure of dispersion is that it is more amenable to algebraic manipulation than other measures of dispersion such as the expected absolute deviation; for e ...
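A short numerical check of these equivalent descriptions of the variance (an illustrative sketch; the uniform test distribution is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(0.0, 6.0, size=1_000_000)   # Var = 6**2 / 12 = 3

print(x.var())                              # variance (population form)
print(((x - x.mean()) ** 2).mean())         # second central moment: same
print((x ** 2).mean() - x.mean() ** 2)      # E[X^2] - E[X]^2: same
print(np.cov(x, x, bias=True)[0, 1])        # covariance of X with itself: same
print(x.std() ** 2)                         # square of the standard deviation
```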


Law Of Total Variance
In probability theory, the law of total variance (also called the variance decomposition formula, the conditional variance formula, the law of iterated variances, or Eve's law) states that if X and Y are random variables on the same probability space, and the variance of Y is finite, then
\operatorname{Var}(Y) = \operatorname{E}[\operatorname{Var}(Y \mid X)] + \operatorname{Var}(\operatorname{E}[Y \mid X]).
In language perhaps better known to statisticians than to probability theorists, the two terms are the "unexplained" and the "explained" components of the variance respectively (cf. fraction of variance unexplained, explained variation). In actuarial science, specifically credibility theory, the first component is called the expected value of the process variance (EVPV) and the second is called the variance of the hypothetical means (VHM). These two components are also the source of the term "Eve's law", from the initials EV VE for "expectation of variance" and "variance of expectation".

Formulation
There is a ...
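A numerical sanity check of the decomposition on a simple two-group mixture (an illustrative sketch with arbitrarily chosen group parameters):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000

# X picks one of two groups; Y given X is normal with group-specific mean/sd
x = rng.integers(0, 2, size=n)
means, sds = np.array([0.0, 3.0]), np.array([1.0, 2.0])
y = rng.normal(means[x], sds[x])

p = np.array([np.mean(x == k) for k in (0, 1)])            # P(X = k)
cond_var = np.array([y[x == k].var() for k in (0, 1)])     # Var(Y | X = k)
cond_mean = np.array([y[x == k].mean() for k in (0, 1)])   # E(Y | X = k)

ev = (p * cond_var).sum()                                  # E[Var(Y | X)]
ve = (p * (cond_mean - (p * cond_mean).sum()) ** 2).sum()  # Var(E[Y | X])

print(y.var())       # total variance of Y
print(ev + ve)       # law of total variance: same, up to sampling error
```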



Expected Value
In probability theory, the expected value (also called expectation, expectancy, mathematical expectation, mean, average, or first moment) is a generalization of the weighted average. Informally, the expected value is the arithmetic mean of a large number of independently selected outcomes of a random variable. The expected value of a random variable with a finite number of outcomes is a weighted average of all possible outcomes. In the case of a continuum of possible outcomes, the expectation is defined by integration. In the axiomatic foundation for probability provided by measure theory, the expectation is given by Lebesgue integration. The expected value of a random variable X is often denoted by E(X), E[X], or EX, with E also often stylized as \mathrm{E} or \mathbb{E}.

History
The idea of the expected value originated in the middle of the 17th century from the study of the so-called problem of points, which seeks to divide the stakes in a fair way between two players, who have to end th ...
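Both cases can be illustrated in a few lines (a rough sketch; the fair die and the Beta(2, 1) density are arbitrary examples):

```python
import numpy as np

# finite case: the expectation is a probability-weighted average of outcomes
outcomes = np.arange(1, 7)
probs = np.full(6, 1 / 6)
print((outcomes * probs).sum())        # 3.5 for a fair six-sided die

# continuous case: the expectation is the integral of x times the density;
# here a crude Riemann sum for the Beta(2, 1) density f(x) = 2x on [0, 1]
xs = np.linspace(0.0, 1.0, 100_001)
dx = xs[1] - xs[0]
print((xs * (2 * xs) * dx).sum())      # ~ 2/3
```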



Random Variable
A random variable (also called random quantity, aleatory variable, or stochastic variable) is a mathematical formalization of a quantity or object which depends on random events. It is a mapping or a function from possible outcomes (e.g., the possible upper sides of a flipped coin such as heads H and tails T) in a sample space (e.g., the set \{H, T\}) to a measurable space, often the real numbers (e.g., \{-1, 1\}, in which 1 corresponds to H and -1 corresponds to T). Informally, randomness typically represents some fundamental element of chance, such as in the roll of a die; it may also represent uncertainty, such as measurement error. However, the interpretation of probability is philosophically complicated, and even in specific cases is not always straightforward. The purely mathematical analysis of random variables is independent of such interpretational difficulties, and can be based upon a rigorous axiomatic setup. In the formal mathematical language of measure theory, a random var ...
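A toy sketch of the coin-flip mapping described above (illustrative names only):

```python
import random

sample_space = ["H", "T"]      # possible outcomes of one coin flip

def X(outcome):
    # a random variable is a (deterministic) function on the sample space
    return 1 if outcome == "H" else -1

# the randomness lives in which outcome occurs, not in the function X itself
draws = [X(random.choice(sample_space)) for _ in range(10)]
print(draws)
```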


Singular Function
In mathematics, a real-valued function f on the interval [a, b] is said to be singular if it has the following properties:
* f is continuous on [a, b].
* there exists a set N of measure 0 such that for all x outside of N the derivative f′(x) exists and is zero, that is, the derivative of f vanishes almost everywhere.
* f is non-constant on [a, b].
A standard example of a singular function is the Cantor function, which is sometimes called the devil's staircase (a term also used for singular functions in general). There are, however, other functions that have been given that name. One is defined in terms of the circle map. If f(x) = 0 for all x ≤ a and f(x) = 1 for all x ≥ b, then the function can be taken to represent a cumulative distribution function for a random variable which is neither a discrete random variable (since the probability is zero for each point) nor an absolutel ...
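For concreteness, a small sketch of the standard example: the Cantor function computed from its self-similar definition, which is continuous, climbs from 0 to 1, and is locally constant on every removed middle third (an illustrative implementation):

```python
def cantor(x, depth=30):
    # Cantor function on [0, 1] via its self-similar definition:
    # f(x) = f(3x)/2 on [0, 1/3],  1/2 on [1/3, 2/3],  1/2 + f(3x - 2)/2 on [2/3, 1]
    if x <= 0:
        return 0.0
    if x >= 1:
        return 1.0
    if depth == 0:
        return x                        # harmless approximation at the cutoff
    if x < 1 / 3:
        return cantor(3 * x, depth - 1) / 2
    if x < 2 / 3:
        return 0.5                      # flat on a removed middle third
    return 0.5 + cantor(3 * x - 2, depth - 1) / 2

# continuous and rising from 0 to 1, yet locally constant almost everywhere
print(cantor(0.0), cantor(0.5), cantor(1.0))   # 0.0 0.5 1.0
print(cantor(0.40), cantor(0.60))              # both 0.5
```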



Cantor Set
In mathematics, the Cantor set is a set of points lying on a single line segment that has a number of unintuitive properties. It was discovered in 1874 by Henry John Stephen Smith and introduced by German mathematician Georg Cantor in 1883. Through consideration of this set, Cantor and others helped lay the foundations of modern point-set topology. The most common construction is the Cantor ternary set, built by removing the middle third of a line segment and then repeating the process with the remaining shorter segments. Cantor mentioned the ternary construction only in passing, as an example of a more general idea, that of a perfect set that is nowhere dense. More generally, in topology, a Cantor space is a topological space homeomorphic to the Cantor ternary set (equipped with its subspace topology). By a theorem of Brouwer, this is equivalent to being perfect, nonempty, compact, metrizable and zero-dimensional.

Construction and formula of the ternary set
The Cantor tern ...
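A short sketch of the ternary construction, listing the intervals that make up the first few stages C_n (illustrative naming; exact rational endpoints via fractions):

```python
from fractions import Fraction

def cantor_stage(n):
    # intervals making up C_n in the middle-thirds construction
    intervals = [(Fraction(0), Fraction(1))]
    for _ in range(n):
        refined = []
        for a, b in intervals:
            third = (b - a) / 3
            refined.append((a, a + third))        # keep the left third
            refined.append((b - third, b))        # keep the right third
        intervals = refined
    return intervals

for n in range(3):
    print(n, [f"[{a}, {b}]" for a, b in cantor_stage(n)])
# 0 ['[0, 1]']
# 1 ['[0, 1/3]', '[2/3, 1]']
# 2 ['[0, 1/9]', '[2/9, 1/3]', '[2/3, 7/9]', '[8/9, 1]']
```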