In probability theory and statistics, the cumulants of a probability distribution are a set of quantities that provide an alternative to the ''moments'' of the distribution. Any two probability distributions whose moments are identical will have identical cumulants as well, and vice versa. The first cumulant is the mean, the second cumulant is the variance, and the third cumulant is the same as the third central moment. But fourth and higher-order cumulants are not equal to central moments. In some cases theoretical treatments of problems in terms of cumulants are simpler than those using moments. In particular, when two or more random variables are statistically independent, the ''n''-th-order cumulant of their sum is equal to the sum of their ''n''-th-order cumulants. As well, the third and higher-order cumulants of a normal distribution are zero, and it is the only distribution with this property. Just as for moments, where ''joint moments'' are used for collections of random variables, it is possible to define ''joint cumulants''.


Definition

The cumulants of a random variable X are defined using the cumulant-generating function K(t), which is the natural logarithm of the moment-generating function:
:K(t)=\log\operatorname{E}\left[e^{tX}\right].
The cumulants \kappa_n are obtained from a power series expansion of the cumulant generating function:
:K(t)=\sum_{n=1}^\infty \kappa_n \frac{t^n}{n!} =\kappa_1 \frac{t}{1!} + \kappa_2 \frac{t^2}{2!}+ \kappa_3 \frac{t^3}{3!}+ \cdots = \mu t + \sigma^2 \frac{t^2}{2} + \cdots.
This expansion is a Maclaurin series, so the ''n''-th cumulant can be obtained by differentiating the above expansion ''n'' times and evaluating the result at zero:
:\kappa_n = K^{(n)}(0).
If the moment-generating function does not exist, the cumulants can be defined in terms of the relationship between cumulants and moments discussed later.


Alternative definition of the cumulant generating function

Some writers prefer to define the cumulant-generating function as the natural logarithm of the characteristic function, which is sometimes also called the ''second'' characteristic function,
:H(t)=\log\operatorname{E}\left[e^{itX}\right]=\sum_{n=1}^\infty \kappa_n \frac{(it)^n}{n!}=\mu it - \sigma^2 \frac{t^2}{2} + \cdots
An advantage of H(t) (in some sense the function K(t) evaluated for purely imaginary arguments) is that \operatorname{E}\left[e^{itX}\right] is well defined for all real values of t even when \operatorname{E}\left[e^{tX}\right] is not well defined for all real values of t, such as can occur when there is "too much" probability that X has a large magnitude. Although the function H(t) will be well defined, it will nonetheless mimic K(t) in terms of the length of its Maclaurin series, which may not extend beyond (or, rarely, even to) linear order in the argument t, and in particular the number of cumulants that are well defined will not change. Nevertheless, even when H(t) does not have a long Maclaurin series, it can be used directly in analyzing and, particularly, adding random variables. Both the Cauchy distribution (also called the Lorentzian) and more generally stable distributions (related to the Lévy distribution) are examples of distributions for which the power-series expansions of the generating functions have only finitely many well-defined terms.
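A small numeric sketch (an addition assuming Python with NumPy) illustrates the point: the Cauchy distribution has no moment-generating function, but its characteristic function exp(-|t|) is perfectly well defined, and H(t) = -|t| has no Maclaurin expansion even to linear order:

```python
# Sketch: the empirical characteristic function of Cauchy samples matches
# exp(-|t|), so H(t) = log E[exp(itX)] = -|t|, which is not differentiable
# at t = 0 -- consistent with the Cauchy distribution having no cumulants.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_cauchy(200_000)

for t in (0.5, 1.0, 2.0):
    ecf = np.mean(np.exp(1j * t * x))    # empirical E[e^{itX}]
    print(t, round(ecf.real, 3), round(np.exp(-abs(t)), 3))
```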


Some basic properties

The ''n''-th cumulant \kappa_n(X) of (the distribution of) a random variable X enjoys the following properties:
* If n>1 and c is constant (i.e. not random) then \kappa_n(X+c) = \kappa_n(X), i.e. the cumulant is translation-invariant. (If n=1 then we have \kappa_1(X+c) = \kappa_1(X)+c.)
* If c is constant (i.e. not random) then \kappa_n(cX) = c^n\kappa_n(X), i.e. the ''n''-th cumulant is homogeneous of degree ''n''.
* If random variables X_1,\ldots,X_m are independent then
:: \kappa_n(X_1+\cdots+X_m) = \kappa_n(X_1) + \cdots + \kappa_n(X_m),
: i.e. the cumulant is cumulative, hence the name.
The cumulative property follows quickly by considering the cumulant-generating function:
:\begin{align} K_{X_1+\cdots+X_m}(t) & =\log\operatorname{E}\left[e^{t(X_1+\cdots+X_m)}\right] \\ & = \log \left(\operatorname{E}\left[e^{tX_1}\right]\cdots \operatorname{E}\left[e^{tX_m}\right]\right) \\ & = \log\operatorname{E}\left[e^{tX_1}\right]+ \cdots + \log \operatorname{E}\left[e^{tX_m}\right] \\ &= K_{X_1}(t) + \cdots + K_{X_m}(t), \end{align}
so that each cumulant of a sum of independent random variables is the sum of the corresponding cumulants of the addends. That is, when the addends are statistically independent, the mean of the sum is the sum of the means, the variance of the sum is the sum of the variances, the third cumulant (which happens to be the third central moment) of the sum is the sum of the third cumulants, and so on for each order of cumulant. A distribution with given cumulants \kappa_n can be approximated through an Edgeworth series.
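The additivity property can be checked by simulation; the following Monte Carlo sketch (an illustrative addition assuming NumPy) uses the fact that the third cumulant is the third central moment:

```python
# Monte Carlo sketch: third cumulants of independent variables add.
# kappa_3 equals the third central moment, so it can be estimated directly.
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(2.0, 1_000_000)   # Exp(scale 2): kappa_3 = 2 * 2**3 = 16
y = rng.gamma(3.0, 1.0, 1_000_000)    # Gamma(3, 1):  kappa_3 = 2 * 3 = 6

def k3(z):
    return np.mean((z - z.mean())**3)  # third central moment = third cumulant

print(k3(x) + k3(y), k3(x + y))        # both approximately 22
```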


The first several cumulants as functions of the moments

All of the higher cumulants are polynomial functions of the central moments, with integer coefficients, but only in degrees 2 and 3 are the cumulants actually central moments.
* \kappa_1(X) = \operatorname{E}(X) = the mean.
* \kappa_2(X) = \operatorname{Var}(X) = \operatorname{E}\big((X-\operatorname{E}(X))^2\big) = the variance, or second central moment.
* \kappa_3(X) = \operatorname{E}\big((X-\operatorname{E}(X))^3\big) = the third central moment.
* \kappa_4(X) = \operatorname{E}\big((X-\operatorname{E}(X))^4\big) - 3\left(\operatorname{E}\big((X-\operatorname{E}(X))^2\big)\right)^2 = the fourth central moment minus three times the square of the second central moment. Thus this is the first case in which cumulants are not simply moments or central moments. The central moments of degree more than 3 lack the cumulative property. (A numeric check of this fourth-order formula appears in the sketch below.)
* \kappa_5(X) = \operatorname{E}\big((X-\operatorname{E}(X))^5\big) - 10\operatorname{E}\big((X-\operatorname{E}(X))^3\big)\operatorname{E}\big((X-\operatorname{E}(X))^2\big).
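As a hedged numeric sketch (an addition; it assumes NumPy and SciPy, whose stats.kstat computes unbiased cumulant estimators), the fourth-order formula can be checked on Poisson data, where every cumulant equals the rate:

```python
# Sketch: estimate kappa_4 two ways on Poisson(3) data, where every
# cumulant equals 3 -- via the central-moment formula above, and via the
# k-statistic (an unbiased cumulant estimator) from scipy.stats.kstat.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
z = rng.poisson(3.0, 1_000_000).astype(float)

d = z - z.mean()
print(np.mean(d**4) - 3 * np.mean(d**2)**2)  # fourth central moment - 3*m2^2
print(stats.kstat(z, 4))                     # both approximately 3
```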


Cumulants of some discrete probability distributions

* The constant random variables X = \mu. The cumulant generating function is K(t)=\mu t. The first cumulant is \kappa_1 = K'(0) = \mu and the other cumulants are zero, \kappa_2 = \kappa_3 = \kappa_4 = \cdots = 0.
* The Bernoulli distributions, (number of successes in one trial with probability p of success). The cumulant generating function is K(t)=\log(1-p+pe^t). The first cumulants are \kappa_1 = K'(0) = p and \kappa_2 = K''(0) = p(1-p). The cumulants satisfy a recursion formula
:: \kappa_{n+1}=p(1-p)\frac{d\kappa_n}{dp}.
* The geometric distributions, (number of failures before one success with probability p of success on each trial). The cumulant generating function is K(t)=\log(p/(1-(1-p)e^t)). The first cumulants are \kappa_1 = K'(0) = p^{-1}-1 and \kappa_2 = K''(0) = \kappa_1 p^{-1}. Substituting p=(\mu+1)^{-1} gives K'(t)=((\mu^{-1}+1)e^{-t}-1)^{-1} and \kappa_1=\mu.
* The Poisson distributions. The cumulant generating function is K(t)=\mu(e^t-1). All cumulants are equal to the parameter: \kappa_1=\kappa_2=\kappa_3=\cdots=\mu.
* The binomial distributions, (number of successes in n independent trials with probability p of success on each trial). The special case n=1 is a Bernoulli distribution. Every cumulant is just n times the corresponding cumulant of the corresponding Bernoulli distribution. The cumulant generating function is K(t)=n\log(1-p+pe^t). The first cumulants are \kappa_1 = K'(0) = np and \kappa_2 = K''(0) = \kappa_1(1-p). Substituting p=\mu n^{-1} gives K'(t)=((\mu^{-1}-n^{-1})e^{-t}+n^{-1})^{-1} and \kappa_1=\mu. The limiting case n^{-1}=0 is a Poisson distribution.
* The negative binomial distributions, (number of failures before r successes with probability p of success on each trial). The special case r=1 is a geometric distribution. Every cumulant is just r times the corresponding cumulant of the corresponding geometric distribution. The derivative of the cumulant generating function is K'(t)=r((1-p)^{-1}e^{-t}-1)^{-1}. The first cumulants are \kappa_1=K'(0)=r(p^{-1}-1) and \kappa_2=K''(0)=\kappa_1 p^{-1}. Substituting p=(\mu r^{-1}+1)^{-1} gives K'(t)=((\mu^{-1}+r^{-1})e^{-t}-r^{-1})^{-1} and \kappa_1=\mu. Comparing these formulas to those of the binomial distributions explains the name "negative binomial distribution". The limiting case r^{-1}=0 is a Poisson distribution.
Introducing the variance-to-mean ratio
:\varepsilon=\mu^{-1}\sigma^2=\kappa_1^{-1}\kappa_2,
the above probability distributions get a unified formula for the derivative of the cumulant generating function:
:K'(t)=\mu(1+\varepsilon(e^{-t}-1))^{-1}.
The second derivative is
:K''(t)=\mu\varepsilon e^t(\varepsilon-(\varepsilon-1)e^t)^{-2},
confirming that the first cumulant is \kappa_1=K'(0)=\mu and the second cumulant is \kappa_2=K''(0)=\mu\varepsilon. (This unified formula is checked symbolically in the sketch below.) The constant random variables X=\mu have \varepsilon=0. The binomial distributions have \varepsilon=1-p so that 0<\varepsilon<1. The Poisson distributions have \varepsilon=1. The negative binomial distributions have \varepsilon=p^{-1} so that \varepsilon>1. Note the analogy to the classification of conic sections by eccentricity: circles \varepsilon=0, ellipses 0<\varepsilon<1, parabolas \varepsilon=1, hyperbolas \varepsilon>1.
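As a symbolic sketch (an addition assuming SymPy), the unified formula can be verified against, for example, the binomial cumulant generating function with \mu = np and \varepsilon = 1-p:

```python
# Sketch: check K'(t) = mu * (1 + eps*(exp(-t) - 1))**(-1) against the
# binomial cgf K(t) = n*log(1 - p + p*exp(t)), with mu = n*p, eps = 1 - p.
import sympy as sp

t, n, p = sp.symbols('t n p', positive=True)
K = n * sp.log(1 - p + p * sp.exp(t))
mu, eps = n * p, 1 - p
unified = mu / (1 + eps * (sp.exp(-t) - 1))

print(sp.simplify(sp.diff(K, t) - unified))   # 0
```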


Cumulants of some continuous probability distributions

* For the normal distribution with expected value \mu and variance \sigma^2, the cumulant generating function is K(t)=\mu t+\sigma^2 t^2/2. The first and second derivatives of the cumulant generating function are K'(t)=\mu+\sigma^2 t and K''(t)=\sigma^2. The cumulants are \kappa_1=\mu, \kappa_2=\sigma^2, and \kappa_3=\kappa_4=\cdots=0. The special case \sigma^2=0 is a constant random variable X=\mu.
* The cumulants of the uniform distribution on the interval [-1,0] are \kappa_n=B_n/n, where B_n is the ''n''-th Bernoulli number.
* The cumulants of the exponential distribution with parameter \lambda are \kappa_n=\lambda^{-n}(n-1)! (this case is verified symbolically in the sketch below).
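The exponential case can be checked directly (an illustrative addition assuming SymPy):

```python
# Sketch: the exponential cgf is K(t) = -log(1 - t/lambda) for t < lambda;
# its derivatives at 0 give kappa_n = (n-1)! * lambda**(-n).
import sympy as sp

t, lam = sp.symbols('t lambda', positive=True)
K = -sp.log(1 - t / lam)
print([sp.simplify(sp.diff(K, t, n).subs(t, 0)) for n in range(1, 5)])
# [1/lambda, lambda**(-2), 2/lambda**3, 6/lambda**4]
```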


Some properties of the cumulant generating function

The cumulant generating function K(t), if it exists, is infinitely differentiable and convex, and passes through the origin. Its first derivative ranges monotonically in the open interval from the infimum to the supremum of the support of the probability distribution, and its second derivative is strictly positive everywhere it is defined, except for the degenerate distribution of a single point mass. The cumulant-generating function exists if and only if the tails of the distribution are majorized by an exponential decay, that is, (''see'' Big O notation)
:\begin{align} & \exists c>0,\,\, F(x)=O(e^{cx}), \; x\to-\infty; \text{ and} \\ & \exists d>0,\,\, 1-F(x)=O(e^{-dx}), \; x\to+\infty; \end{align}
where F is the cumulative distribution function. The cumulant-generating function will have vertical asymptote(s) at the negative supremum of such c, if such a supremum exists, and at the supremum of such d, if such a supremum exists; otherwise it will be defined for all real numbers.

If the support of a random variable X has finite upper or lower bounds, then its cumulant-generating function y=K(t), if it exists, approaches asymptote(s) whose slope is equal to the supremum and/or infimum of the support,
:\begin{align} y & =(t+1)\inf \operatorname{supp}X-\mu(X), \text{ and} \\ y & =(t-1)\sup \operatorname{supp}X+\mu(X), \end{align}
respectively, lying above both these lines everywhere. (The integrals
:\int_{-\infty}^0 \left[\inf \operatorname{supp}X-K'(t)\right]\,dt, \qquad \int_{+\infty}^0 \left[\sup \operatorname{supp}X-K'(t)\right]\,dt
yield the y-intercepts of these asymptotes, since K(0)=0.)

For a shift of the distribution by c, K_{X+c}(t)=K_X(t)+ct. For a degenerate point mass at c, the cumulant generating function is the straight line K_c(t)=ct, and more generally, K_{X+Y}=K_X+K_Y if and only if X and Y are independent and their cumulant generating functions exist (subindependence and the existence of second moments sufficing to imply independence).

The natural exponential family of a distribution may be realized by shifting or translating K(t), and adjusting it vertically so that it always passes through the origin: if f is the pdf with cumulant generating function K(t)=\log M(t), and f\mid\theta is its natural exponential family, then
:f(x\mid\theta)=\frac{1}{M(\theta)}e^{\theta x} f(x), \qquad K(t\mid\theta)=K(t+\theta)-K(\theta).

If K(t) is finite for a range t_1<\operatorname{Re}(t)<t_2 then if t_1<0<t_2 then K(t) is analytic and infinitely differentiable for t_1<\operatorname{Re}(t)<t_2. Moreover for t real and t_1<t<t_2, K(t) is strictly convex, and K'(t) is strictly increasing.
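A quick numeric sketch (an addition assuming NumPy) of the first property: for a Bernoulli(p) variable, K(t) = log(1 - p + p e^t) is convex, and K'(t) sweeps out exactly the open interval (0, 1) between the infimum and supremum of the support:

```python
# Sketch: for Bernoulli(p), K'(t) = p*e^t / (1 - p + p*e^t) increases
# monotonically from 0 (as t -> -inf) to 1 (as t -> +inf), the infimum
# and supremum of the support {0, 1}.
import numpy as np

p = 0.3
t = np.linspace(-10.0, 10.0, 9)
Kp = p * np.exp(t) / (1 - p + p * np.exp(t))
print(np.round(Kp, 4))          # strictly increasing, within (0, 1)
print(np.all(np.diff(Kp) > 0))  # True
```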


Further properties of cumulants


A negative result

Given the results for the cumulants of the normal distribution, it might be hoped to find families of distributions for which \kappa_m=\kappa_{m+1}=\cdots=0 for some m>3, with the lower-order cumulants (orders 3 to m-1) being non-zero. There are no such distributions. The underlying result here is that the cumulant generating function cannot be a finite-order polynomial of degree greater than 2.


Cumulants and moments

The moment generating function is given by:
:M(t) = 1+\sum_{n=1}^\infty \frac{\mu'_n t^n}{n!} = \exp \left(\sum_{n=1}^\infty \frac{\kappa_n t^n}{n!}\right) = \exp(K(t)).
So the cumulant generating function is the logarithm of the moment generating function
:K(t) = \log M(t).
The first cumulant is the expected value; the second and third cumulants are respectively the second and third central moments (the second central moment is the variance); but the higher cumulants are neither moments nor central moments, but rather more complicated polynomial functions of the moments.

The moments can be recovered in terms of cumulants by evaluating the ''n''-th derivative of \exp(K(t)) at t=0,
:\mu'_n = M^{(n)}(0) = \left. \frac{\mathrm{d}^n}{\mathrm{d}t^n} \exp(K(t)) \right|_{t=0}.
Likewise, the cumulants can be recovered in terms of moments by evaluating the ''n''-th derivative of \log M(t) at t=0,
:\kappa_n = K^{(n)}(0) = \left. \frac{\mathrm{d}^n}{\mathrm{d}t^n} \log M(t) \right|_{t=0}.
The explicit expression for the ''n''-th moment in terms of the first ''n'' cumulants, and vice versa, can be obtained by using Faà di Bruno's formula for higher derivatives of composite functions. In general, we have
:\mu'_n = \sum_{k=1}^n B_{n,k}(\kappa_1,\ldots,\kappa_{n-k+1})
:\kappa_n = \sum_{k=1}^n (-1)^{k-1} (k-1)!\, B_{n,k}(\mu'_1, \ldots, \mu'_{n-k+1}),
where B_{n,k} are incomplete (or partial) Bell polynomials.

In the like manner, if the mean is given by \mu, the central moment generating function is given by
:C(t) = \operatorname{E}\left[e^{t(X-\mu)}\right]= e^{-\mu t} M(t) = \exp(K(t) - \mu t),
and the ''n''-th central moment is obtained in terms of cumulants as
:\mu_n = C^{(n)}(0) = \left. \frac{\mathrm{d}^n}{\mathrm{d}t^n} \exp (K(t) - \mu t) \right|_{t=0} = \sum_{k=1}^n B_{n,k}(0,\kappa_2,\ldots,\kappa_{n-k+1}).
Also, for n>1, the ''n''-th cumulant in terms of the central moments is
:\begin{align} \kappa_n & = K^{(n)}(0) = \left. \frac{\mathrm{d}^n}{\mathrm{d}t^n} (\log C(t) + \mu t) \right|_{t=0} \\ & = \sum_{k=1}^n (-1)^{k-1} (k-1)!\, B_{n,k}(0,\mu_2,\ldots,\mu_{n-k+1}). \end{align}
The ''n''-th moment \mu'_n is an ''n''-th-degree polynomial in the first ''n'' cumulants. The first few expressions are:
:\begin{align} \mu'_1 = & \kappa_1 \\ \mu'_2 = & \kappa_2+\kappa_1^2 \\ \mu'_3 = & \kappa_3+3\kappa_2\kappa_1+\kappa_1^3 \\ \mu'_4 = & \kappa_4 + 4\kappa_3\kappa_1 + 3\kappa_2^2 + 6\kappa_2\kappa_1^2 + \kappa_1^4 \\ \mu'_5 = & \kappa_5+5\kappa_4\kappa_1+10\kappa_3\kappa_2 + 10\kappa_3\kappa_1^2 + 15\kappa_2^2\kappa_1 + 10\kappa_2\kappa_1^3 + \kappa_1^5 \\ \mu'_6 = & \kappa_6 + 6\kappa_5\kappa_1 + 15\kappa_4\kappa_2 + 15\kappa_4\kappa_1^2 + 10\kappa_3^2 + 60\kappa_3\kappa_2\kappa_1 + 20\kappa_3\kappa_1^3 \\ & + 15\kappa_2^3 + 45\kappa_2^2\kappa_1^2 + 15\kappa_2\kappa_1^4 + \kappa_1^6. \end{align}
The "prime" distinguishes the moments \mu'_n from the central moments \mu_n. To express the ''central'' moments as functions of the cumulants, just drop from these polynomials all terms in which \kappa_1 appears as a factor:
:\begin{align} \mu_1 & =0 \\ \mu_2 & =\kappa_2 \\ \mu_3 & =\kappa_3 \\ \mu_4 & =\kappa_4+3\kappa_2^2 \\ \mu_5 & =\kappa_5+10\kappa_3\kappa_2 \\ \mu_6 & =\kappa_6+15\kappa_4\kappa_2+10\kappa_3^2+15\kappa_2^3. \end{align}
Similarly, the ''n''-th cumulant \kappa_n is an ''n''-th-degree polynomial in the first ''n'' non-central moments. The first few expressions are:
:\begin{align} \kappa_1 = & \mu'_1 \\ \kappa_2 = & \mu'_2-{\mu'_1}^2 \\ \kappa_3 = & \mu'_3-3\mu'_2\mu'_1+2{\mu'_1}^3 \\ \kappa_4 = & \mu'_4-4\mu'_3\mu'_1-3{\mu'_2}^2+12\mu'_2{\mu'_1}^2-6{\mu'_1}^4 \\ \kappa_5 = & \mu'_5-5\mu'_4\mu'_1-10\mu'_3\mu'_2 + 20\mu'_3{\mu'_1}^2 + 30{\mu'_2}^2\mu'_1-60\mu'_2{\mu'_1}^3 + 24{\mu'_1}^5 \\ \kappa_6 = & \mu'_6-6\mu'_5\mu'_1-15\mu'_4\mu'_2+30\mu'_4{\mu'_1}^2-10{\mu'_3}^2 + 120\mu'_3\mu'_2\mu'_1 \\ & - 120\mu'_3{\mu'_1}^3 + 30{\mu'_2}^3 - 270{\mu'_2}^2{\mu'_1}^2+360\mu'_2{\mu'_1}^4-120{\mu'_1}^6. \end{align}
To express the cumulants \kappa_n for n>1 as functions of the central moments, drop from these polynomials all terms in which \mu'_1 appears as a factor:
:\kappa_2=\mu_2
:\kappa_3=\mu_3
:\kappa_4=\mu_4-3{\mu_2}^2
:\kappa_5=\mu_5-10\mu_3\mu_2
:\kappa_6=\mu_6-15\mu_4\mu_2-10{\mu_3}^2+30{\mu_2}^3.
To express the cumulants \kappa_n for n>2 as functions of the standardized central moments \mu''_n, also set \mu_2=1 in the polynomials:
:\kappa_3=\mu''_3
:\kappa_4=\mu''_4-3
:\kappa_5=\mu''_5-10\mu''_3
:\kappa_6=\mu''_6-15\mu''_4-10{\mu''_3}^2+30.
The cumulants can be related to the moments by differentiating the relationship \log M(t)=K(t) with respect to t, giving M'(t)=K'(t)M(t), which conveniently contains no exponentials or logarithms. Equating the coefficient of t^{n-1}/(n-1)! on the left and right sides and using \mu'_0=1 gives the following formulas for n \ge 1:
:\begin{align} \mu'_1 = & \kappa_1 \\ \mu'_2 = & \kappa_1\mu'_1+\kappa_2 \\ \mu'_3 = & \kappa_1\mu'_2+2\kappa_2\mu'_1+\kappa_3 \\ \mu'_4 = & \kappa_1\mu'_3+3\kappa_2\mu'_2+3\kappa_3\mu'_1+\kappa_4 \\ \mu'_5 = & \kappa_1\mu'_4+4\kappa_2\mu'_3+6\kappa_3\mu'_2+4\kappa_4\mu'_1+\kappa_5 \\ \mu'_6 = & \kappa_1\mu'_5+5\kappa_2\mu'_4+10\kappa_3\mu'_3+10\kappa_4\mu'_2+5\kappa_5\mu'_1+\kappa_6 \\ \mu'_n = & \sum_{m=1}^{n-1}\binom{n-1}{m-1}\kappa_m \mu'_{n-m} + \kappa_n. \end{align}
These allow either \kappa_n or \mu'_n to be computed from the other using knowledge of the lower-order cumulants and moments. The corresponding formulas for the central moments \mu_n for n \ge 2 are formed from these formulas by setting \mu'_1 = \kappa_1 = 0 and replacing each \mu'_n with \mu_n for n \ge 2.
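The last recursion is easy to implement directly; the following sketch (an addition, in plain Python) computes raw moments from cumulants and reproduces the standard normal moments 0, 1, 0, 3, 0, 15:

```python
# Sketch of the recursion above:
#   mu'_n = sum_{m=1}^{n-1} C(n-1, m-1) * kappa_m * mu'_{n-m} + kappa_n.
from math import comb

def raw_moments_from_cumulants(kappa):
    """kappa[i] is the cumulant of order i+1; returns mu'_1 .. mu'_n."""
    mu = []
    for n in range(1, len(kappa) + 1):
        total = kappa[n - 1]
        for m in range(1, n):
            total += comb(n - 1, m - 1) * kappa[m - 1] * mu[n - m - 1]
        mu.append(total)
    return mu

# Standard normal: kappa = (0, 1, 0, 0, 0, 0) gives [0, 1, 0, 3, 0, 15].
print(raw_moments_from_cumulants([0, 1, 0, 0, 0, 0]))
```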


Cumulants and set-partitions

These polynomials have a remarkable combinatorial interpretation: the coefficients count certain partitions of sets. A general form of these polynomials is
:\mu'_n=\sum_\pi \prod_{B\in\pi} \kappa_{|B|}
where
* \pi runs through the list of all partitions of a set of size ''n'';
* "B\in\pi" means B is one of the "blocks" into which the set is partitioned; and
* |B| is the size of the set B.
Thus each monomial is a constant times a product of cumulants in which the sum of the indices is ''n'' (e.g., in the term \kappa_3\kappa_2^2\kappa_1, the sum of the indices is 3 + 2 + 2 + 1 = 8; this appears in the polynomial that expresses the 8th moment as a function of the first eight cumulants). A partition of the integer ''n'' corresponds to each term. The ''coefficient'' in each term is the number of partitions of a set of ''n'' members that collapse to that partition of the integer ''n'' when the members of the set become indistinguishable.
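The partition formula can be evaluated by brute force for small n; this sketch (an addition assuming SymPy's partition enumerator) also recovers the Bell numbers of a later section when every cumulant equals 1:

```python
# Sketch of mu'_n = sum over set partitions pi of prod over blocks B of
# kappa_{|B|}, enumerated directly with sympy's multiset_partitions.
from sympy.utilities.iterables import multiset_partitions

def moment_from_cumulants(n, kappa):
    """kappa[i] is the cumulant of order i+1; returns the n-th raw moment."""
    total = 0
    for partition in multiset_partitions(list(range(n))):
        term = 1
        for block in partition:
            term *= kappa[len(block) - 1]
        total += term
    return total

# With every cumulant equal to 1 (the Poisson(1) case), the n-th moment
# counts the partitions themselves: the Bell numbers 1, 2, 5, 15, 52.
print([moment_from_cumulants(n, [1] * n) for n in range(1, 6)])
```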


Cumulants and combinatorics

Further connection between cumulants and combinatorics can be found in the work of Gian-Carlo Rota, where links to invariant theory, symmetric functions, and binomial sequences are studied via umbral calculus.


Joint cumulants

The joint cumulant of several random variables X_1,\ldots,X_n is defined by a similar cumulant generating function
:K(t_1,t_2,\dots,t_n)=\log \operatorname{E}\left(e^{\sum_{j=1}^n t_j X_j}\right).
A consequence is that
:\kappa(X_1,\dots,X_n) =\sum_\pi (|\pi|-1)!\,(-1)^{|\pi|-1}\prod_{B\in\pi}\operatorname{E}\left(\prod_{i\in B}X_i\right)
where \pi runs through the list of all partitions of \{1,\ldots,n\}, B runs through the list of all blocks of the partition \pi, and |\pi| is the number of parts in the partition. For example,
:\kappa(X,Y)=\operatorname E(XY) - \operatorname E(X) \operatorname E(Y),
is the covariance, and
:\kappa(X,Y,Z)=\operatorname E(XYZ) - \operatorname E(XY) \operatorname E(Z) - \operatorname E(XZ) \operatorname E(Y) - \operatorname E(YZ) \operatorname E(X) + 2\operatorname E(X)\operatorname E(Y)\operatorname E(Z).
If any of these random variables are identical, e.g. if Y = X, then the same formulae apply, e.g.
:\kappa(X,X,Z)=\operatorname E(X^2Z) -2\operatorname E(XZ)\operatorname E(X) - \operatorname E(X^2)\operatorname E(Z) + 2\operatorname E(X)^2\operatorname E(Z),
although for such repeated variables there are more concise formulae. For zero-mean random vectors,
:\kappa(X,Y,Z) = \operatorname E(XYZ),
:\kappa(X,Y,Z,W) = \operatorname E(XYZW) - \operatorname E(XY) \operatorname E(ZW) - \operatorname E(XZ) \operatorname E(YW) - \operatorname E(XW) \operatorname E(YZ).
The joint cumulant of just one random variable is its expected value, and that of two random variables is their covariance. If some of the random variables are independent of all of the others, then any cumulant involving two (or more) independent random variables is zero. If all ''n'' random variables are the same, then the joint cumulant is the ''n''-th ordinary cumulant.

The combinatorial meaning of the expression of moments in terms of cumulants is easier to understand than that of cumulants in terms of moments:
:\operatorname E(X_1\cdots X_n)=\sum_\pi\prod_{B\in\pi}\kappa(X_i : i \in B).
For example:
:\operatorname E(XYZ) = \kappa(X,Y,Z) + \kappa(X,Y)\kappa(Z) + \kappa(X,Z)\kappa(Y) + \kappa(Y,Z)\kappa(X) + \kappa(X)\kappa(Y)\kappa(Z).
Another important property of joint cumulants is multilinearity:
:\kappa(X+Y,Z_1,Z_2,\dots) = \kappa(X,Z_1,Z_2,\ldots) + \kappa(Y,Z_1,Z_2,\ldots).
Just as the second cumulant is the variance, the joint cumulant of just two random variables is the covariance. The familiar identity
:\operatorname{var}(X+Y) = \operatorname{var}(X) + 2\operatorname{cov}(X,Y) + \operatorname{var}(Y)
generalizes to cumulants:
:\kappa_n(X+Y)=\sum_{j=0}^n \binom{n}{j} \kappa(\,\underbrace{X,\dots,X}_{j}, \underbrace{Y,\dots,Y}_{n-j}\,).


Conditional cumulants and the law of total cumulance

The law of total expectation and the law of total variance generalize naturally to conditional cumulants. The case n = 3, expressed in the language of (central) moments rather than that of cumulants, says
:\mu_3(X) = \operatorname E(\mu_3(X\mid Y)) + \mu_3(\operatorname E(X\mid Y)) + 3 \operatorname{cov}(\operatorname E(X\mid Y), \operatorname{var}(X\mid Y)).
In general,
:\kappa(X_1,\dots,X_n)=\sum_\pi \kappa(\kappa(X_{\pi_1}\mid Y), \dots, \kappa(X_{\pi_b}\mid Y))
where
* the sum is over all partitions \pi of the set \{1,\ldots,n\} of indices, and
* \pi_1,\ldots,\pi_b are all of the "blocks" of the partition \pi; the expression \kappa(X_{\pi_m}\mid Y) indicates the joint cumulant of the random variables whose indices are in that block of the partition.
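A Monte Carlo sketch (an addition assuming NumPy) of the n = 3 identity, for an exponential scale mixture where the conditional cumulants are known exactly:

```python
# Sketch of the n = 3 law of total cumulance: Y ~ Bernoulli(1/2) picks an
# exponential scale theta in {1, 2}; then E[X|Y] = theta, var(X|Y) = theta^2,
# and mu_3(X|Y) = 2*theta^3, so both sides can be computed and compared.
import numpy as np

rng = np.random.default_rng(4)
N = 2_000_000
theta = np.where(rng.random(N) < 0.5, 1.0, 2.0)
x = rng.exponential(theta)             # X | Y=theta ~ Exp(scale theta)

def k3(z):
    return np.mean((z - z.mean())**3)  # third cumulant = third central moment

rhs = np.mean(2 * theta**3) + k3(theta) + 3 * np.cov(theta, theta**2)[0, 1]
print(k3(x), rhs)                      # both approximately 11.25
```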


Relation to statistical physics

In statistical physics many extensive quantities, that is, quantities that are proportional to the volume or size of a given system, are related to cumulants of random variables. The deep connection is that in a large system an extensive quantity like the energy or number of particles can be thought of as the sum of (say) the energy associated with a number of nearly independent regions. The fact that the cumulants of these nearly independent random variables will (nearly) add makes it reasonable that extensive quantities should be expected to be related to cumulants.

A system in equilibrium with a thermal bath at temperature T has a fluctuating internal energy E, which can be considered a random variable drawn from a distribution E\sim p(E). The partition function of the system is
:Z(\beta) = \langle\exp(-\beta E)\rangle,
where \beta = 1/(kT), k is Boltzmann's constant, and the notation \langle A \rangle has been used rather than \operatorname{E}[A] for the expectation value to avoid confusion with the energy, E. Hence the first and second cumulant for the energy E give the average energy and heat capacity.
:\langle E \rangle_c = \frac{\partial \log Z}{\partial (-\beta)} = \langle E \rangle
:\langle E^2 \rangle_c = \frac{\partial^2 \log Z}{\partial (-\beta)^2} = kT^2 \frac{\partial \langle E \rangle}{\partial T} = kT^2 C
The Helmholtz free energy expressed in terms of
:F(\beta) = -\beta^{-1}\log Z(\beta)
further connects thermodynamic quantities with the cumulant generating function for the energy. Thermodynamic properties that are derivatives of the free energy, such as its internal energy, entropy, and specific heat capacity, all can be readily expressed in terms of these cumulants. Other free energies can be functions of other variables such as the magnetic field or chemical potential \mu, e.g.
:\Omega=-\beta^{-1}\log(\langle \exp(-\beta E -\beta\mu N) \rangle),
where N is the number of particles and \Omega is the grand potential. Again the close relationship between the definition of the free energy and the cumulant generating function implies that various derivatives of this free energy can be written in terms of joint cumulants of E and N.
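As a small symbolic sketch (an addition assuming SymPy), a two-level system with energies 0 and \epsilon makes the correspondence explicit: \log Z(\beta) is the cumulant generating function of E in the variable -\beta:

```python
# Sketch: for a two-level system, Z(beta) = 1 + exp(-beta*eps).
# Derivatives of log Z with respect to -beta give the energy cumulants:
# the mean energy and the energy variance (proportional to heat capacity).
import sympy as sp

beta, eps = sp.symbols('beta epsilon', positive=True)
logZ = sp.log(1 + sp.exp(-beta * eps))

E_mean = -sp.diff(logZ, beta)       # <E>     = d log Z / d(-beta)
E_var = sp.diff(logZ, beta, 2)      # <E^2>_c = d^2 log Z / d(-beta)^2
print(sp.simplify(E_mean))          # epsilon/(exp(beta*epsilon) + 1)
print(sp.simplify(E_var))
```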


History

The history of cumulants is discussed by Anders Hald. Cumulants were first introduced by Thorvald N. Thiele, in 1889, who called them ''semi-invariants''. They were first called ''cumulants'' in a 1932 paper by Ronald Fisher and John Wishart. Fisher was publicly reminded of Thiele's work by Neyman, who also notes previous published citations of Thiele brought to Fisher's attention. Stephen Stigler has said that the name ''cumulant'' was suggested to Fisher in a letter from Harold Hotelling. In a paper published in 1929, Fisher had called them ''cumulative moment functions''. The partition function in statistical physics was introduced by Josiah Willard Gibbs in 1901. The free energy is often called Gibbs free energy. In statistical mechanics, cumulants are also known as Ursell functions relating to a publication in 1927.


Cumulants in generalized settings


Formal cumulants

More generally, the cumulants of a sequence \{ m_n : n = 1, 2, 3, \ldots \}, not necessarily the moments of any probability distribution, are, by definition,
:1+\sum_{n=1}^\infty \frac{m_n t^n}{n!} = \exp \left( \sum_{n=1}^\infty \frac{\kappa_n t^n}{n!} \right),
where the values of \kappa_n for n = 1, 2, 3, \ldots are found formally, i.e., by algebra alone, in disregard of questions of whether any series converges. All of the difficulties of the "problem of cumulants" are absent when one works formally. The simplest example is that the second cumulant of a probability distribution must always be nonnegative, and is zero only if all of the higher cumulants are zero. Formal cumulants are subject to no such constraints.


Bell numbers

In combinatorics, the ''n''-th Bell number is the number of partitions of a set of size ''n''. All of the cumulants of the sequence of Bell numbers are equal to 1. The Bell numbers are the moments of the Poisson distribution with expected value 1.


Cumulants of a polynomial sequence of binomial type

For any sequence \{ \kappa_n : n = 1, 2, 3, \ldots \} of scalars in a field of characteristic zero, being considered formal cumulants, there is a corresponding sequence \{ \mu'_n : n = 1, 2, 3, \ldots \} of formal moments, given by the polynomials above. For those polynomials, construct a polynomial sequence in the following way. Out of the polynomial
:\begin{align} \mu'_6 = & \kappa_6 + 6\kappa_5\kappa_1 + 15\kappa_4\kappa_2 + 15\kappa_4\kappa_1^2 + 10\kappa_3^2+60\kappa_3\kappa_2\kappa_1 + 20\kappa_3\kappa_1^3 \\ & + 15\kappa_2^3 + 45\kappa_2^2\kappa_1^2 + 15\kappa_2\kappa_1^4 + \kappa_1^6 \end{align}
make a new polynomial in these plus one additional variable x:
:\begin{align} p_6(x) = & \kappa_6 \,x + (6\kappa_5\kappa_1 + 15\kappa_4\kappa_2 + 10\kappa_3^2)\,x^2 + (15\kappa_4\kappa_1^2 + 60\kappa_3\kappa_2\kappa_1 + 15\kappa_2^3)\,x^3 \\ & + (45\kappa_2^2\kappa_1^2)\,x^4+(15\kappa_2\kappa_1^4)\,x^5 +(\kappa_1^6)\,x^6, \end{align}
and then generalize the pattern. The pattern is that the numbers of blocks in the aforementioned partitions are the exponents on x. Each coefficient is a polynomial in the cumulants; these are the Bell polynomials, named after Eric Temple Bell. This sequence of polynomials is of binomial type. In fact, no other sequences of binomial type exist; every polynomial sequence of binomial type is completely determined by its sequence of formal cumulants.


Free cumulants

In the above moment-cumulant formula
:\operatorname E(X_1\cdots X_n)=\sum_\pi\prod_{B\in\pi}\kappa(X_i : i\in B)
for joint cumulants, one sums over ''all'' partitions of the set \{1,\ldots,n\}. If instead, one sums only over the noncrossing partitions, then, by solving these formulae for the \kappa in terms of the moments, one gets free cumulants rather than the conventional cumulants treated above. These free cumulants were introduced by Roland Speicher and play a central role in free probability theory. In that theory, rather than considering independence of random variables, defined in terms of tensor products of algebras of random variables, one considers instead free independence of random variables, defined in terms of free products of algebras.

The ordinary cumulants of degree higher than 2 of the normal distribution are zero. The ''free'' cumulants of degree higher than 2 of the Wigner semicircle distribution are zero. This is one respect in which the role of the Wigner distribution in free probability theory is analogous to that of the normal distribution in conventional probability theory.
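The distinction can be made concrete in code; this sketch (an addition assuming SymPy's partition enumerator) restricts the partition sum to noncrossing partitions and, with free cumulants \kappa_2 = 1 and all others zero, recovers the Catalan numbers, the moments of the Wigner semicircle law:

```python
# Sketch: sum over noncrossing partitions only. With kappa_2 = 1 and all
# other free cumulants 0, the 2m-th moment counts noncrossing pairings,
# giving the Catalan numbers 1, 2, 5, 14 (moments of the semicircle law).
from sympy.utilities.iterables import multiset_partitions

def noncrossing(partition):
    # Blocks B, C cross if some element of C lies strictly between two
    # elements of B while another element of C lies outside that interval.
    for i, B in enumerate(partition):
        for C in partition[i + 1:]:
            for a in B:
                for b in B:
                    if any(a < c < b for c in C) and any(c < a or c > b for c in C):
                        return False
    return True

def free_moment(n, kappa):
    total = 0
    for pi in multiset_partitions(list(range(n))):
        if noncrossing(pi):
            term = 1
            for block in pi:
                term *= kappa[len(block) - 1]
            total += term
    return total

print([free_moment(2 * m, [0, 1, 0, 0, 0, 0, 0, 0]) for m in range(1, 5)])
# [1, 2, 5, 14]
```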


See also

* Entropic value at risk
* Cumulant generating function from a multiset
* Cornish–Fisher expansion
* Edgeworth expansion
* Polykay
* k-statistic, a minimum-variance unbiased estimator of a cumulant
* Ursell function
* Total position spread tensor, an application of cumulants to analyse the electronic wave function in quantum chemistry

