Stable Distribution
In probability theory, a distribution is said to be stable if a linear combination of two independent random variables with this distribution has the same distribution, up to location and scale parameters. A random variable is said to be stable if its distribution is stable. The stable distribution family is also sometimes referred to as the Lévy alpha-stable distribution, after Paul Lévy, the first mathematician to have studied it (B. Mandelbrot, "The Pareto–Lévy Law and the Distribution of Income", International Economic Review, 1960, https://www.jstor.org/stable/2525289).

Of the four parameters defining the family, most attention has been focused on the stability parameter, \alpha (see panel). Stable distributions have 0 < \alpha \leq 2, with the upper bound corresponding to the normal distribution, and \alpha = 1 to the Cauchy distribution. The distributions have undefined variance for \alpha < 2, and undefined mean for \alpha \leq 1. The importance of stable probability distributions is that they are "attractors" for properly normed sums of independent and identically distributed (iid) random variables. The normal distribution defines a family of stable distributions. By the classical central limit theorem, the properly normed sum of a set of random variables, each with finite variance, will tend toward a normal distribution as the number of variables increases. Without the finite variance assumption, the limit may be a stable distribution that is not normal. Mandelbrot referred to such distributions as "stable Paretian distributions", after Vilfredo Pareto. In particular, he referred to those maximally skewed in the positive direction with 1 < \alpha < 2 as "Pareto–Lévy distributions", which he regarded as better descriptions of stock and commodity prices than normal distributions (Mandelbrot, B., "New methods in statistical economics", The Journal of Political Economy, 71 #5, 421–440, 1963).


Definition

A non-degenerate distribution is a stable distribution if it satisfies the following property: if X_1 and X_2 are independent copies of a random variable X, then X is stable if for any constants a > 0 and b > 0 the random variable aX_1 + bX_2 has the same distribution as cX + d for some constants c > 0 and d.

Since the normal distribution, the Cauchy distribution, and the Lévy distribution all have the above property, it follows that they are special cases of stable distributions. Such distributions form a four-parameter family of continuous probability distributions parametrized by location and scale parameters \mu and c, respectively, and two shape parameters \beta and \alpha, roughly corresponding to measures of asymmetry and concentration, respectively (see the figures).

The characteristic function \varphi(t) of any probability distribution is just the Fourier transform of its probability density function f(x). The density function is therefore the inverse Fourier transform of the characteristic function:

f(x) = \frac{1}{2\pi}\int_{-\infty}^\infty \varphi(t) e^{-ixt}\,dt.

Although the probability density function for a general stable distribution cannot be written analytically, the general characteristic function can be expressed analytically. A random variable X is called stable if its characteristic function can be written as

\varphi(t; \alpha, \beta, c, \mu) = \exp\left(it\mu - |ct|^\alpha \left(1 - i\beta \sgn(t)\Phi\right)\right)

where \sgn(t) is just the sign of t and

\Phi = \begin{cases} \tan\left(\frac{\pi\alpha}{2}\right) & \alpha \neq 1 \\ -\frac{2}{\pi}\log|t| & \alpha = 1 \end{cases}

Here \mu \in \mathbb{R} is a shift parameter, and \beta \in [-1, 1], called the skewness parameter, is a measure of asymmetry. Notice that in this context the usual skewness is not well defined, as for \alpha < 2 the distribution does not admit 2nd or higher moments, and the usual skewness definition is the 3rd central moment.

The reason this gives a stable distribution is that the characteristic function for the sum of two independent random variables equals the product of the two corresponding characteristic functions. Adding two random variables from a stable distribution gives something with the same values of \alpha and \beta, but possibly different values of \mu and c.

Not every function is the characteristic function of a legitimate probability distribution (that is, one whose cumulative distribution function is real and goes from 0 to 1 without decreasing), but the characteristic functions given above will be legitimate so long as the parameters are in their ranges. The value of the characteristic function at some value t is the complex conjugate of its value at -t, as it should be so that the probability distribution function will be real.

In the simplest case \beta = 0, the characteristic function is just a stretched exponential function; the distribution is symmetric about \mu and is referred to as a (Lévy) symmetric alpha-stable distribution, often abbreviated SαS. When \alpha < 1 and \beta = 1, the distribution is supported by [\mu, \infty). The parameter c > 0 is a scale factor which is a measure of the width of the distribution, while \alpha is the exponent or index of the distribution and specifies the asymptotic behavior of the distribution.
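The characteristic function above is easy to evaluate numerically. The following sketch (an added illustration, not part of the original text) implements it with NumPy and checks two facts stated above: \varphi(-t) is the complex conjugate of \varphi(t), and the \alpha = 1, \beta = 0 case reduces to \exp(it\mu - |ct|), the Cauchy characteristic function.

```python
import numpy as np

def stable_cf(t, alpha, beta, c=1.0, mu=0.0):
    """Characteristic function of a stable law in the common (S1) parametrization."""
    t = np.asarray(t, dtype=float)
    if alpha != 1:
        Phi = np.tan(np.pi * alpha / 2)
    else:
        # Phi = -(2/pi) log|t|; it only enters multiplied by beta and |ct|^alpha
        with np.errstate(divide="ignore"):
            Phi = -(2 / np.pi) * np.log(np.abs(t))
    return np.exp(1j * t * mu
                  - np.abs(c * t) ** alpha * (1 - 1j * beta * np.sign(t) * Phi))

t = np.linspace(0.1, 5, 50)
# Hermitian symmetry: phi(-t) == conj(phi(t)), so the density is real
assert np.allclose(stable_cf(-t, 1.5, 0.7), np.conj(stable_cf(t, 1.5, 0.7)))
# alpha = 1, beta = 0 gives the Cauchy characteristic function exp(-|t|)
assert np.allclose(stable_cf(t, 1.0, 0.0), np.exp(-np.abs(t)))
```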


Parametrizations

The above definition is only one of the parametrizations in use for stable distributions; it is the most common but is not continuous in the parameters at \alpha = 1. A continuous parametrization is

\varphi(t; \alpha, \beta, \gamma, \delta) = \exp\left(it\delta - |\gamma t|^\alpha \left(1 - i\beta \sgn(t)\Phi\right)\right)

where:

\Phi = \begin{cases} \left(|\gamma t|^{1-\alpha} - 1\right) \tan\left(\tfrac{\pi\alpha}{2}\right) & \alpha \neq 1 \\ -\frac{2}{\pi}\log|\gamma t| & \alpha = 1 \end{cases}

The ranges of \alpha and \beta are the same as before, \gamma (like c) should be positive, and \delta (like \mu) should be real.

In either parametrization one can make a linear transformation of the random variable to get a random variable whose density is f(y; \alpha, \beta, 1, 0). In the first parametrization, this is done by defining the new variable:

y = \begin{cases} \frac{x - \mu}{\gamma} & \alpha \neq 1 \\ \frac{x - \mu}{\gamma} - \beta\frac{2}{\pi}\ln\gamma & \alpha = 1 \end{cases}

For the second parametrization, we simply use

y = \frac{x - \delta}{\gamma}

no matter what \alpha is. In the first parametrization, if the mean exists (that is, \alpha > 1) then it is equal to \mu, whereas in the second parametrization when the mean exists it is equal to \delta - \beta\gamma\tan\left(\tfrac{\pi\alpha}{2}\right).


The distribution

A stable distribution is therefore specified by the above four parameters. It can be shown that any non-degenerate stable distribution has a smooth (infinitely differentiable) density function. If f(x; \alpha, \beta, c, \mu) denotes the density of X and Y is the sum of independent copies of X:

Y = \sum_{i=1}^N k_i (X_i - \mu)

then Y has the density \tfrac{1}{s} f(y/s; \alpha, \beta, c, 0) with

s = \left(\sum_{i=1}^N |k_i|^\alpha\right)^{\frac{1}{\alpha}}

The asymptotic behavior is described, for \alpha < 2, by:

f(x) \sim \frac{1}{|x|^{1+\alpha}} \left(c^\alpha (1 + \sgn(x)\beta) \sin\left(\frac{\pi\alpha}{2}\right) \frac{\Gamma(\alpha+1)}{\pi}\right)

where \Gamma is the Gamma function (except that when \alpha \geq 1 and \beta = \pm 1, the tail does not vanish to the left or right, resp., of \mu, although the above expression is 0). This "heavy tail" behavior causes the variance of stable distributions to be infinite for all \alpha < 2. This property is illustrated in the log–log plots below. When \alpha = 2, the distribution is Gaussian (see below), with tails asymptotic to \exp(-x^2/4c^2)/(2c\sqrt{\pi}).
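The tail formula can be sanity-checked against the one stable density whose tail is known exactly in closed form, the Cauchy case \alpha = 1, \beta = 0, c = 1, \mu = 0 (an added illustration):

```python
import math

def tail_asymptotic(x, alpha, beta, c=1.0):
    """Leading tail term: c^a (1 + sgn(x) b) sin(pi a/2) Gamma(a+1)/pi / |x|^(1+a)."""
    sgn = 1.0 if x > 0 else -1.0
    return (c**alpha * (1 + sgn * beta) * math.sin(math.pi * alpha / 2)
            * math.gamma(alpha + 1) / math.pi) / abs(x) ** (1 + alpha)

def cauchy_pdf(x):
    return 1.0 / (math.pi * (1 + x * x))

# For the Cauchy case the asymptotic is 1/(pi x^2); at x = 1000 the two agree closely.
x = 1000.0
assert abs(tail_asymptotic(x, 1.0, 0.0) / cauchy_pdf(x) - 1) < 1e-5
```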


One-sided stable distribution and stable count distribution

When \alpha < 1 and \beta = 1, the distribution is supported by [\mu, \infty). This family is called the one-sided stable distribution. Its standard distribution (\mu = 0) is defined as

L_\alpha(x) = f\left(x; \alpha, 1, \cos\left(\tfrac{\pi\alpha}{2}\right)^{\frac{1}{\alpha}}, 0\right),

where \alpha < 1. Let q = \exp(-i\alpha\pi/2); its characteristic function is \varphi(t; \alpha) = \exp\left(-q|t|^\alpha\right) (for t \geq 0, with \varphi(-t) = \overline{\varphi(t)}). Thus the integral form of its PDF is (note: \operatorname{Im}(q) < 0)

\begin{aligned} L_\alpha(x) &= \frac{1}{\pi}\Re\left[\int_0^\infty e^{-itx} e^{-q t^\alpha}\,dt\right] \\ &= \frac{2}{\pi}\int_0^\infty e^{-\operatorname{Re}(q)\,t^\alpha} \sin(tx) \sin(-\operatorname{Im}(q)\,t^\alpha)\,dt, \text{ or} \\ &= \frac{2}{\pi}\int_0^\infty e^{-\operatorname{Re}(q)\,t^\alpha} \cos(tx) \cos(\operatorname{Im}(q)\,t^\alpha)\,dt. \end{aligned}

The double-sine integral is more effective for very small x.

Consider the Lévy sum Y = \sum_{i=1}^N X_i where X_i \sim L_\alpha(x); then Y has the density \frac{1}{\nu} L_\alpha\left(\frac{y}{\nu}\right) where \nu = N^{1/\alpha}. Setting x = 1, we arrive at the stable count distribution. Its standard distribution is defined as

\mathfrak{N}_\alpha(\nu) = \frac{\alpha}{\Gamma\left(\frac{1}{\alpha}\right)} \frac{1}{\nu} L_\alpha\left(\frac{1}{\nu}\right),

where \nu > 0 and \alpha < 1. The stable count distribution is the conjugate prior of the one-sided stable distribution. Its location-scale family is defined as

\mathfrak{N}_\alpha(\nu; \nu_0, \theta) = \frac{\alpha}{\Gamma\left(\frac{1}{\alpha}\right)} \frac{1}{\nu - \nu_0} L_\alpha\left(\frac{\theta}{\nu - \nu_0}\right),

where \nu > \nu_0, \theta > 0, and \alpha < 1. It is also a one-sided distribution supported by [\nu_0, \infty). The location parameter \nu_0 is the cut-off location, while \theta defines its scale.

When \alpha = \frac{1}{2}, L_{\frac{1}{2}}(x) is the Lévy distribution, which is an inverse gamma distribution. Thus \mathfrak{N}_{\frac{1}{2}}(\nu; \nu_0, \theta) is a shifted gamma distribution of shape 3/2 and scale 4\theta:

\mathfrak{N}_{\frac{1}{2}}(\nu; \nu_0, \theta) = \frac{1}{4\sqrt{\pi}\,\theta^{3/2}} (\nu - \nu_0)^{1/2} e^{-\frac{\nu - \nu_0}{4\theta}},

where \nu > \nu_0, \theta > 0. Its mean is \nu_0 + 6\theta and its standard deviation is \sqrt{24}\,\theta. It is hypothesized that VIX is distributed like \mathfrak{N}_{\frac{1}{2}}(\nu; \nu_0, \theta) with \nu_0 = 10.4 and \theta = 1.6 (see Section 7). Thus the stable count distribution is the first-order marginal distribution of a volatility process. In this context, \nu_0 is called the "floor volatility".

Another approach to derive the stable count distribution is to use the Laplace transform of the one-sided stable distribution (Section 2.4):

\int_0^\infty e^{-zx} L_\alpha(x)\,dx = e^{-z^\alpha},

where \alpha < 1. Let x = 1/\nu, and one can decompose the integral on the left hand side as a product distribution of a standard Laplace distribution and a standard stable count distribution:

\int_0^\infty \frac{1}{\nu} \left(\frac{1}{2} e^{-\frac{|z|}{\nu}}\right) \left(\frac{\alpha}{\Gamma\left(\frac{1}{\alpha}\right)} \frac{1}{\nu} L_\alpha\left(\frac{1}{\nu}\right)\right) d\nu = \frac{1}{2} \frac{\alpha}{\Gamma\left(\frac{1}{\alpha}\right)} e^{-|z|^\alpha},

where \alpha < 1. This is called the "lambda decomposition" (see Section 4) since the right hand side was named the "symmetric lambda distribution" in Lihn's former works. However, it has several more popular names, such as the "exponential power distribution" or the "generalized error/normal distribution", often referred to when \alpha > 1. The n-th moment of \mathfrak{N}_\alpha(\nu) is proportional to the -(n+1)-th moment of L_\alpha(x), and all positive moments are finite. This in a way solves the thorny issue of diverging moments in the stable distribution.
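The integral form of the one-sided density can be tested against the closed-form \alpha = 1/2 case, the Lévy distribution with scale 1/2 (an added illustration; the integrand below is the real part of e^{-itx} e^{-q t^\alpha}):

```python
import numpy as np
from scipy.integrate import quad

def one_sided_pdf(x, alpha):
    """L_alpha(x) via the real part of the Fourier integral of exp(-q t^alpha)."""
    q = np.exp(-1j * alpha * np.pi / 2)
    integrand = lambda t: (np.exp(-q.real * t**alpha)
                           * np.cos(t * x + q.imag * t**alpha))
    # The exp(-Re(q) t^alpha) envelope makes the tail beyond t = 500 negligible
    val, _ = quad(integrand, 0, 500, limit=500)
    return val / np.pi

def levy_pdf(x):
    """Closed form for alpha = 1/2: the Levy distribution with scale 1/2."""
    return x ** -1.5 * np.exp(-1 / (4 * x)) / (2 * np.sqrt(np.pi))

for x in (0.5, 1.0, 3.0):
    assert abs(one_sided_pdf(x, 0.5) - levy_pdf(x)) < 1e-4
```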


Properties

* All stable distributions are infinitely divisible.
* With the exception of the normal distribution (\alpha = 2), stable distributions are leptokurtotic and heavy-tailed distributions.
* Closure under convolution: stable distributions are closed under convolution for a fixed value of \alpha. Since convolution is equivalent to multiplication of the Fourier-transformed function, it follows that the product of two stable characteristic functions with the same \alpha will yield another such characteristic function. The product of two stable characteristic functions is given by:

\exp\left(it\mu_1 + it\mu_2 - |c_1 t|^\alpha - |c_2 t|^\alpha + i\beta_1 |c_1 t|^\alpha \sgn(t)\Phi + i\beta_2 |c_2 t|^\alpha \sgn(t)\Phi\right)

Since \Phi is not a function of the \mu, c or \beta variables, it follows that these parameters for the convolved function are given by:

\begin{aligned} \mu &= \mu_1 + \mu_2 \\ c &= \left(c_1^\alpha + c_2^\alpha\right)^{\frac{1}{\alpha}} \\ \beta &= \frac{\beta_1 c_1^\alpha + \beta_2 c_2^\alpha}{c_1^\alpha + c_2^\alpha} \end{aligned}

In each case, it can be shown that the resulting parameters lie within the required intervals for a stable distribution.
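The closure rules can be verified directly on the characteristic function, using the \alpha \neq 1 form in which \Phi = \tan(\pi\alpha/2) does not depend on t (an added check; the parameter values are arbitrary):

```python
import numpy as np

def stable_cf(t, alpha, beta, c, mu):
    """Stable characteristic function, common parametrization, alpha != 1."""
    Phi = np.tan(np.pi * alpha / 2)
    return np.exp(1j * t * mu - np.abs(c * t) ** alpha
                  * (1 - 1j * beta * np.sign(t) * Phi))

alpha = 1.5
b1, c1, m1 = 0.5, 1.0, 0.3
b2, c2, m2 = -0.8, 2.0, -1.0

# Parameters of the convolution predicted by the closure rules
mu = m1 + m2
c = (c1**alpha + c2**alpha) ** (1 / alpha)
beta = (b1 * c1**alpha + b2 * c2**alpha) / (c1**alpha + c2**alpha)

t = np.linspace(-4, 4, 81)
assert np.allclose(stable_cf(t, alpha, b1, c1, m1) * stable_cf(t, alpha, b2, c2, m2),
                   stable_cf(t, alpha, beta, c, mu))
```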


A generalized central limit theorem

Another important property of stable distributions is the role that they play in a generalized central limit theorem. The central limit theorem states that the sum of a number of independent and identically distributed (i.i.d.) random variables with finite non-zero variances will tend to a normal distribution as the number of variables grows.

A generalization due to Gnedenko and Kolmogorov states that the sum of a number of random variables with symmetric distributions having power-law tails (Paretian tails), decreasing as |x|^{-\alpha-1} where 0 < \alpha \leq 2 (and therefore having infinite variance), will tend to a stable distribution f(x; \alpha, 0, c, 0) as the number of summands grows. If \alpha > 2 then the sum converges to a stable distribution with stability parameter equal to 2, i.e. a Gaussian distribution.

There are other possibilities as well. For example, if the characteristic function of the random variable is asymptotic to 1 + a|t|^\alpha \ln|t| for small t (positive or negative), then we may ask how t varies with n when the value of the characteristic function for the sum of n such random variables equals a given value u:

\varphi_{\text{sum}}(t) = \varphi(t)^n = u

Assuming for the moment that t \to 0, we take the limit of the above as n \to \infty:

\ln u = \lim_{n\to\infty} n \ln\varphi(t) = \lim_{n\to\infty} n a |t|^\alpha \ln|t|.

Therefore:

\ln(-\ln u) = \lim_{n\to\infty} \ln\left(-n a |t|^\alpha \ln|t|\right) = \lim_{n\to\infty}\left[\ln n + \ln a + \alpha \ln|t| + \ln(-\ln|t|)\right]

This shows that \ln|t| is asymptotic to -\tfrac{1}{\alpha}\ln n, so using the previous equation we have

|t| \sim \left(\frac{\alpha(-\ln u)}{a\, n \ln n}\right)^{\frac{1}{\alpha}}.

This implies that the sum divided by \left(\frac{a\, n \ln n}{\alpha}\right)^{\frac{1}{\alpha}} has a characteristic function whose value at some t' goes to u (as n increases) when t' = (-\ln u)^{\frac{1}{\alpha}}. In other words, the characteristic function converges pointwise to \exp(-(t')^\alpha) and therefore by Lévy's continuity theorem the sum divided by \left(\frac{a\, n \ln n}{\alpha}\right)^{\frac{1}{\alpha}} converges in distribution to the symmetric alpha-stable distribution with stability parameter \alpha and scale parameter 1.

This can be applied to a random variable whose tails decrease as |x|^{-3}. This random variable has a mean but the variance is infinite. Let us take the following distribution:

f(x) = \begin{cases} \frac{1}{3} & |x| \leq 1 \\ \frac{1}{3} |x|^{-3} & |x| > 1 \end{cases}

We can write this as

f(x) = \int_1^\infty \frac{2}{w^3}\, \frac{1}{w}\, h\left(\frac{x}{w}\right) dw

where

h\left(\frac{x}{w}\right) = \begin{cases} \frac{1}{2} & \left|\frac{x}{w}\right| < 1 \\ 0 & \left|\frac{x}{w}\right| > 1 \end{cases}

The characteristic function of the probability distribution \frac{1}{w} h\left(\frac{x}{w}\right) (the uniform distribution on (-w, w)) is \tfrac{\sin(wt)}{wt}, so the characteristic function for f(x) is

\varphi(t) = \int_1^\infty \frac{2}{w^3} \frac{\sin(wt)}{wt}\, dw

and we can calculate:

\varphi(t) - 1 = \int_1^\infty \frac{2}{w^3} \left[\frac{\sin(wt)}{wt} - 1\right] dw = 2t^2 \int_t^\infty \frac{1}{y^3} \left[\frac{\sin y}{y} - 1\right] dy \quad (t > 0),

substituting y = wt. Since \frac{\sin y}{y} - 1 = -\frac{y^2}{6} + \mathcal{O}(y^4) near y = 0, the integrand behaves like -\frac{1}{6y} near the lower limit, so

\int_t^\infty \frac{1}{y^3} \left[\frac{\sin y}{y} - 1\right] dy = \frac{1}{6}\ln t + K + \mathcal{O}(t^2)

for some constant K, and therefore

\varphi(t) - 1 = \frac{t^2}{3}\ln|t| + C_3 t^2 + \mathcal{O}(t^4)

where C_3 = 2K is a constant.

Therefore, \varphi(t) \sim 1 + \frac{t^2}{3}\ln|t| and, according to what was said above (and the fact that the variance of f(x; 2, 0, 1, 0) is 2), the sum of n instances of this random variable, divided by \sqrt{\frac{n \ln n}{3}}, will converge in distribution to a Gaussian distribution with variance 1. But the variance at any particular n will still be infinite. Note that the width of the limiting distribution grows faster than in the case where the random variable has a finite variance (in which case the width grows as the square root of n). The average, obtained by dividing the sum by n, tends toward a Gaussian whose width approaches zero as n increases, in accordance with the law of large numbers.
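The small-t behavior \varphi(t) - 1 \approx \frac{t^2}{3}\ln|t| + C_3 t^2 can be checked numerically (an added illustration): after the substitution y = wt, the quantity (\varphi(t) - 1)/t^2 - \frac{1}{3}\ln t should approach the constant C_3 as t \to 0.

```python
import numpy as np
from scipy.integrate import quad

def g(t):
    """(phi(t) - 1)/t^2 - (1/3) ln t, which should tend to the constant C3."""
    # phi(t) - 1 = 2 t^2 * integral_t^inf (sin(y)/y - 1)/y^3 dy   (t > 0)
    integrand = lambda y: (np.sin(y) / y - 1) / y**3
    val, _ = quad(integrand, t, np.inf, limit=300)
    return 2 * val - np.log(t) / 3

# Two estimates of C3 at different small t agree closely,
# confirming the (t^2/3) ln|t| leading term
assert abs(g(1e-2) - g(1e-3)) < 1e-3
```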


Special cases

There is no general analytic solution for the form of f(x). There are, however, three special cases which can be expressed in terms of elementary functions, as can be seen by inspection of the characteristic function:

* For \alpha = 2 the distribution reduces to a Gaussian distribution with variance \sigma^2 = 2c^2 and mean \mu; the skewness parameter \beta has no effect.
* For \alpha = 1 and \beta = 0 the distribution reduces to a Cauchy distribution with scale parameter c and shift parameter \mu.
* For \alpha = 1/2 and \beta = 1 the distribution reduces to a Lévy distribution with scale parameter c and shift parameter \mu.

Note that the above three distributions are also connected, in the following way: a standard Cauchy random variable can be viewed as a mixture of Gaussian random variables (all with mean zero), with the variance being drawn from a standard Lévy distribution. And in fact this is a special case of a more general theorem (see p. 59) which allows any symmetric alpha-stable distribution to be viewed in this way (with the alpha parameter of the mixture distribution equal to twice the alpha parameter of the mixing distribution, and the beta parameter of the mixing distribution always equal to one).

A general closed form expression for stable PDFs with rational values of \alpha is available in terms of Meijer G-functions. Fox H-functions can also be used to express the stable probability density functions. For simple rational numbers, the closed form expression is often in terms of less complicated special functions. Several closed form expressions having rather simple expressions in terms of special functions are available. In the table below, PDFs expressible by elementary functions are indicated by an E and those that are expressible by special functions are indicated by an s.

Some of the special cases are known by particular names:

* For \alpha = 1 and \beta = 1, the distribution is a Landau distribution (L) which has a specific usage in physics under this name.
* For \alpha = 3/2 and \beta = 0 the distribution reduces to a Holtsmark distribution with scale parameter c and shift parameter \mu.

Also, in the limit as c approaches zero or as \alpha approaches zero the distribution will approach a Dirac delta function.
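The Gaussian–Lévy mixture property can be illustrated by simulation (an added sketch; seed and sample size are arbitrary): drawing a variance V from a standard Lévy distribution, realized here as 1/Z^2 for a standard normal Z, and then a normal with that variance reproduces the standard Cauchy law.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Standard Levy variate as the reciprocal of a squared standard normal
v = 1.0 / rng.standard_normal(n) ** 2
# Gaussian with mean 0 and variance v
x = np.sqrt(v) * rng.standard_normal(n)

# Compare the empirical CDF with the standard Cauchy CDF at a few points
for q in (-2.0, -0.5, 0.0, 0.5, 2.0):
    cauchy_cdf = 0.5 + np.arctan(q) / np.pi
    assert abs(np.mean(x <= q) - cauchy_cdf) < 0.005
```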


Series representation

The stable distribution can be restated as the real part of a simpler integral: f(x;\alpha,\beta,c,\mu)=\frac\Re\left \int_0^\infty e^e^\,dt\right Expressing the second exponential as a
Taylor series In mathematics, the Taylor series or Taylor expansion of a function is an infinite sum of terms that are expressed in terms of the function's derivatives at a single point. For most common functions, the function and the sum of its Taylor serie ...
, we have: f(x;\alpha,\beta,c,\mu)=\frac\Re\left \int_0^\infty e^\sum_^\infty\frac\,dt\right/math> where q=c^\alpha(1-i\beta\Phi). Reversing the order of integration and summation, and carrying out the integration yields: f(x;\alpha,\beta,c,\mu)=\frac\Re\left \sum_^\infty\frac\left(\frac\right)^\Gamma(\alpha n+1)\right/math> which will be valid for ''x'' ≠ ''μ'' and will converge for appropriate values of the parameters. (Note that the ''n'' = 0 term which yields a
delta function In mathematics, the Dirac delta distribution ( distribution), also known as the unit impulse, is a generalized function or distribution over the real numbers, whose value is zero everywhere except at zero, and whose integral over the entire ...
in ''x'' − ''μ'', has therefore been dropped.) Expressing the first exponential as a series instead yields another series, in positive powers of ''x'' − ''μ'', which is generally less useful. For the one-sided stable distribution, the above series expansion needs to be modified, since q=\exp(-i\alpha\pi/2) and thus q i^\alpha=1, so the summand has no real part. Instead, the integral of the characteristic function should be carried out on the negative axis, which yields: \begin{align} L_\alpha(x) & = \frac{1}{\pi}\Re\left[\sum_{n=1}^\infty\frac{(-q)^n}{n!}\left(\frac{-i}{x}\right)^{\alpha n+1}\Gamma(\alpha n+1)\right] \\ & = \frac{1}{\pi}\sum_{n=1}^\infty\frac{(-1)^{n+1}\sin(\pi\alpha n)}{n!}\,\Gamma(\alpha n+1)\,x^{-\alpha n-1} \end{align}
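As a numerical sanity check on the one-sided expansion, the sketch below (Python; assuming the normalization in which the one-sided stable law has Laplace transform e^{-s^\alpha}) sums the real series \tfrac{1}{\pi}\sum_{n\ge 1}(-1)^{n+1}\sin(\pi\alpha n)\,\Gamma(\alpha n+1)\,x^{-\alpha n-1}/n! and compares it, for \alpha=\tfrac{1}{2}, against the known closed-form density x^{-3/2}e^{-1/(4x)}/(2\sqrt{\pi}):

```python
from math import gamma, factorial, pi, sin, exp, sqrt

def one_sided_series(x, alpha, n_terms=60):
    """Partial sum of the real series for the one-sided stable density
    L_alpha(x), in the normalization with Laplace transform exp(-s**alpha)."""
    total = 0.0
    for n in range(1, n_terms + 1):
        total += ((-1) ** (n + 1) * sin(pi * alpha * n)
                  * gamma(alpha * n + 1) / factorial(n)
                  * x ** (-alpha * n - 1))
    return total / pi

def levy_half(x):
    """Closed-form one-sided stable density for alpha = 1/2
    (inverse Laplace transform of exp(-sqrt(s)))."""
    return exp(-1.0 / (4.0 * x)) / (2.0 * sqrt(pi) * x ** 1.5)

for x in (0.5, 1.0, 2.0, 5.0):
    print(x, one_sided_series(x, 0.5), levy_half(x))
```

For \alpha < 1 the series converges for every x > 0; for \alpha > 1 it is only asymptotic as x \to \infty, so the truncation point then matters.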


Simulation of stable variables

Simulating sequences of stable random variables is not straightforward, since there are no analytic expressions for the inverse CDF F^{-1}(x) nor for the CDF F(x) itself. All standard approaches, such as the rejection or inversion methods, would require tedious computations. A much more elegant and efficient solution was proposed by Chambers, Mallows and Stuck (CMS), who noticed that a certain integral formula yields the following algorithm: * generate a random variable U uniformly distributed on \left(-\tfrac{\pi}{2},\tfrac{\pi}{2}\right) and an independent exponential random variable W with mean 1; * for \alpha\ne 1 compute: X = \left(1+\zeta^2\right)^{\frac{1}{2\alpha}} \frac{\sin(\alpha(U+\xi))}{(\cos U)^{1/\alpha}} \left(\frac{\cos(U-\alpha(U+\xi))}{W}\right)^{\frac{1-\alpha}{\alpha}}, * for \alpha=1 compute: X = \frac{1}{\xi}\left\{\left(\frac{\pi}{2}+\beta U\right)\tan U - \beta\ln\left(\frac{\frac{\pi}{2}W\cos U}{\frac{\pi}{2}+\beta U}\right)\right\}, where \zeta = -\beta\tan\frac{\pi\alpha}{2}, \qquad \xi =\begin{cases} \frac{1}{\alpha} \arctan(-\zeta) & \alpha \ne 1 \\ \frac{\pi}{2} & \alpha=1 \end{cases} This algorithm yields a random variable X\sim S_\alpha(\beta,1,0). For a detailed proof, see the references. Given the formulas for simulation of a standard stable random variable, we can easily simulate a stable random variable for all admissible values of the parameters \alpha, c, \beta and \mu using the following property: if X \sim S_\alpha(\beta,1,0), then Y = \begin{cases} c X+\mu & \alpha \ne 1 \\ c X+\frac{2}{\pi}\beta c\ln c + \mu & \alpha = 1 \end{cases} is S_\alpha(\beta,c,\mu). For \alpha = 2 (and \beta = 0) the CMS method reduces to the well-known Box–Muller transform for generating
Gaussian Carl Friedrich Gauss (1777–1855) is the eponym of all of the topics listed below. There are over 100 topics all named after this German mathematician and scientist, all in the fields of mathematics, physics, and astronomy. The English eponymo ...
random variables. Many other approaches have been proposed in the literature, including applications of the Bergström and LePage series expansions; see the references. However, the CMS method is regarded as the fastest and the most accurate.
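The CMS recipe above can be sketched in Python (a minimal implementation using only the standard library; parameter validation is omitted, and the function names are this sketch's own):

```python
import math
import random

def stable_standard(alpha, beta, rng=random):
    """One draw from a standard stable law S_alpha(beta, 1, 0)
    via the Chambers-Mallows-Stuck (CMS) method."""
    u = rng.uniform(-math.pi / 2, math.pi / 2)  # U ~ Uniform(-pi/2, pi/2)
    w = rng.expovariate(1.0)                    # W ~ Exp(1), mean 1
    if alpha != 1:
        zeta = -beta * math.tan(math.pi * alpha / 2)
        xi = math.atan(-zeta) / alpha
        return ((1 + zeta ** 2) ** (1 / (2 * alpha))
                * math.sin(alpha * (u + xi)) / math.cos(u) ** (1 / alpha)
                * (math.cos(u - alpha * (u + xi)) / w) ** ((1 - alpha) / alpha))
    xi = math.pi / 2
    return (1 / xi) * ((math.pi / 2 + beta * u) * math.tan(u)
                       - beta * math.log((math.pi / 2) * w * math.cos(u)
                                         / (math.pi / 2 + beta * u)))

def stable(alpha, beta, c, mu, rng=random):
    """Rescale and shift a standard draw to S_alpha(beta, c, mu)."""
    x = stable_standard(alpha, beta, rng)
    if alpha != 1:
        return c * x + mu
    return c * x + (2 / math.pi) * beta * c * math.log(c) + mu

# Sanity check: for alpha = 2, beta = 0 the formula collapses to
# 2*sin(U)*sqrt(W), i.e. Box-Muller-type Gaussian draws with variance 2.
random.seed(0)
xs = [stable_standard(2.0, 0.0) for _ in range(200_000)]
m = sum(xs) / len(xs)
v = sum((x - m) ** 2 for x in xs) / len(xs)
print(m, v)  # roughly 0 and 2
```

A standard Cauchy variate is then `stable(1, 0, 1, 0)` (the \alpha = 1, \beta = 0 branch reduces to \tan U).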


Applications

Stable distributions owe their importance in both theory and practice to the generalization of the
central limit theorem In probability theory, the central limit theorem (CLT) establishes that, in many situations, when independent random variables are summed up, their properly normalized sum tends toward a normal distribution even if the original variables themselv ...
to random variables without second (and possibly first) order moments and the accompanying
self-similarity __NOTOC__ In mathematics, a self-similar object is exactly or approximately similar to a part of itself (i.e., the whole has the same shape as one or more of the parts). Many objects in the real world, such as coastlines, are statistically se ...
of the stable family. It was the seeming departure from normality along with the demand for a self-similar model for financial data (i.e. the shape of the distribution for yearly asset price changes should resemble that of the constituent daily or monthly price changes) that led
Benoît Mandelbrot Benoit B. Mandelbrot (20 November 1924 – 14 October 2010) was a Polish-born French-American mathematician and polymath with broad interests in the practical sciences, especially regarding what he labeled as "the art of roughness" of phy ...
to propose that cotton prices follow an alpha-stable distribution with \alpha equal to 1.7.
Lévy distribution In probability theory and statistics, the Lévy distribution, named after Paul Lévy, is a continuous probability distribution for a non-negative random variable. In spectroscopy, this distribution, with frequency as the dependent variable, is k ...
s are frequently found in analysis of
critical behavior In physics, critical phenomena is the collective name associated with the physics of critical points. Most of them stem from the divergence of the correlation length, but also the dynamics slows down. Critical phenomena include scaling relation ...
and financial data. They are also found in
spectroscopy Spectroscopy is the field of study that measures and interprets the electromagnetic spectra that result from the interaction between electromagnetic radiation and matter as a function of the wavelength or frequency of the radiation. Matter wa ...
as a general expression for a quasistatically pressure broadened spectral line. The Lévy distribution of solar flare waiting time events (time between flare events) was demonstrated for
CGRO The Compton Gamma Ray Observatory (CGRO) was a space observatory detecting photons with energies from 20 k eV to 30 GeV, in Earth orbit from 1991 to 2000. The observatory featured four main telescopes in one spacecraft, covering X-ra ...
BATSE hard X-ray solar flares in December 2001. Analysis of the Lévy statistical signature revealed two distinct memory signatures: one related to the solar cycle, and a second whose origin appears to be associated with localized, or combinations of localized, solar active-region effects.


Other analytic cases

A number of stable distributions with analytically expressible densities are known. Writing the stable density as f(x;\alpha,\beta,c,\mu), we know: * The
Cauchy Distribution The Cauchy distribution, named after Augustin Cauchy, is a continuous probability distribution. It is also known, especially among physicists, as the Lorentz distribution (after Hendrik Lorentz), Cauchy–Lorentz distribution, Lorentz(ian) fun ...
is given by f(x;1,0,1,0). * The
Lévy distribution In probability theory and statistics, the Lévy distribution, named after Paul Lévy, is a continuous probability distribution for a non-negative random variable. In spectroscopy, this distribution, with frequency as the dependent variable, is k ...
is given by f(x;\tfrac{1}{2},1,1,0). * The
Normal distribution In statistics, a normal distribution or Gaussian distribution is a type of continuous probability distribution for a real-valued random variable. The general form of its probability density function is : f(x) = \frac e^ The parameter \mu ...
is given by f(x;2,0,1,0). * Let S_{\mu,\nu}(z) be a
Lommel function The Lommel differential equation, named after Eugen von Lommel, is an inhomogeneous form of the Bessel differential equation: : z^2 \frac + z \frac + (z^2 - \nu^2)y = z^. Solutions are given by the Lommel functions ''s''μ,ν(''z'') and ''S' ...
; then the density f\left(x;\tfrac{1}{3},0,1,0\right) can be written in closed form in terms of S_{\mu,\nu}. * Let S(x) and C(x) denote the
Fresnel Integrals 250px, Plots of and . The maximum of is about . If the integrands of and were defined using instead of , then the image would be scaled vertically and horizontally (see below). The Fresnel integrals and are two transcendental functions n ...
then: f\left(x;\tfrac{1}{2},0,1,0\right) = \frac{1}{\sqrt{2\pi}\,|x|^{3/2}}\left(\sin\left(\tfrac{1}{4|x|}\right)\left[\tfrac{1}{2} - S\left(\tfrac{1}{\sqrt{2\pi|x|}}\right)\right]+\cos\left(\tfrac{1}{4|x|}\right)\left[\tfrac{1}{2}-C\left(\tfrac{1}{\sqrt{2\pi|x|}}\right)\right]\right) * Let K_v(x) be the
modified Bessel function Bessel functions, first defined by the mathematician Daniel Bernoulli and then generalized by Friedrich Bessel, are canonical solutions of Bessel's differential equation x^2 \frac + x \frac + \left(x^2 - \alpha^2 \right)y = 0 for an arbitrary ...
of the second kind; then the one-sided density f\left(x;\tfrac{1}{3},1,1,0\right) can be written in closed form in terms of K_{1/3}. * If {}_pF_q denotes the hypergeometric functions, then the symmetric densities with \alpha=\tfrac{4}{3} and \alpha=\tfrac{3}{2} are expressible in terms of {}_2F_2, {}_2F_3 and {}_3F_4 hypergeometric series, the latter being the
Holtsmark distribution The (one-dimensional) Holtsmark distribution is a continuous probability distribution. The Holtsmark distribution is a special case of a stable distribution with the index of stability or shape parameter \alpha equal to 3/2 and the skewness parame ...
. * Let W_{\lambda,\mu}(z) be a
Whittaker function In mathematics, a Whittaker function is a special solution of Whittaker's equation, a modified form of the confluent hypergeometric equation introduced by to make the formulas involving the solutions more symmetric. More generally, introduced W ...
; then the densities f\left(x;\tfrac{2}{3},0,1,0\right), f\left(x;\tfrac{2}{3},1,1,0\right) and f\left(x;\tfrac{3}{2},1,1,0\right) can each be written in closed form in terms of Whittaker functions, the last case requiring separate expressions for x < 0 and x \geq 0.
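Two of these closed forms are easy to verify numerically. The sketch below (assuming `scipy.stats.levy_stable` is available; with \beta = 0 the common parameterizations coincide) compares SciPy's numerically computed stable density with the Cauchy density 1/(\pi(1+x^2)) at \alpha = 1 and the Gaussian density e^{-x^2/4}/(2\sqrt{\pi}) (variance 2, since the characteristic function is e^{-t^2}) at \alpha = 2:

```python
# Cross-check the alpha = 1 (Cauchy) and alpha = 2 (Gaussian, variance 2)
# closed forms against SciPy's numerical stable pdf.
import math
from scipy.stats import levy_stable

x = 0.7

cauchy = 1.0 / (math.pi * (1.0 + x * x))          # f(x; 1, 0, 1, 0)
gauss = math.exp(-x * x / 4.0) / (2.0 * math.sqrt(math.pi))  # f(x; 2, 0, 1, 0)

print(levy_stable.pdf(x, 1.0, 0.0), cauchy)
print(levy_stable.pdf(x, 2.0, 0.0), gauss)
```

The pairs of values agree to the accuracy of SciPy's numerical integration.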


See also

*
Lévy flight A Lévy flight is a random walk in which the step-lengths have a Lévy distribution, a probability distribution that is heavy-tailed. When defined as a walk in a space of dimension greater than one, the steps made are in isotropic random direct ...
*
Lévy process In probability theory, a Lévy process, named after the French mathematician Paul Lévy, is a stochastic process with independent, stationary increments: it represents the motion of a point whose successive displacements are random, in which disp ...
* Other "power law" distributions **
Pareto distribution The Pareto distribution, named after the Italian civil engineer, economist, and sociologist Vilfredo Pareto ( ), is a power-law probability distribution that is used in description of social, quality control, scientific, geophysical, actua ...
**
Zeta distribution In probability theory and statistics, the zeta distribution is a discrete probability distribution. If ''X'' is a zeta-distributed random variable with parameter ''s'', then the probability that ''X'' takes the integer value ''k'' is given by ...
** Zipf distribution ** Zipf–Mandelbrot distribution *
Financial models with long-tailed distributions and volatility clustering Finance is the study and discipline of money, currency and capital assets. It is related to, but not synonymous with economics, the study of production, distribution, and consumption of money, assets, goods and services (the discipline of fina ...
*
Multivariate stable distribution The multivariate stable distribution is a multivariate probability distribution that is a multivariate generalisation of the univariate stable distribution. The multivariate stable distribution defines linear relations between stable distribution ...
* Discrete-stable distribution


Notes

* The STABLE program for Windows is available from John Nolan's stable webpage: http://www.robustanalysis.com/public/stable.html. It calculates the density (pdf), cumulative distribution function (cdf) and quantiles for a general stable distribution, and performs maximum likelihood estimation of stable parameters and some exploratory data analysis techniques for assessing the fit of a data set.
libstable
is a C implementation for the Stable distribution pdf, cdf, random number, quantile and fitting functions (along with a benchmark replication package and an R package). * R Package
'stabledist'
by Diethelm Wuertz, Martin Maechler and Rmetrics core team members. Computes stable density, probability, quantiles, and random numbers. Updated Sept. 12, 2016. *
Python Python may refer to: Snakes * Pythonidae, a family of nonvenomous snakes found in Africa, Asia, and Australia ** ''Python'' (genus), a genus of Pythonidae found in Africa and Asia * Python (mythology), a mythical serpent Computing * Python (pro ...
implementation is located in
scipy.stats.levy_stable
in the
SciPy SciPy (pronounced "sigh pie") is a free and open-source Python library used for scientific computing and technical computing. SciPy contains modules for optimization, linear algebra, integration, interpolation, special functions, FFT, signal ...
package.


References

{{ProbDistributions, continuous-infinite Continuous distributions Probability distributions with non-finite variance Power laws Stability (probability)