Fisher–Tippett Distribution

In probability theory and statistics, the generalized extreme value (GEV) distribution is a family of continuous probability distributions developed within extreme value theory to combine the Gumbel, Fréchet and Weibull families, also known as the type I, II and III extreme value distributions. By the Fisher–Tippett–Gnedenko theorem, the GEV distribution is the only possible limit distribution of properly normalized maxima of a sequence of independent and identically distributed random variables. Note that a limit distribution needs to exist, which requires regularity conditions on the tail of the distribution. Despite this, the GEV distribution is often used as an approximation to model the maxima of long (finite) sequences of random variables. In some fields of application the generalized extreme value distribution is known as the Fisher–Tippett distribution, named after Ronald Fisher and L. H. C. Tippett, who recognised the three different forms outlined below. However, usage of this name is sometimes restricted to mean the special case of the Gumbel distribution. The origin of the common functional form for all three distributions dates back to at least Jenkinson, A. F. (1955), though allegedly it could also have been given by von Mises, R. (1936).


Specification

Using the standardized variable s = (x - \mu)/\sigma\,, where \mu\,, the location parameter, can be any real number, and \sigma > 0 is the scale parameter, the cumulative distribution function of the GEV distribution is
:F(s; \xi) = \begin{cases} \exp\Bigl(-\exp(-s)\Bigr) & ~~ \text{for} ~~ \xi = 0, \\ \\ \exp\Bigl(-(1+\xi s)^{-1/\xi}\Bigr) & ~~ \text{for} ~~ \xi \neq 0 ~~ \text{and} ~~ \xi\, s > -1, \\ \\ 0 & ~~ \text{for} ~~ \xi > 0 ~~ \text{and} ~~ \xi\, s \le -1, \\ \\ 1 & ~~ \text{for} ~~ \xi < 0 ~~ \text{and} ~~ \xi\, s \le -1, \end{cases}
where \xi\,, the shape parameter, can be any real number. Thus, for \xi > 0, the expression is valid for s > -1/\xi\,, while for \xi < 0 it is valid for s < -1/\xi\,. In the first case, -1/\xi is the negative, lower end-point, where F is 0; in the second case, -1/\xi is the positive, upper end-point, where F is 1. For \xi = 0 the second expression is formally undefined and is replaced with the first expression, which is the result of taking the limit of the second as \xi \to 0, in which case s can be any real number. In the special case of x = \mu\,, we have s = 0 and F(0; \xi) = \exp(-1) \approx 0.368 for whatever values \xi and \sigma might have.

The probability density function of the standardized distribution is
:f(s;\xi) = \begin{cases} \exp(-s) \exp\Bigl(-\exp(-s)\Bigr) & ~~ \text{for} ~~ \xi = 0, \\ \\ \Bigl(1+\xi s\Bigr)^{-1/\xi - 1} \exp\Bigl(-(1+\xi s)^{-1/\xi}\Bigr) & ~~ \text{for} ~~ \xi \neq 0 ~~ \text{and} ~~ \xi\, s > -1, \\ \\ 0 & ~~ \text{otherwise,} \end{cases}
again valid for s > -1/\xi in the case \xi > 0\,, and for s < -1/\xi in the case \xi < 0\,. The density is zero outside of the relevant range. In the case \xi = 0 the density is positive on the whole real line.
Since the cumulative distribution function is invertible, the quantile function for the GEV distribution has an explicit expression, namely
:Q(p;\mu,\sigma,\xi) = \begin{cases} \mu - \sigma\log\Bigl(-\log\left(p\right)\Bigr) & ~ \text{for} ~ \xi = 0 ~ \text{and} ~ p \in \left(0,1\right), \\ \\ \mu + \displaystyle\frac{\sigma}{\xi}\left( \Bigl(-\log(p)\Bigr)^{-\xi} - 1\right) & ~ \text{for} ~ \xi > 0 ~ \text{and} ~ p \in \left[0,1\right), \\ & ~~ \text{or} ~ \xi < 0 ~ \text{and} ~ p \in (0,1], \end{cases}
and therefore the quantile density function \left(q \equiv \frac{dQ}{dp}\right) is
:q(p;\sigma,\xi) = \frac{\sigma}{\bigl(-\log p\bigr)^{\xi+1}\, p} \quad \text{for} ~~ p \in \left(0,1\right),
valid for ~\sigma > 0~ and for any real ~\xi\;.
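The formulas above can be sanity-checked numerically. The following is a minimal pure-Python sketch (the helper names gev_cdf and gev_quantile are our own, not from any library) verifying that the quantile function inverts the cumulative distribution function on the interior of the support, and that F(0;\xi) = e^{-1} for any \xi:

```python
import math

def gev_cdf(x, mu, sigma, xi):
    """GEV cdf; the xi = 0 branch is the Gumbel limit."""
    s = (x - mu) / sigma
    if xi == 0.0:
        return math.exp(-math.exp(-s))
    t = 1.0 + xi * s
    if t <= 0.0:
        # Outside the support: F = 0 below the lower end-point (xi > 0),
        # F = 1 above the upper end-point (xi < 0).
        return 0.0 if xi > 0 else 1.0
    return math.exp(-t ** (-1.0 / xi))

def gev_quantile(p, mu, sigma, xi):
    """Explicit quantile function Q(p) for p in the valid range."""
    if xi == 0.0:
        return mu - sigma * math.log(-math.log(p))
    return mu + (sigma / xi) * ((-math.log(p)) ** (-xi) - 1.0)

# Round trip: Q inverts F where 0 < F(x) < 1, and F(0; xi) = exp(-1).
for xi in (-0.5, 0.0, 0.5):
    assert abs(gev_cdf(0.0, 0.0, 1.0, xi) - math.exp(-1.0)) < 1e-12
    for x in (-0.5, 0.0, 1.0, 2.0):
        p = gev_cdf(x, 0.0, 1.0, xi)
        if 0.0 < p < 1.0:
            assert abs(gev_quantile(p, 0.0, 1.0, xi) - x) < 1e-9
print("round trip ok")
```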


Summary statistics

Some simple statistics of the distribution are:
:\operatorname{E}(X) = \mu + \left(g_1-1\right)\frac{\sigma}{\xi} \quad \text{for}~ \xi \neq 0,\ \xi < 1,
:\operatorname{Var}(X) = \left(g_2-g_1^2\right)\frac{\sigma^2}{\xi^2}\,,
:\operatorname{Mode}(X) = \mu+\frac{\sigma}{\xi}\left[(1+\xi)^{-\xi}-1\right].
The skewness is, for \xi > 0,
:\operatorname{skewness}(X) = \frac{g_3-3g_1g_2+2g_1^3}{\left(g_2-g_1^2\right)^{3/2}}\,.
For \xi < 0, the sign of the numerator is reversed. The excess kurtosis is:
:\operatorname{kurtosis\ excess}(X) = \frac{g_4-4g_1g_3+6g_2g_1^2-3g_1^4}{\left(g_2-g_1^2\right)^2}-3\,,
where g_k=\Gamma(1-k\xi), k=1,2,3,4, and \Gamma(t) is the gamma function.
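The g_k formulas can be checked against the integral definitions of the moments via the quantile function, since \operatorname{E}(X) = \int_0^1 Q(p)\,dp and \operatorname{E}(X^2) = \int_0^1 Q(p)^2\,dp. A minimal pure-Python sketch (function names are illustrative; valid for 0 \neq \xi < 1/2, where the variance is finite):

```python
import math

def gev_quantile(p, mu, sigma, xi):
    """GEV quantile; the xi = 0 branch is the Gumbel limit."""
    if xi == 0.0:
        return mu - sigma * math.log(-math.log(p))
    return mu + (sigma / xi) * ((-math.log(p)) ** (-xi) - 1.0)

def gev_mean_var(mu, sigma, xi):
    """Mean and variance from g_k = Gamma(1 - k*xi); needs 0 != xi < 1/2."""
    g1 = math.gamma(1.0 - xi)
    g2 = math.gamma(1.0 - 2.0 * xi)
    mean = mu + (g1 - 1.0) * sigma / xi
    var = (g2 - g1 ** 2) * sigma ** 2 / xi ** 2
    return mean, var

# Midpoint-rule integration of Q and Q^2 over (0, 1) as a numerical
# sanity check (not a proof).
mu, sigma, xi = 0.0, 1.0, 0.2
n = 200_000
m1 = m2 = 0.0
for i in range(n):
    q = gev_quantile((i + 0.5) / n, mu, sigma, xi)
    m1 += q
    m2 += q * q
m1 /= n
m2 /= n
mean, var = gev_mean_var(mu, sigma, xi)
print(mean, m1)          # both approximately 0.82
print(var, m2 - m1 ** 2) # both approximately 3.34
```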


Link to Fréchet, Weibull and Gumbel families

The shape parameter \xi governs the tail behavior of the distribution. The sub-families defined by \xi = 0, \xi > 0 and \xi < 0 correspond, respectively, to the Gumbel, Fréchet and Weibull families, whose cumulative distribution functions are displayed below.
* Gumbel or type I extreme value distribution (\xi = 0):
:F(x;\mu,\sigma,0) = e^{-e^{-(x-\mu)/\sigma}} \;\;\; \text{for} \;\; x \in \mathbb{R}.
* Fréchet or type II extreme value distribution, if \xi = \alpha^{-1} > 0 and y = 1 + \xi (x-\mu)/\sigma:
:F(x;\mu,\sigma,\xi) = \begin{cases} e^{-y^{-\alpha}} & y > 0, \\ 0 & y \leq 0. \end{cases}
* Reversed Weibull or type III extreme value distribution, if \xi = -\alpha^{-1} < 0 and y = -\left( 1 + \xi (x-\mu)/\sigma \right):
:F(x;\mu,\sigma,\xi) = \begin{cases} e^{-(-y)^{\alpha}} & y < 0, \\ 1 & y \geq 0. \end{cases}
The subsections below remark on properties of these distributions.
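These reparametrizations can be illustrated numerically. The sketch below (pure Python, evaluation points chosen to lie inside the support of both sub-families) checks that the unified cdf \exp(-(1+\xi s)^{-1/\xi}) reduces to the Fréchet form for \xi = 1/\alpha > 0 and to the reversed-Weibull form for \xi = -1/\alpha < 0:

```python
import math

alpha, mu, sigma = 2.0, 0.0, 1.0

for x in (0.5, 1.0, 1.5):
    s = (x - mu) / sigma

    # Type II (Frechet): xi = +1/alpha, y = 1 + xi*s > 0 here.
    xi = 1.0 / alpha
    unified = math.exp(-(1.0 + xi * s) ** (-1.0 / xi))
    y = 1.0 + xi * s
    frechet = math.exp(-y ** (-alpha))
    assert abs(unified - frechet) < 1e-14

    # Type III (reversed Weibull): xi = -1/alpha, y = -(1 + xi*s) < 0 here.
    xi = -1.0 / alpha
    unified = math.exp(-(1.0 + xi * s) ** (-1.0 / xi))
    y = -(1.0 + xi * s)
    rweibull = math.exp(-((-y) ** alpha))
    assert abs(unified - rweibull) < 1e-14

print("family checks ok")
```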


Modification for minima rather than maxima

The theory here relates to data maxima and the distribution being discussed is an extreme value distribution for maxima. A generalised extreme value distribution for data minima can be obtained, for example by substituting (−''x'') for ''x'' in the distribution function, and subtracting from one: this yields a separate family of distributions.
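The substitution rests on the exact identity \min_i x_i = -\max_i(-x_i), which the following short sketch demonstrates together with the corresponding cdf relation:

```python
import random

random.seed(3)
xs = [random.gauss(0.0, 1.0) for _ in range(1_000)]

# Substituting -x for x turns a maxima model into a minima model,
# because min(x_i) = -max(-x_i) exactly:
assert min(xs) == -max(-x for x in xs)

# At the cdf level: P(min X_i <= x) = P(max(-X_i) >= -x) = 1 - F(-x),
# where F is the cdf used to model the maxima of the negated data.
print("minima identity ok")
```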


Alternative convention for the Weibull distribution

The ordinary Weibull distribution arises in reliability applications and is obtained from the distribution here by using the variable t = \mu - x , which gives a strictly positive support, in contrast to the use in the extreme value theory here. This arises because the ordinary Weibull distribution is used in cases that deal with data minima rather than data maxima. The distribution here has an additional parameter compared to the usual form of the Weibull distribution and, in addition, is reversed so that the distribution has an upper bound rather than a lower bound. Importantly, in applications of the GEV, the upper bound is unknown and so must be estimated, while when applying the ordinary Weibull distribution in reliability applications the lower bound is usually known to be zero.


Ranges of the distributions

Note the differences in the ranges of interest for the three extreme value distributions: Gumbel is unlimited, Fréchet has a lower limit, while the reversed Weibull has an upper limit. More precisely, Extreme Value Theory (Univariate Theory) describes which of the three is the limiting law according to the initial law X and in particular depending on its tail.


Distribution of log variables

One can link the type I to types II and III in the following way: if the cumulative distribution function of some random variable X is of type II, and with the positive numbers as support, i.e. F(x; 0, \sigma, \alpha), then the cumulative distribution function of \ln X is of type I, namely F(x; \ln \sigma, 1/\alpha, 0). Similarly, if the cumulative distribution function of X is of type III, and with the negative numbers as support, i.e. F(x; 0, \sigma, -\alpha), then the cumulative distribution function of \ln (-X) is of type I, namely F(x; -\ln \sigma, 1/\alpha, 0).
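The first identity can be verified directly at the level of cumulative distribution functions, writing the type II cdf in its Fréchet form \exp(-(x/\sigma)^{-\alpha}). A minimal sketch (helper names are our own):

```python
import math

def frechet_cdf(x, sigma, alpha):
    """Type II cdf with location 0: exp(-(x/sigma)**(-alpha)) for x > 0."""
    return math.exp(-(x / sigma) ** (-alpha)) if x > 0 else 0.0

def gumbel_cdf(x, mu, beta):
    """Type I (Gumbel) cdf: exp(-exp(-(x - mu)/beta))."""
    return math.exp(-math.exp(-(x - mu) / beta))

# P(ln X <= x) = P(X <= e^x), which should equal the Gumbel cdf with
# location ln(sigma) and scale 1/alpha:
sigma, alpha = 2.0, 3.0
for x in (-1.0, 0.7, 2.0):
    lhs = frechet_cdf(math.exp(x), sigma, alpha)
    rhs = gumbel_cdf(x, math.log(sigma), 1.0 / alpha)
    assert abs(lhs - rhs) < 1e-12
print("log-variable check ok")
```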


Link to logit models (logistic regression)

Multinomial logit models, and certain other types of logistic regression, can be phrased as latent variable models with error variables distributed as Gumbel distributions (type I generalized extreme value distributions). This phrasing is common in the theory of discrete choice models, which include logit models, probit models, and various extensions of them, and derives from the fact that the difference of two type-I GEV-distributed variables follows a logistic distribution, of which the logit function is the quantile function. The type-I GEV distribution thus plays the same role in these logit models as the normal distribution does in the corresponding probit models.
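The Gumbel-difference fact can be checked by simulation. A seeded pure-Python sketch using inverse-cdf sampling (no library assumed):

```python
import math
import random

random.seed(42)

def gumbel_sample():
    """Standard Gumbel draw via inverse-cdf sampling: -log(-log(U))."""
    return -math.log(-math.log(random.random()))

# The difference of two independent standard Gumbel variables is standard
# logistic, whose cdf 1/(1 + exp(-x)) is the inverse of the logit function.
n = 100_000
diffs = [gumbel_sample() - gumbel_sample() for _ in range(n)]
for x in (-1.0, 0.0, 1.5):
    empirical = sum(d <= x for d in diffs) / n
    logistic = 1.0 / (1.0 + math.exp(-x))
    print(x, empirical, logistic)  # agree to about two decimals
```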


Properties

The cumulative distribution function of the generalized extreme value distribution solves the stability postulate equation. The generalized extreme value distribution is a special case of a max-stable distribution, and is a transformation of a min-stable distribution.


Applications

*The GEV distribution is widely used in the treatment of "tail risks" in fields ranging from insurance to finance. In the latter case, it has been considered as a means of assessing various financial risks via metrics such as value at risk.
*However, the resulting shape parameters have been found to lie in the range leading to undefined means and variances, which underlines the fact that reliable data analysis is often impossible (Kjersti Aas, lecture, NTNU, Trondheim, 23 Jan 2008).
*In hydrology the GEV distribution is applied to extreme events such as annual maximum one-day rainfalls and river discharges. The blue picture, made with CumFreq, illustrates an example of fitting the GEV distribution to ranked annual maximum one-day rainfalls, showing also the 90% confidence belt based on the binomial distribution. The rainfall data are represented by plotting positions as part of the cumulative frequency analysis.


Example for Normally distributed variables

Let (X_i)_{i \in [n]} be i.i.d. normally distributed random variables with mean 0 and variance 1. The Fisher–Tippett–Gnedenko theorem tells us that \max_{i \in [n]} X_i \sim GEV(\mu_n, \sigma_n, 0), where
:\begin{align} \mu_n &= \Phi^{-1}\left(1-\frac{1}{n}\right) \\ \sigma_n &= \Phi^{-1}\left(1-\frac{1}{n}\cdot e^{-1}\right) - \Phi^{-1}\left(1-\frac{1}{n}\right). \end{align}
This allows us to estimate e.g. the mean of \max_{i \in [n]} X_i from the mean of the GEV distribution:
:\begin{align} E\left[\max_{i \in [n]} X_i\right] &\approx \mu_n + \gamma\sigma_n \\ &= (1-\gamma)\Phi^{-1}(1-1/n) + \gamma\Phi^{-1}(1-1/(en)) \\ &= \sqrt{\log\left(\frac{n^2}{2\pi\log\left(\frac{n^2}{2\pi}\right)}\right)} \cdot \left(1 + \frac{\gamma}{\log n} + o\left(\frac{1}{\log n}\right)\right), \end{align}
where \gamma is the Euler–Mascheroni constant.
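A Monte Carlo sketch of this approximation for n = 1000, using Python's statistics.NormalDist for \Phi^{-1} (the trial count is an arbitrary choice):

```python
import math
import random
from statistics import NormalDist

random.seed(1)
gamma = 0.5772156649015329   # Euler-Mascheroni constant
n = 1000                     # block size

# GEV-based approximation: E[max] is roughly mu_n + gamma * sigma_n.
Phi_inv = NormalDist().inv_cdf
mu_n = Phi_inv(1.0 - 1.0 / n)
sigma_n = Phi_inv(1.0 - 1.0 / (math.e * n)) - mu_n
approx = mu_n + gamma * sigma_n

# Monte Carlo estimate of E[max of n standard normals].
trials = 1000
mc = sum(max(random.gauss(0.0, 1.0) for _ in range(n))
         for _ in range(trials)) / trials
print(approx, mc)  # both around 3.2 for n = 1000
```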


Related distributions

# If X \sim \textrm{GEV}(\mu,\,\sigma,\,\xi) then mX+b \sim \textrm{GEV}(m\mu+b,\,m\sigma,\,\xi)
# If X \sim \textrm{Gumbel}(\mu,\,\sigma) ( Gumbel distribution) then X \sim \textrm{GEV}(\mu,\,\sigma,\,0)
# If X \sim \textrm{Weibull}(\sigma,\,\mu) ( Weibull distribution) then \mu\left(1-\sigma\log\tfrac{X}{\sigma}\right) \sim \textrm{GEV}(\mu,\,\sigma,\,0)
# If X \sim \textrm{GEV}(\mu,\,\sigma,\,0) then \sigma \exp\left(-\tfrac{X-\mu}{\mu\sigma}\right) \sim \textrm{Weibull}(\sigma,\,\mu) ( Weibull distribution)
# If X \sim \textrm{Exponential}(1)\, ( Exponential distribution) then \mu - \sigma \log X \sim \textrm{GEV}(\mu,\,\sigma,\,0)
# If X \sim \mathrm{Gumbel}(\alpha_X, \beta) and Y \sim \mathrm{Gumbel}(\alpha_Y, \beta) then X-Y \sim \mathrm{Logistic}(\alpha_X-\alpha_Y,\,\beta) \, (see Logistic_distribution).
# If X and Y \sim \mathrm{Gumbel}(\alpha, \beta) then X+Y \nsim \mathrm{Logistic}(2\alpha,\,\beta) \, (The sum is ''not'' a logistic distribution). Note that E(X+Y) = 2\alpha+2\beta\gamma \neq 2\alpha = E\left(\mathrm{Logistic}(2\alpha,\,\beta)\right).
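The Exponential-to-Gumbel relation can be checked by simulation: transforming Exponential(1) draws with \mu - \sigma\log X should produce Gumbel(\mu,\sigma) samples. A seeded pure-Python sketch:

```python
import math
import random

random.seed(7)
mu, sigma = 1.0, 2.0
n = 100_000

# Transform Exponential(1) draws; by the relation above the results
# follow GEV(mu, sigma, 0), i.e. Gumbel(mu, sigma).
samples = [mu - sigma * math.log(random.expovariate(1.0)) for _ in range(n)]
for x in (0.0, 2.0, 5.0):
    empirical = sum(s <= x for s in samples) / n
    gumbel = math.exp(-math.exp(-(x - mu) / sigma))
    print(x, empirical, gumbel)  # agree to about two decimals
```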


Proofs

4. Let X \sim \textrm{Weibull}(\sigma,\,\mu); then the cumulative distribution of g(X) = \mu\left(1-\sigma\log\tfrac{X}{\sigma}\right) is:
: \begin{align} P\left(\mu \left(1-\sigma\log\frac{X}{\sigma}\right) < x\right) &= P\left(\log\frac{X}{\sigma} > \frac{\mu - x}{\mu\sigma} \right) \\ &\quad\text{Since the logarithm is always increasing:} \\ &= P\left(X > \sigma \exp\left[\frac{\mu - x}{\mu\sigma}\right]\right) \\ &= \exp\left(-\left(\frac{1}{\sigma}\cdot\sigma \exp\left[\frac{\mu - x}{\mu\sigma}\right]\right)^{\mu}\right) \\ &= \exp\left(-\exp\left[\frac{\mu - x}{\sigma}\right]\right) \\ &= \exp\left(-\exp\left[-s\right]\right), \quad s = \frac{x-\mu}{\sigma}, \end{align}
which is the cdf for \textrm{GEV}(\mu,\,\sigma,\,0).

5. Let X \sim \textrm{Exponential}(1); then the cumulative distribution of g(X) = \mu - \sigma \log X is:
: \begin{align} P(\mu - \sigma \log X < x) &= P\left(\log X > \frac{\mu-x}{\sigma}\right) \\ &\quad\text{Since the logarithm is always increasing:} \\ &= P\left(X > \exp\left(\frac{\mu-x}{\sigma}\right)\right) \\ &= \exp\left[-\exp\left(\frac{\mu-x}{\sigma}\right)\right] \\ &= \exp\left[-\exp\left(-s\right)\right], \quad s = \frac{x-\mu}{\sigma}, \end{align}
which is the cumulative distribution of \textrm{GEV}(\mu,\sigma,0).


See also

* Extreme Value Theory (Univariate Theory) * Fisher–Tippett–Gnedenko theorem * Generalized Pareto distribution * German tank problem, opposite question of population maximum given sample maximum * Pickands–Balkema–De Haan theorem

