In probability theory and statistics, kurtosis (from Greek κυρτός, ''kyrtos'' or ''kurtos'', meaning "curved, arching") is a measure of the "tailedness" of the probability distribution of a real-valued random variable. Like skewness, kurtosis describes a particular aspect of a probability distribution. There are different ways to quantify kurtosis for a theoretical distribution, and there are corresponding ways of estimating it from a sample of a population. Different measures of kurtosis may have different interpretations.

The standard measure of a distribution's kurtosis, originating with Karl Pearson, is a scaled version of the fourth moment of the distribution. This number is related to the tails of the distribution, not its peak; hence, the sometimes-seen characterization of kurtosis as "peakedness" is incorrect. For this measure, higher kurtosis corresponds to greater extremity of deviations (or outliers), not to the configuration of data near the mean.

It is common to compare the excess kurtosis (defined below) of a distribution to 0, which is the excess kurtosis of any univariate normal distribution. Distributions with negative excess kurtosis are said to be ''platykurtic'', although this does not imply the distribution is "flat-topped" as is sometimes stated. Rather, it means the distribution produces fewer and/or less extreme outliers than the normal distribution. An example of a platykurtic distribution is the uniform distribution, which does not produce outliers. Distributions with positive excess kurtosis are said to be ''leptokurtic''. An example of a leptokurtic distribution is the Laplace distribution, which has tails that asymptotically approach zero more slowly than a Gaussian, and therefore produces more outliers than the normal distribution. It is common practice to use excess kurtosis, defined as Pearson's kurtosis minus 3, to provide a simple comparison to the normal distribution. Some authors and software packages use "kurtosis" by itself to refer to the excess kurtosis. For clarity and generality, however, this article explicitly indicates where non-excess kurtosis is meant.

Alternative measures of kurtosis are: the L-kurtosis, which is a scaled version of the fourth L-moment; and measures based on four population or sample quantiles. These are analogous to the alternative measures of skewness that are not based on ordinary moments.
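As a rough numerical illustration of "tailedness" (a sketch, not part of the standard exposition; the function name and the midpoint-rule integration are ad hoc choices), the excess kurtosis of a few symmetric unit-scale densities can be approximated by integrating their second and fourth moments:

```python
import math

def excess_kurtosis(pdf, lo, hi, steps=200_000):
    """Excess kurtosis of a zero-mean density, approximated by a midpoint rule.

    `lo` and `hi` truncate the support; they must be wide enough that the
    neglected tail mass is negligible."""
    h = (hi - lo) / steps
    m2 = m4 = 0.0
    for k in range(steps):
        x = lo + (k + 0.5) * h
        p = pdf(x) * h
        m2 += x * x * p
        m4 += x ** 4 * p
    return m4 / m2 ** 2 - 3.0

normal  = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
laplace = lambda x: 0.5 * math.exp(-abs(x))        # Laplace with scale b = 1
uniform = lambda x: 0.5 if -1 <= x <= 1 else 0.0   # uniform on [-1, 1]

print(round(excess_kurtosis(normal, -12, 12), 3))   # ~ 0.0  (mesokurtic)
print(round(excess_kurtosis(laplace, -40, 40), 3))  # ~ 3.0  (leptokurtic)
print(round(excess_kurtosis(uniform, -1, 1), 3))    # ~ -1.2 (platykurtic)
```

The three values reproduce the comparison made above: the Laplace distribution is leptokurtic, the uniform distribution platykurtic, and the normal distribution sits at excess kurtosis 0.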


Pearson moments

The kurtosis is the fourth standardized moment, defined as

: \operatorname{Kurt}[X] = \operatorname{E}\left[\left(\frac{X - \mu}{\sigma}\right)^4\right] = \frac{\operatorname{E}\left[(X - \mu)^4\right]}{\left(\operatorname{E}\left[(X - \mu)^2\right]\right)^2} = \frac{\mu_4}{\sigma^4},

where ''μ''4 is the fourth central moment and ''σ'' is the standard deviation. Several letters are used in the literature to denote the kurtosis. A very common choice is ''κ'', which is fine as long as it is clear that it does not refer to a cumulant. Other choices include ''γ''2, to be similar to the notation for skewness, although sometimes this is instead reserved for the excess kurtosis.

The kurtosis is bounded below by the squared skewness plus 1:

: \frac{\mu_4}{\sigma^4} \geq \left(\frac{\mu_3}{\sigma^3}\right)^2 + 1,

where ''μ''3 is the third central moment. The lower bound is realized by the Bernoulli distribution. There is no upper limit to the kurtosis of a general probability distribution, and it may be infinite.

A reason why some authors favor the excess kurtosis is that cumulants are extensive. Formulas related to the extensive property are more naturally expressed in terms of the excess kurtosis. For example, let ''X''1, ..., ''X''''n'' be independent random variables for which the fourth moment exists, and let ''Y'' be the random variable defined by the sum of the ''X''''i''. The excess kurtosis of ''Y'' is

: \operatorname{Kurt}[Y] - 3 = \frac{1}{\left(\sum_{j=1}^n \sigma_j^2\right)^2} \sum_{i=1}^n \sigma_i^4 \cdot \left(\operatorname{Kurt}\left[X_i\right] - 3\right),

where \sigma_i is the standard deviation of X_i. In particular, if all of the ''X''''i'' have the same variance, then this simplifies to

: \operatorname{Kurt}[Y] - 3 = \frac{1}{n^2} \sum_{i=1}^n \left(\operatorname{Kurt}\left[X_i\right] - 3\right).

The reason not to subtract 3 is that the bare fourth moment better generalizes to multivariate distributions, especially when independence is not assumed. The cokurtosis between pairs of variables is an order-four tensor. For a bivariate normal distribution, the cokurtosis tensor has off-diagonal terms that are neither 0 nor 3 in general, so attempting to "correct" for an excess becomes confusing. It is true, however, that the joint cumulants of degree greater than two for any multivariate normal distribution are zero. For two random variables, ''X'' and ''Y'', not necessarily independent, the kurtosis of the sum, ''X'' + ''Y'', is

: \begin{align} \operatorname{Kurt}[X+Y] = \frac{1}{\sigma_{X+Y}^4} \big( & \sigma_X^4\operatorname{Kurt}[X] + 4\sigma_X^3\sigma_Y\operatorname{Cokurt}[X,X,X,Y] \\ & + 6\sigma_X^2\sigma_Y^2\operatorname{Cokurt}[X,X,Y,Y] \\ & + 4\sigma_X\sigma_Y^3\operatorname{Cokurt}[X,Y,Y,Y] + \sigma_Y^4\operatorname{Kurt}[Y] \big). \end{align}

Note that the binomial coefficients appear in the above equation.
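The equal-variance simplification for sums of independent variables can be checked numerically. The sketch below (helper names are illustrative) convolves ''n'' independent, identically distributed Bernoulli variables into a binomial and compares the excess kurtosis of the sum with 1/''n'' times that of a single summand, exactly as the formula predicts:

```python
def excess_kurtosis_pmf(pmf):
    """Excess kurtosis of a finite discrete distribution given as {value: prob}."""
    mu = sum(x * p for x, p in pmf.items())
    m2 = sum((x - mu) ** 2 * p for x, p in pmf.items())
    m4 = sum((x - mu) ** 4 * p for x, p in pmf.items())
    return m4 / m2 ** 2 - 3.0

def convolve(pmf_a, pmf_b):
    """Distribution of the sum of two independent discrete variables."""
    out = {}
    for xa, pa in pmf_a.items():
        for xb, pb in pmf_b.items():
            out[xa + xb] = out.get(xa + xb, 0.0) + pa * pb
    return out

p, n = 0.3, 8
bern = {0: 1 - p, 1: p}
binom = {0: 1.0}
for _ in range(n):
    binom = convolve(binom, bern)

# Equal variances: the excess kurtosis of the sum is 1/n times that of a summand.
print(excess_kurtosis_pmf(binom), excess_kurtosis_pmf(bern) / n)
```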


Interpretation

The exact interpretation of the Pearson measure of kurtosis (or excess kurtosis) used to be disputed, but is now settled. As Westfall notes in 2014, ''"...its only unambiguous interpretation is in terms of tail extremity; i.e., either existing outliers (for the sample kurtosis) or propensity to produce outliers (for the kurtosis of a probability distribution)."'' The logic is simple: Kurtosis is the average (or expected value) of the standardized data raised to the fourth power. Standardized values that are less than 1 in absolute value (i.e., data within one standard deviation of the mean, where the "peak" would be) contribute virtually nothing to kurtosis, since raising a number that is less than 1 to the fourth power brings it closer to zero. The only data values (observed or observable) that contribute to kurtosis in any meaningful way are those outside the region of the peak; i.e., the outliers. Therefore, kurtosis measures outliers only; it measures nothing about the "peak".

Many incorrect interpretations of kurtosis that involve notions of peakedness have been given. One is that kurtosis measures both the "peakedness" of the distribution and the heaviness of its tail. Various other incorrect interpretations have been suggested, such as "lack of shoulders" (where the "shoulder" is defined vaguely as the area between the peak and the tail, or more specifically as the area about one standard deviation from the mean) or "bimodality". Balanda and MacGillivray assert that the standard definition of kurtosis "is a poor measure of the kurtosis, peakedness, or tail weight of a distribution" and instead propose to "define kurtosis vaguely as the location- and scale-free movement of probability mass from the shoulders of a distribution into its center and tails".
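The dominance of outliers in the fourth power can be made concrete. The hypothetical helper below splits the average of z⁴ into the part contributed by values within one standard deviation of the mean and the part from values beyond it:

```python
def kurtosis_split(xs):
    """Split the kurtosis average E[z^4] into the part contributed by data
    within one standard deviation of the mean and the part from beyond it."""
    n = len(xs)
    mean = sum(xs) / n
    sd = (sum((x - mean) ** 2 for x in xs) / n) ** 0.5
    zs = [(x - mean) / sd for x in xs]
    near = sum(z ** 4 for z in zs if abs(z) < 1) / n   # the "peak" region
    far = sum(z ** 4 for z in zs if abs(z) >= 1) / n   # the outliers
    return near, far

# A sample with one mild outlier: the tail term dominates E[z^4].
near, far = kurtosis_split([0, 1, 2, 2, 3, 3, 3, 4, 4, 5, 12])
print(near, far)
```

Even with a single, moderate outlier, the `far` term is orders of magnitude larger than the `near` term, illustrating why kurtosis says nothing about the peak.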


Moors' interpretation

In 1986 Moors gave an interpretation of kurtosis. Let

: Z = \frac{X - \mu}{\sigma},

where ''X'' is a random variable, ''μ'' is the mean and ''σ'' is the standard deviation. By the definition of the kurtosis \kappa, and by the well-known identity \operatorname{E}\left[V^2\right] = \operatorname{var}[V] + \left[\operatorname{E}[V]\right]^2,

: \kappa = \operatorname{E}\left[Z^4\right] = \operatorname{var}\left[Z^2\right] + \left[\operatorname{E}\left[Z^2\right]\right]^2 = \operatorname{var}\left[Z^2\right] + 1.

The kurtosis can now be seen as a measure of the dispersion of ''Z''2 around its expectation. Alternatively it can be seen to be a measure of the dispersion of ''Z'' around +1 and −1. ''κ'' attains its minimal value in a symmetric two-point distribution. In terms of the original variable ''X'', the kurtosis is a measure of the dispersion of ''X'' around the two values ''μ'' ± ''σ''.

High values of ''κ'' arise in two circumstances:
* where the probability mass is concentrated around the mean and the data-generating process produces occasional values far from the mean,
* where the probability mass is concentrated in the tails of the distribution.
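Moors' identity κ = var(''Z''²) + 1 can be verified directly on any finite discrete distribution; the sketch below (function name illustrative) uses an arbitrary four-point distribution:

```python
def moors_identity(pmf):
    """Return (E[Z^4], var(Z^2) + 1) for a finite discrete distribution;
    by Moors' identity the two numbers coincide."""
    mu = sum(x * p for x, p in pmf.items())
    var = sum((x - mu) ** 2 * p for x, p in pmf.items())
    z2 = {x: (x - mu) ** 2 / var for x in pmf}          # Z^2 for each outcome
    ez2 = sum(z2[x] * p for x, p in pmf.items())        # equals 1 by construction
    ez4 = sum(z2[x] ** 2 * p for x, p in pmf.items())   # the kurtosis kappa
    return ez4, (ez4 - ez2 ** 2) + 1.0

kappa, moors = moors_identity({-2: 0.1, 0: 0.5, 1: 0.3, 4: 0.1})
print(kappa, moors)  # identical up to rounding
```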


Excess kurtosis

The ''excess kurtosis'' is defined as kurtosis minus 3. There are three distinct regimes, described below.


Mesokurtic

Distributions with zero excess kurtosis are called mesokurtic, or mesokurtotic. The most prominent example of a mesokurtic distribution is the normal distribution family, regardless of the values of its parameters. A few other well-known distributions can be mesokurtic, depending on parameter values: for example, the binomial distribution is mesokurtic for p = 1/2 \pm \sqrt{1/12}.
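This can be checked with the closed-form expression for the binomial's excess kurtosis, (1 − 6''p''(1 − ''p''))/(''np''(1 − ''p'')) (a standard result assumed here, not derived in the text):

```python
import math

def binom_excess_kurtosis(n, p):
    """Closed-form excess kurtosis of a binomial(n, p) distribution."""
    q = 1 - p
    return (1 - 6 * p * q) / (n * p * q)

p_meso = 0.5 + math.sqrt(1 / 12)
print(binom_excess_kurtosis(10, p_meso))  # ~ 0: mesokurtic, for any n
print(binom_excess_kurtosis(10, 0.5))     # -0.2: platykurtic at p = 1/2
```

Since the numerator 1 − 6''p''(1 − ''p'') vanishes exactly when ''p''(1 − ''p'') = 1/6, the mesokurtic points do not depend on ''n''.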


Leptokurtic

A distribution with positive excess kurtosis is called leptokurtic, or leptokurtotic. "Lepto-" means "slender". In terms of shape, a leptokurtic distribution has ''fatter tails''. Examples of leptokurtic distributions include the Student's t-distribution, Rayleigh distribution, Laplace distribution, exponential distribution, Poisson distribution and the logistic distribution. Such distributions are sometimes termed ''super-Gaussian''.


Platykurtic

A distribution with negative excess kurtosis is called platykurtic, or platykurtotic. "Platy-" means "broad". In terms of shape, a platykurtic distribution has ''thinner tails''. Examples of platykurtic distributions include the continuous and discrete uniform distributions, and the raised cosine distribution. The most platykurtic distribution of all is the Bernoulli distribution with ''p'' = 1/2 (for example, the number of times one obtains "heads" when flipping a coin once), for which the excess kurtosis is −2.
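The value −2 for the Bernoulli distribution with ''p'' = 1/2 follows from a short moment computation:

```python
# Bernoulli(1/2): values 0 and 1, each with probability 1/2.
p = 0.5
mu = p                                              # mean
var = p * (1 - p)                                   # variance = 1/4
m4 = (0 - mu) ** 4 * (1 - p) + (1 - mu) ** 4 * p    # fourth central moment = 1/16
excess = m4 / var ** 2 - 3                          # kurtosis 1, excess -2
print(excess)  # -2.0, the minimum possible excess kurtosis
```

Every standardized value of this distribution is ±1, so E[''Z''⁴] = 1, which is the lower bound implied by the skewness inequality above (with zero skewness).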


Graphical examples


The Pearson type VII family

The effects of kurtosis are illustrated using a parametric family of distributions whose kurtosis can be adjusted while their lower-order moments and cumulants remain constant. Consider the Pearson type VII family, which is a special case of the Pearson type IV family restricted to symmetric densities. The probability density function is given by

: f(x; a, m) = \frac{\Gamma(m)}{a\,\sqrt{\pi}\,\Gamma(m - 1/2)} \left[1 + \left(\frac{x}{a}\right)^2 \right]^{-m}, \!

where ''a'' is a scale parameter and ''m'' is a shape parameter. All densities in this family are symmetric. The ''k''th moment exists provided ''m'' > (''k'' + 1)/2. For the kurtosis to exist, we require ''m'' > 5/2. Then the mean and skewness exist and are both identically zero. Setting ''a''2 = 2''m'' − 3 makes the variance equal to unity. Then the only free parameter is ''m'', which controls the fourth moment (and cumulant) and hence the kurtosis. One can reparameterize with m = 5/2 + 3/\gamma_2, where \gamma_2 is the excess kurtosis as defined above. This yields a one-parameter leptokurtic family with zero mean, unit variance, zero skewness, and arbitrary non-negative excess kurtosis. The reparameterized density is

: g(x; \gamma_2) = f\left(x;\; a = \sqrt{2 + \frac{6}{\gamma_2}},\; m = \frac{5}{2} + \frac{3}{\gamma_2}\right). \!

In the limit as \gamma_2 \to \infty one obtains the density

: g(x) = 3\left(2 + x^2\right)^{-5/2}, \!

which is shown as the red curve in the images on the right. In the other direction, as \gamma_2 \to 0 one obtains the standard normal density as the limiting distribution, shown as the black curve. In the images on the right, the blue curve represents the density x \mapsto g(x; 2) with excess kurtosis of 2. The top image shows that leptokurtic densities in this family have a higher peak than the mesokurtic normal density, although this conclusion is only valid for this select family of distributions. The comparatively fatter tails of the leptokurtic densities are illustrated in the second image, which plots the natural logarithm of the Pearson type VII densities: the black curve is the logarithm of the standard normal density, which is a parabola. One can see that the normal density allocates little probability mass to the regions far from the mean ("has thin tails"), compared with the blue curve of the leptokurtic Pearson type VII density with excess kurtosis of 2. Between the blue curve and the black are other Pearson type VII densities with ''γ''2 = 1, 1/2, 1/4, 1/8, and 1/16. The red curve again shows the upper limit of the Pearson type VII family, with \gamma_2 = \infty (which, strictly speaking, means that the fourth moment does not exist). The red curve decreases the slowest as one moves outward from the origin ("has fat tails").
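A minimal sketch of the reparameterized family (function names are illustrative) confirms numerically that g(x; γ₂) has unit variance and excess kurtosis γ₂:

```python
import math

def pearson7_pdf(a, m):
    """Pearson type VII density with scale a and shape m, as a function of x."""
    c = math.gamma(m) / (a * math.sqrt(math.pi) * math.gamma(m - 0.5))
    return lambda x: c * (1 + (x / a) ** 2) ** (-m)

def reparameterized(gamma2):
    """Family member with zero mean, unit variance, excess kurtosis gamma2."""
    m = 2.5 + 3.0 / gamma2
    return pearson7_pdf(math.sqrt(2 * m - 3), m)   # a^2 = 2m - 3 gives unit variance

# Midpoint-rule check that the variance is 1 and the excess kurtosis is 2.
g = reparameterized(2.0)
steps, lo, hi = 400_000, -200.0, 200.0
h = (hi - lo) / steps
m2 = m4 = 0.0
for k in range(steps):
    x = lo + (k + 0.5) * h
    p = g(x) * h
    m2 += x * x * p
    m4 += x ** 4 * p
print(m2, m4 / m2 ** 2 - 3)  # ~ 1 and ~ 2
```

The wide truncation range is needed because, for γ₂ = 2 (shape ''m'' = 4), the integrand of the fourth moment decays only like ''x''⁻⁴.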


Other well-known distributions

Several well-known, unimodal, and symmetric distributions from different parametric families are compared here. Each has a mean and skewness of zero. The parameters have been chosen to result in a variance equal to 1 in each case. The images on the right show curves for the following seven densities, on a linear scale and logarithmic scale:
* D: Laplace distribution, also known as the double exponential distribution, red curve (two straight lines in the log-scale plot), excess kurtosis = 3
* S: hyperbolic secant distribution, orange curve, excess kurtosis = 2
* L: logistic distribution, green curve, excess kurtosis = 1.2
* N: normal distribution, black curve (inverted parabola in the log-scale plot), excess kurtosis = 0
* C: raised cosine distribution, cyan curve, excess kurtosis = −0.593762...
* W: Wigner semicircle distribution, blue curve, excess kurtosis = −1
* U: uniform distribution, magenta curve (shown for clarity as a rectangle in both images), excess kurtosis = −1.2.
Note that in these cases the platykurtic densities have bounded support, whereas the densities with positive or zero excess kurtosis are supported on the whole real line.

One cannot infer that high or low kurtosis distributions have the characteristics indicated by these examples:
* there exist platykurtic densities with infinite support, e.g., exponential power distributions with sufficiently large shape parameter ''b'';
* there exist leptokurtic densities with finite support, e.g., a distribution that is uniform between −3 and −0.3, between −0.3 and 0.3, and between 0.3 and 3, with the same density in the (−3, −0.3) and (0.3, 3) intervals, but with 20 times more density in the (−0.3, 0.3) interval;
* there exist platykurtic densities with infinite peakedness, e.g., an equal mixture of the beta distribution with parameters 0.5 and 1 with its reflection about 0.0;
* there exist leptokurtic densities that appear flat-topped, e.g., a mixture of a distribution that is uniform between −1 and 1 with a Student's t-distribution with 4.0000001 degrees of freedom, with mixing probabilities 0.999 and 0.001.
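The leptokurtic three-interval uniform mixture can be checked exactly, because its moments are integrals of polynomials over intervals (the helper name is illustrative):

```python
def poly_integral(a, b, k):
    """Integral of x**k over [a, b]."""
    return (b ** (k + 1) - a ** (k + 1)) / (k + 1)

# Height c on (-3, -0.3) and (0.3, 3); height 20c on (-0.3, 0.3).
c = 1 / (2 * 2.7 + 20 * 0.6)                 # normalization: total mass 1
pieces = [(-3.0, -0.3, c), (-0.3, 0.3, 20 * c), (0.3, 3.0, c)]

m2 = sum(h * poly_integral(a, b, 2) for a, b, h in pieces)
m4 = sum(h * poly_integral(a, b, 4) for a, b, h in pieces)
print(m4 / m2 ** 2 - 3)  # positive (~2.03): leptokurtic despite bounded support
```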


Sample kurtosis


Definitions


A natural but biased estimator

For a sample of ''n'' values, a method-of-moments estimator of the population excess kurtosis can be defined as

: g_2 = \frac{m_4}{m_2^2} - 3 = \frac{\tfrac{1}{n} \sum_{i=1}^n (x_i - \overline{x})^4}{\left[\tfrac{1}{n} \sum_{i=1}^n (x_i - \overline{x})^2\right]^2} - 3,

where ''m''4 is the fourth sample moment about the mean, ''m''2 is the second sample moment about the mean (that is, the sample variance), ''x''''i'' is the ''i''th value, and \overline{x} is the sample mean. This formula has the simpler representation

: g_2 = \frac{1}{n} \sum_{i=1}^n z_i^4 - 3,

where the z_i values are the standardized data values using the standard deviation defined using ''n'' rather than ''n'' − 1 in the denominator.

For example, suppose the data values are 0, 3, 4, 1, 2, 3, 0, 2, 1, 3, 2, 0, 2, 2, 3, 2, 5, 2, 3, 999. Then the z_i values are −0.239, −0.225, −0.221, −0.234, −0.230, −0.225, −0.239, −0.230, −0.234, −0.225, −0.230, −0.239, −0.230, −0.230, −0.225, −0.230, −0.216, −0.230, −0.225, 4.359, and the z_i^4 values are 0.003, 0.003, 0.002, 0.003, 0.003, 0.003, 0.003, 0.003, 0.003, 0.003, 0.003, 0.003, 0.003, 0.003, 0.003, 0.003, 0.002, 0.003, 0.003, 360.976. The average of these values is 18.05, and the excess kurtosis is thus 18.05 − 3 = 15.05. This example makes it clear that data near the "middle" or "peak" of the distribution do not contribute to the kurtosis statistic, hence kurtosis does not measure "peakedness". It is simply a measure of the outlier, 999 in this example.
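The worked example can be reproduced in a few lines (the function name is illustrative):

```python
def sample_excess_kurtosis(xs):
    """Method-of-moments estimator g2 (moments taken with denominator n)."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    return m4 / m2 ** 2 - 3

data = [0, 3, 4, 1, 2, 3, 0, 2, 1, 3, 2, 0, 2, 2, 3, 2, 5, 2, 3, 999]
print(round(sample_excess_kurtosis(data), 2))  # ~ 15.05, driven entirely by 999
```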


Standard unbiased estimator

Given a subset of samples from a population, the sample excess kurtosis g_2 above is a biased estimator of the population excess kurtosis. An alternative estimator of the population excess kurtosis, which is unbiased in random samples of a normal distribution, is defined as follows:

: \begin{align} G_2 & = \frac{k_4}{k_2^2} \\ & = \frac{n^2\,\left[(n+1)\,m_4 - 3\,(n-1)\,m_2^2\right]}{(n-1)(n-2)(n-3)} \; \frac{(n-1)^2}{n^2\,m_2^2} \\ & = \frac{n-1}{(n-2)(n-3)} \left[(n+1)\,\frac{m_4}{m_2^2} - 3\,(n-1) \right] \\ & = \frac{n-1}{(n-2)(n-3)} \left[(n+1)\,g_2 + 6 \right] \\ & = \frac{(n+1)\,n\,(n-1)}{(n-2)\,(n-3)} \; \frac{\sum_{i=1}^n (x_i - \bar{x})^4}{\left(\sum_{i=1}^n (x_i - \bar{x})^2\right)^2} - 3\,\frac{(n-1)^2}{(n-2)\,(n-3)}, \end{align}

where ''k''4 is the unique symmetric unbiased estimator of the fourth cumulant, ''k''2 is the unbiased estimate of the second cumulant (identical to the unbiased estimate of the sample variance), ''m''4 is the fourth sample moment about the mean, ''m''2 is the second sample moment about the mean, ''x''''i'' is the ''i''th value, and \bar{x} is the sample mean. This adjusted Fisher–Pearson standardized moment coefficient G_2 is the version found in Excel and several statistical packages including Minitab, SAS, and SPSS (Doane and Seward, 2011, J Stat Educ 19(2)). Unfortunately, in nonnormal samples G_2 is itself generally biased.
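A sketch of both estimators on a hypothetical small dataset, using the identity G₂ = (n−1)/((n−2)(n−3)) · [(n+1)g₂ + 6]:

```python
def g2(xs):
    """Biased method-of-moments sample excess kurtosis."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    return m4 / m2 ** 2 - 3

def G2(xs):
    """Adjusted estimator, unbiased under normality (the Excel/Minitab/SAS/SPSS form)."""
    n = len(xs)
    return (n - 1) / ((n - 2) * (n - 3)) * ((n + 1) * g2(xs) + 6)

data = [2.0, 4.0, 4.5, 5.0, 5.5, 6.0, 8.0, 9.0]
print(g2(data), G2(data))  # the adjustment matters most for small n
```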


Upper bound

An upper bound for the sample kurtosis of ''n'' (''n'' > 2) real numbers is

: g_2 \le \frac{1}{2} \frac{n-3}{n-2}\, g_1^2 + \frac{n}{2} - 3,

where g_1 = m_3 / m_2^{3/2} is the corresponding sample skewness.
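The bound can be checked on a small dataset (here 1, ..., 10, for which the sample skewness g₁ is exactly zero, so the bound reduces to n/2 − 3):

```python
def sample_stats(xs):
    """Sample size and second, third, fourth central moments (denominator n)."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    return n, m2, m3, m4

n, m2, m3, m4 = sample_stats(list(range(1, 11)))   # symmetric data: skewness 0
g1 = m3 / m2 ** 1.5
g2 = m4 / m2 ** 2 - 3
bound = 0.5 * (n - 3) / (n - 2) * g1 ** 2 + n / 2 - 3
print(g2, bound)  # g2 is well below the bound here
```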


Variance under normality

The variance of the sample kurtosis of a sample of size ''n'' from the normal distribution is

: \operatorname{var}(g_2) = \frac{24\,n\,(n-1)^2}{(n-3)(n-2)(n+3)(n+5)}.

Stated differently, under the assumption that the underlying random variable X is normally distributed, it can be shown that \sqrt{n}\, g_2 \xrightarrow{d} \mathcal{N}(0, 24).
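The exact finite-sample variance can be compared with its asymptotic approximation 24/''n'' (a sketch; the function name is illustrative):

```python
def var_g2_normal(n):
    """Exact variance of the sample excess kurtosis g2 under normality."""
    return 24.0 * n * (n - 1) ** 2 / ((n - 3) * (n - 2) * (n + 3) * (n + 5))

for n in (10, 100, 10_000):
    print(n, var_g2_normal(n), 24 / n)  # the exact value approaches 24/n
```

For small ''n'' the exact variance is noticeably smaller than 24/''n'', so the asymptotic normal approximation should be used with care.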


Applications

The sample kurtosis is a useful measure of whether there is a problem with outliers in a data set. Larger kurtosis indicates a more serious outlier problem, and may lead the researcher to choose alternative statistical methods. D'Agostino's K-squared test is a goodness-of-fit normality test based on a combination of the sample skewness and sample kurtosis, as is the Jarque–Bera test for normality. For non-normal samples, the variance of the sample variance depends on the kurtosis; for details, please see variance.

Pearson's definition of kurtosis is used as an indicator of intermittency in turbulence. It is also used in magnetic resonance imaging to quantify non-Gaussian diffusion.

A concrete example is the following lemma by He, Zhang, and Zhang: Assume a random variable X has expectation \operatorname{E}[X] = \mu, variance \operatorname{E}\left[(X - \mu)^2\right] = \sigma^2 and kurtosis \kappa = \tfrac{1}{\sigma^4}\operatorname{E}\left[(X - \mu)^4\right]. Assume we sample n = \Theta\left(\kappa\log\tfrac{1}{\delta}\right) many independent copies. Then

: \Pr\left[\max_{i=1}^n X_i \le \mu\right] \le \delta \quad\text{and}\quad \Pr\left[\min_{i=1}^n X_i \ge \mu\right] \le \delta .

This shows that with \Theta\left(\kappa\log\tfrac{1}{\delta}\right) many samples, we will see one that is above the expectation with probability at least 1-\delta. In other words: if the kurtosis is large, we might see many values either all below or all above the mean.


Kurtosis convergence

When band-pass filters are applied to digital images, the kurtosis values of the filtered outputs tend to be uniform, independent of the range of the filter. This behavior, termed ''kurtosis convergence'', can be used to detect image splicing in forensic analysis.


Other measures

A different measure of "kurtosis" is provided by using L-moments instead of the ordinary moments.


See also

* Kurtosis risk
* Maximum entropy probability distribution


References


Further reading


* Alternative source (comparison of kurtosis estimators)


External links

* Kurtosis calculator
* Free Online Software (Calculator): computes various types of skewness and kurtosis statistics for any dataset (includes small and large sample tests)
* Celebrating 100 years of Kurtosis: a history of the topic, with different measures of kurtosis.