In probability theory and statistics, kurtosis (from Greek κυρτός, ''kyrtos'' or ''kurtos'', meaning "curved, arching") is a measure of the "tailedness" of the probability distribution of a real-valued random variable. Like skewness, kurtosis describes a particular aspect of a probability distribution. There are different ways to quantify kurtosis for a theoretical distribution, and there are corresponding ways of estimating it using a sample from a population. Different measures of kurtosis may have different interpretations.

The standard measure of a distribution's kurtosis, originating with Karl Pearson, is a scaled version of the fourth moment of the distribution. This number is related to the tails of the distribution, not its peak; hence, the sometimes-seen characterization of kurtosis as "peakedness" is incorrect. For this measure, higher kurtosis corresponds to greater extremity of deviations (or outliers), not to the configuration of data near the mean.

It is common to compare the excess kurtosis (defined below) of a distribution to 0, which is the excess kurtosis of any univariate normal distribution. Distributions with negative excess kurtosis are said to be ''platykurtic'', although this does not imply the distribution is "flat-topped" as is sometimes stated. Rather, it means the distribution produces fewer and/or less extreme outliers than the normal distribution. An example of a platykurtic distribution is the uniform distribution, which does not produce outliers. Distributions with positive excess kurtosis are said to be ''leptokurtic''. An example of a leptokurtic distribution is the Laplace distribution, which has tails that asymptotically approach zero more slowly than a Gaussian, and therefore produces more outliers than the normal distribution.

It is common practice to use excess kurtosis, which is defined as Pearson's kurtosis minus 3, to provide a simple comparison to the normal distribution. Some authors and software packages use "kurtosis" by itself to refer to the excess kurtosis. For clarity and generality, however, this article explicitly indicates where non-excess kurtosis is meant.

Alternative measures of kurtosis are: the L-kurtosis, which is a scaled version of the fourth L-moment; and measures based on four population or sample quantiles. These are analogous to the alternative measures of skewness that are not based on ordinary moments.


Pearson moments

The kurtosis is the fourth standardized moment, defined as
: \operatorname{Kurt}[X] = \operatorname{E}\left[\left(\frac{X - \mu}{\sigma}\right)^4\right] = \frac{\operatorname{E}\left[(X - \mu)^4\right]}{\left(\operatorname{E}\left[(X - \mu)^2\right]\right)^2} = \frac{\mu_4}{\sigma^4},
where ''μ''4 is the fourth central moment and ''σ'' is the standard deviation. Several letters are used in the literature to denote the kurtosis. A very common choice is ''κ'', which is fine as long as it is clear that it does not refer to a cumulant. Other choices include ''γ''2, to be similar to the notation for skewness, although sometimes this is instead reserved for the excess kurtosis.

The kurtosis is bounded below by the squared skewness plus 1:
: \frac{\mu_4}{\sigma^4} \geq \left(\frac{\mu_3}{\sigma^3}\right)^2 + 1,
where ''μ''3 is the third central moment. The lower bound is realized by the Bernoulli distribution. There is no upper limit to the kurtosis of a general probability distribution, and it may be infinite.

A reason why some authors favor the excess kurtosis is that cumulants are extensive. Formulas related to the extensive property are more naturally expressed in terms of the excess kurtosis. For example, let ''X''1, ..., ''X''''n'' be independent random variables for which the fourth moment exists, and let ''Y'' be the random variable defined by the sum of the ''X''''i''. The excess kurtosis of ''Y'' is
: \operatorname{Kurt}[Y] - 3 = \frac{1}{\left( \sum_{j=1}^n \sigma_j^2 \right)^2} \sum_{i=1}^n \sigma_i^4 \cdot \left(\operatorname{Kurt}\left[X_i\right] - 3\right),
where \sigma_i is the standard deviation of X_i. In particular, if all of the ''X''''i'' have the same variance, then this simplifies to
: \operatorname{Kurt}[Y] - 3 = \frac{1}{n^2} \sum_{i=1}^n \left(\operatorname{Kurt}\left[X_i\right] - 3\right).

The reason not to subtract 3 is that the bare fourth moment better generalizes to multivariate distributions, especially when independence is not assumed. The cokurtosis between pairs of variables is an order-four tensor. For a bivariate normal distribution, the cokurtosis tensor has off-diagonal terms that are neither 0 nor 3 in general, so attempting to "correct" for an excess becomes confusing. It is true, however, that the joint cumulants of degree greater than two for any multivariate normal distribution are zero.

For two random variables, ''X'' and ''Y'', not necessarily independent, the kurtosis of the sum, ''X'' + ''Y'', is
: \begin{align} \operatorname{Kurt}[X+Y] = \frac{1}{\sigma_{X+Y}^4} \big( & \sigma_X^4\operatorname{Kurt}[X] + 4\sigma_X^3\sigma_Y\operatorname{Cokurt}[X,X,X,Y] \\ & + 6\sigma_X^2\sigma_Y^2\operatorname{Cokurt}[X,X,Y,Y] \\ & + 4\sigma_X\sigma_Y^3\operatorname{Cokurt}[X,Y,Y,Y] + \sigma_Y^4\operatorname{Kurt}[Y] \big). \end{align}
Note that the fourth-power binomial coefficients (1, 4, 6, 4, 1) appear in the above equation.
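As an illustrative check (added here, not part of the original text), the following Python sketch compares the sample excess kurtosis of a sum of independent exponential variables with the value predicted by the additivity formula above; the variable names and the use of NumPy/SciPy are assumptions.

```python
import numpy as np
from scipy.stats import kurtosis  # returns *excess* kurtosis by default

rng = np.random.default_rng(0)
n_vars, n_samples = 3, 1_000_000

# Independent exponential components with different scales;
# an exponential distribution has excess kurtosis 6 and variance scale**2.
scales = np.array([1.0, 2.0, 0.5])
X = rng.exponential(scale=scales, size=(n_samples, n_vars))
Y = X.sum(axis=1)

# Predicted: Kurt[Y] - 3 = sum(sigma_i^4 * 6) / (sum(sigma_i^2))^2
sigma2 = scales**2
predicted = (sigma2**2 * 6.0).sum() / sigma2.sum()**2
print("predicted:", predicted)
print("observed: ", kurtosis(Y))  # close to the prediction, up to sampling noise
```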


Interpretation

The exact interpretation of the Pearson measure of kurtosis (or excess kurtosis) used to be disputed, but is now settled. As Westfall notes in 2014, ''"...its only unambiguous interpretation is in terms of tail extremity; i.e., either existing outliers (for the sample kurtosis) or propensity to produce outliers (for the kurtosis of a probability distribution)."'' The logic is simple: kurtosis is the average (or expected value) of the standardized data raised to the fourth power. Standardized values that are less than 1 in absolute value (i.e., data within one standard deviation of the mean, where the "peak" would be) contribute virtually nothing to kurtosis, since raising a number that is less than 1 in absolute value to the fourth power brings it closer to zero. The only data values (observed or observable) that contribute to kurtosis in any meaningful way are those outside the region of the peak; i.e., the outliers. Therefore, kurtosis measures outliers only; it measures nothing about the "peak".

Many incorrect interpretations of kurtosis that involve notions of peakedness have been given. One is that kurtosis measures both the "peakedness" of the distribution and the heaviness of its tail. Various other incorrect interpretations have been suggested, such as "lack of shoulders" (where the "shoulder" is defined vaguely as the area between the peak and the tail, or more specifically as the area about one standard deviation from the mean) or "bimodality". Balanda and MacGillivray assert that the standard definition of kurtosis "is a poor measure of the kurtosis, peakedness, or tail weight of a distribution" and instead propose to "define kurtosis vaguely as the location- and scale-free movement of probability mass from the shoulders of a distribution into its center and tails".


Moors' interpretation

In 1986 Moors gave an interpretation of kurtosis. Let
: Z = \frac{X - \mu}{\sigma},
where ''X'' is a random variable, ''μ'' is the mean and ''σ'' is the standard deviation. Now by definition of the kurtosis \kappa, and by the well-known identity \operatorname{E}\left[V^2\right] = \operatorname{var}[V] + \left[\operatorname{E}[V]\right]^2,
: \kappa = \operatorname{E}\left[Z^4\right] = \operatorname{var}\left[Z^2\right] + \left[\operatorname{E}\left[Z^2\right]\right]^2 = \operatorname{var}\left[Z^2\right] + \left[\operatorname{var}[Z]\right]^2 = \operatorname{var}\left[Z^2\right] + 1.
The kurtosis can now be seen as a measure of the dispersion of ''Z''2 around its expectation. Alternatively it can be seen to be a measure of the dispersion of ''Z'' around +1 and −1. ''κ'' attains its minimal value in a symmetric two-point distribution. In terms of the original variable ''X'', the kurtosis is a measure of the dispersion of ''X'' around the two values ''μ'' ± ''σ''.

High values of ''κ'' arise in two circumstances:
* where the probability mass is concentrated around the mean and the data-generating process produces occasional values far from the mean,
* where the probability mass is concentrated in the tails of the distribution.
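A quick numerical illustration (added here; the use of NumPy and the Laplace example are assumptions) of the identity \kappa = \operatorname{var}\left[Z^2\right] + 1 for standardized data:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.laplace(size=1_000_000)      # Laplace: kurtosis 6, excess kurtosis 3
z = (x - x.mean()) / x.std()         # standardize (population-style, n denominator)

kappa = np.mean(z**4)                # Pearson (non-excess) kurtosis
print(kappa, np.var(z**2) + 1)       # identical up to floating-point rounding
```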


Excess kurtosis

The ''excess kurtosis'' is defined as kurtosis minus 3. There are 3 distinct regimes as described below.


Mesokurtic

Distributions with zero excess kurtosis are called mesokurtic, or mesokurtotic. The most prominent example of a mesokurtic distribution is the normal distribution family, regardless of the values of its parameters. A few other well-known distributions can be mesokurtic, depending on parameter values: for example, the binomial distribution is mesokurtic for p = 1/2 \pm \sqrt{1/12}.
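As a brief derivation (added here for clarity), these values of ''p'' follow from the excess kurtosis of the binomial distribution, \gamma_2 = \frac{1 - 6p(1-p)}{np(1-p)}, which vanishes exactly when
: p(1-p) = \tfrac{1}{6}, \quad\text{i.e.}\quad p = \tfrac{1}{2} \pm \sqrt{\tfrac{1}{4} - \tfrac{1}{6}} = \tfrac{1}{2} \pm \sqrt{\tfrac{1}{12}}.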


Leptokurtic

A distribution with positive excess kurtosis is called leptokurtic, or leptokurtotic. "Lepto-" means "slender". In terms of shape, a leptokurtic distribution has ''fatter tails''. Examples of leptokurtic distributions include the Student's t-distribution, Rayleigh distribution, Laplace distribution, exponential distribution, Poisson distribution and the logistic distribution. Such distributions are sometimes termed ''super-Gaussian''.


Platykurtic

A distribution with negative excess kurtosis is called platykurtic, or platykurtotic. "Platy-" means "broad". In terms of shape, a platykurtic distribution has ''thinner tails''. Examples of platykurtic distributions include the continuous and discrete uniform distributions, and the raised cosine distribution. The most platykurtic distribution of all is the Bernoulli distribution with ''p'' = 1/2 (for example, the number of times one obtains "heads" when flipping a coin once, i.e., a coin toss), for which the excess kurtosis is −2.


Graphical examples


The Pearson type VII family

The effects of kurtosis are illustrated using a parametric family of distributions whose kurtosis can be adjusted while their lower-order moments and cumulants remain constant. Consider the Pearson type VII family, which is a special case of the Pearson type IV family restricted to symmetric densities. The probability density function is given by
: f(x; a, m) = \frac{\Gamma(m)}{a\,\sqrt{\pi}\,\Gamma(m - 1/2)} \left[1 + \left(\frac{x}{a}\right)^2 \right]^{-m},
where ''a'' is a scale parameter and ''m'' is a shape parameter.

All densities in this family are symmetric. The ''k''th moment exists provided ''m'' > (''k'' + 1)/2. For the kurtosis to exist, we require ''m'' > 5/2. Then the mean and skewness exist and are both identically zero. Setting ''a''2 = 2''m'' − 3 makes the variance equal to unity. Then the only free parameter is ''m'', which controls the fourth moment (and cumulant) and hence the kurtosis. One can reparameterize with m = 5/2 + 3/\gamma_2, where \gamma_2 is the excess kurtosis as defined above. This yields a one-parameter leptokurtic family with zero mean, unit variance, zero skewness, and arbitrary non-negative excess kurtosis. The reparameterized density is
: g(x; \gamma_2) = f\left(x;\; a = \sqrt{2 + \frac{6}{\gamma_2}},\; m = \frac{5}{2} + \frac{3}{\gamma_2}\right).
In the limit as \gamma_2 \to \infty one obtains the density
: g(x) = 3\left(2 + x^2\right)^{-5/2},
which is shown as the red curve in the images on the right. In the other direction, as \gamma_2 \to 0 one obtains the standard normal density as the limiting distribution, shown as the black curve.

In the images on the right, the blue curve represents the density x \mapsto g(x; 2) with excess kurtosis of 2. The top image shows that leptokurtic densities in this family have a higher peak than the mesokurtic normal density, although this conclusion is only valid for this select family of distributions. The comparatively fatter tails of the leptokurtic densities are illustrated in the second image, which plots the natural logarithm of the Pearson type VII densities: the black curve is the logarithm of the standard normal density, which is a parabola. One can see that the normal density allocates little probability mass to the regions far from the mean ("has thin tails"), compared with the blue curve of the leptokurtic Pearson type VII density with excess kurtosis of 2. Between the blue curve and the black are other Pearson type VII densities with ''γ''2 = 1, 1/2, 1/4, 1/8, and 1/16. The red curve again shows the upper limit of the Pearson type VII family, with \gamma_2 = \infty (which, strictly speaking, means that the fourth moment does not exist). The red curve decreases the slowest as one moves outward from the origin ("has fat tails").
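A short Python sketch (added here; the function names and SciPy usage are assumptions) of the reparameterized density g(x; \gamma_2) described above, with a numerical check that it has unit variance and excess kurtosis \gamma_2:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as Gamma

def pearson7(x, a, m):
    """Pearson type VII density f(x; a, m)."""
    return Gamma(m) / (a * np.sqrt(np.pi) * Gamma(m - 0.5)) * (1 + (x / a) ** 2) ** (-m)

def g(x, gamma2):
    """Zero-mean, unit-variance member with excess kurtosis gamma2 > 0."""
    m = 2.5 + 3.0 / gamma2
    a = np.sqrt(2.0 + 6.0 / gamma2)
    return pearson7(x, a, m)

gamma2 = 2.0
var, _ = quad(lambda x: x**2 * g(x, gamma2), -np.inf, np.inf)
m4, _ = quad(lambda x: x**4 * g(x, gamma2), -np.inf, np.inf)
print(var, m4 / var**2 - 3)   # approximately 1 and gamma2
```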


Other well-known distributions

Several well-known, unimodal, and symmetric distributions from different parametric families are compared here. Each has a mean and skewness of zero. The parameters have been chosen to result in a variance equal to 1 in each case. The images on the right show curves for the following seven densities, on a linear scale and logarithmic scale:
* D: Laplace distribution, also known as the double exponential distribution, red curve (two straight lines in the log-scale plot), excess kurtosis = 3
* S: hyperbolic secant distribution, orange curve, excess kurtosis = 2
* L: logistic distribution, green curve, excess kurtosis = 1.2
* N: normal distribution, black curve (inverted parabola in the log-scale plot), excess kurtosis = 0
* C: raised cosine distribution, cyan curve, excess kurtosis = −0.593762...
* W: Wigner semicircle distribution, blue curve, excess kurtosis = −1
* U: uniform distribution, magenta curve (shown for clarity as a rectangle in both images), excess kurtosis = −1.2.

Note that in these cases the platykurtic densities have bounded support, whereas the densities with positive or zero excess kurtosis are supported on the whole real line. One cannot infer that high or low kurtosis distributions have the characteristics indicated by these examples. There exist platykurtic densities with infinite support,
* e.g., exponential power distributions with sufficiently large shape parameter ''b'',
and there exist leptokurtic densities with finite support,
* e.g., a distribution that is uniform between −3 and −0.3, between −0.3 and 0.3, and between 0.3 and 3, with the same density in the (−3, −0.3) and (0.3, 3) intervals, but with 20 times more density in the (−0.3, 0.3) interval (a numerical check of this example is sketched below).
Also, there exist platykurtic densities with infinite peakedness,
* e.g., an equal mixture of the beta distribution with parameters 0.5 and 1 with its reflection about 0.0,
and there exist leptokurtic densities that appear flat-topped,
* e.g., a mixture of a distribution that is uniform between −1 and 1 with a T(4.0000001) Student's t-distribution, with mixing probabilities 0.999 and 0.001.
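The following Python sketch (added illustration, not from the original text; the helper names are hypothetical) verifies numerically that the bounded three-interval density mentioned above is indeed leptokurtic:

```python
# Density: constant height c on (-3, -0.3) and (0.3, 3), and 20*c on (-0.3, 0.3).
c = 1.0 / (2 * 2.7 + 20 * 0.6)                 # normalize so the density integrates to 1

def interval_moment(k, lo, hi, height):
    """Integral of height * x**k over [lo, hi]."""
    return height * (hi ** (k + 1) - lo ** (k + 1)) / (k + 1)

def moment(k):
    return (interval_moment(k, -3.0, -0.3, c)
            + interval_moment(k, -0.3, 0.3, 20 * c)
            + interval_moment(k, 0.3, 3.0, c))

m2, m4 = moment(2), moment(4)
print(m4 / m2**2 - 3)                          # about +2.0: leptokurtic despite bounded support
```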


Sample kurtosis


Definitions


A natural but biased estimator

For a sample of ''n'' values, a method of moments estimator of the population excess kurtosis can be defined as
: g_2 = \frac{m_4}{m_2^2} - 3 = \frac{\tfrac{1}{n} \sum_{i=1}^n (x_i - \overline{x})^4}{\left[\tfrac{1}{n} \sum_{i=1}^n (x_i - \overline{x})^2\right]^2} - 3,
where ''m''4 is the fourth sample moment about the mean, ''m''2 is the second sample moment about the mean (that is, the sample variance), ''x''''i'' is the ''i''th value, and \overline{x} is the sample mean.

This formula has the simpler representation,
: g_2 = \frac{1}{n} \sum_{i=1}^n z_i^4 - 3,
where the z_i values are the standardized data values using the standard deviation defined using ''n'' rather than ''n'' − 1 in the denominator.

For example, suppose the data values are 0, 3, 4, 1, 2, 3, 0, 2, 1, 3, 2, 0, 2, 2, 3, 2, 5, 2, 3, 999. Then the z_i values are
: −0.239, −0.225, −0.221, −0.234, −0.230, −0.225, −0.239, −0.230, −0.234, −0.225, −0.230, −0.239, −0.230, −0.230, −0.225, −0.230, −0.216, −0.230, −0.225, 4.359
and the z_i^4 values are
: 0.003, 0.003, 0.002, 0.003, 0.003, 0.003, 0.003, 0.003, 0.003, 0.003, 0.003, 0.003, 0.003, 0.003, 0.003, 0.003, 0.002, 0.003, 0.003, 360.976.
The average of these values is 18.05 and the excess kurtosis is thus 18.05 − 3 = 15.05. This example makes it clear that data near the "middle" or "peak" of the distribution do not contribute to the kurtosis statistic, hence kurtosis does not measure "peakedness". It is simply a measure of the outlier, 999 in this example.
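The worked example can be reproduced directly (a minimal sketch; NumPy usage is an assumption):

```python
import numpy as np

x = np.array([0, 3, 4, 1, 2, 3, 0, 2, 1, 3, 2, 0, 2, 2, 3, 2, 5, 2, 3, 999], dtype=float)
z = (x - x.mean()) / x.std()    # np.std uses the n (not n - 1) denominator by default
g2 = np.mean(z**4) - 3          # sample excess kurtosis g_2
print(round(g2, 2))             # about 15.05, driven almost entirely by the outlier 999
```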


Standard unbiased estimator

Given a sub-set of samples from a population, the sample excess kurtosis g_2 above is a biased estimator of the population excess kurtosis. An alternative estimator of the population excess kurtosis, which is unbiased in random samples of a normal distribution, is defined as follows:
: \begin{align} G_2 & = \frac{k_4}{k_2^2} \\ & = \frac{n^2\,\left[(n+1)\,m_4 - 3\,(n-1)\,m_2^2\right]}{(n-1)(n-2)(n-3)} \; \frac{(n-1)^2}{n^2\,m_2^2} \\ & = \frac{n-1}{(n-2)(n-3)} \left[(n+1)\,\frac{m_4}{m_2^2} - 3\,(n-1) \right] \\ & = \frac{n-1}{(n-2)(n-3)} \left[(n+1)\,g_2 + 6 \right] \\ & = \frac{(n+1)\,n\,(n-1)}{(n-2)\,(n-3)} \; \frac{\sum_{i=1}^n (x_i - \bar{x})^4}{\left(\sum_{i=1}^n (x_i - \bar{x})^2\right)^2} - 3\,\frac{(n-1)^2}{(n-2)\,(n-3)} \\ & = \frac{(n+1)\,n}{(n-1)\,(n-2)\,(n-3)} \; \frac{\sum_{i=1}^n (x_i - \bar{x})^4}{k_2^2} - 3\,\frac{(n-1)^2}{(n-2)\,(n-3)} \end{align}
where ''k''4 is the unique symmetric unbiased estimator of the fourth cumulant, ''k''2 is the unbiased estimate of the second cumulant (identical to the unbiased estimate of the sample variance), ''m''4 is the fourth sample moment about the mean, ''m''2 is the second sample moment about the mean, ''x''''i'' is the ''i''th value, and \bar{x} is the sample mean. This adjusted Fisher–Pearson standardized moment coefficient G_2 is the version found in Excel and several statistical packages including Minitab, SAS, and SPSS (Doane DP, Seward LE (2011), ''J Stat Educ'' 19(2)). Unfortunately, in nonnormal samples G_2 is itself generally biased.
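A minimal sketch (added here; NumPy usage is an assumption) of G_2 computed from g_2 via the identity above; for cross-checking, `scipy.stats.kurtosis(data, fisher=True, bias=False)` should return the same bias-adjusted value.

```python
import numpy as np

def sample_excess_kurtosis(x, unbiased=True):
    x = np.asarray(x, dtype=float)
    n = x.size
    z = (x - x.mean()) / x.std()        # standardized with the n denominator
    g2 = np.mean(z**4) - 3              # biased estimator g_2
    if not unbiased:
        return g2
    return (n - 1) / ((n - 2) * (n - 3)) * ((n + 1) * g2 + 6)   # G_2

data = [0, 3, 4, 1, 2, 3, 0, 2, 1, 3, 2, 0, 2, 2, 3, 2, 5, 2, 3, 999]
print(sample_excess_kurtosis(data, unbiased=False), sample_excess_kurtosis(data))
```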


Upper bound

An upper bound for the sample kurtosis of ''n'' (''n'' > 2) real numbers is
: g_2 \le \frac{1}{2} \frac{n-3}{n-2}\, g_1^2 + \frac{n}{2} - 3,
where g_1 = m_3/m_2^{3/2} is the corresponding sample skewness.
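A quick numerical illustration (added, not from the original text) using the extreme configuration of ''n'' − 1 identical values plus one outlier, for which the bound is attained:

```python
import numpy as np

n = 10
x = np.array([-1.0] * (n - 1) + [n - 1.0])      # mean zero, one extreme point
z = (x - x.mean()) / x.std()
g1 = np.mean(z**3)                               # sample skewness
g2 = np.mean(z**4) - 3                           # sample excess kurtosis
bound = 0.5 * (n - 3) / (n - 2) * g1**2 + n / 2 - 3
print(g2, bound)                                 # equal up to rounding
```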


Variance under normality

The variance of the sample kurtosis of a sample of size ''n'' from the normal distribution is
: \operatorname{var}(g_2) = \frac{24n(n-2)(n-3)}{(n+1)^2(n+3)(n+5)}.
Stated differently, under the assumption that the underlying random variable X is normally distributed, it can be shown that \sqrt{n}\, g_2 \xrightarrow{d} \mathcal{N}(0, 24).
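A Monte Carlo sketch (added illustration; NumPy usage and the chosen sample sizes are assumptions) comparing the empirical variance of g_2 over many normal samples with the finite-sample formula above:

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps = 30, 200_000

samples = rng.standard_normal((reps, n))
z = (samples - samples.mean(axis=1, keepdims=True)) / samples.std(axis=1, keepdims=True)
g2 = (z**4).mean(axis=1) - 3

theory = 24 * n * (n - 2) * (n - 3) / ((n + 1)**2 * (n + 3) * (n + 5))
print(g2.var(), theory)          # the two values should be close
```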


Applications

The sample kurtosis is a useful measure of whether there is a problem with outliers in a data set. Larger kurtosis indicates a more serious outlier problem, and may lead the researcher to choose alternative statistical methods.
D'Agostino's K-squared test is a goodness-of-fit normality test based on a combination of the sample skewness and sample kurtosis, as is the Jarque–Bera test for normality.

For non-normal samples, the variance of the sample variance depends on the kurtosis; for details, please see variance.

Pearson's definition of kurtosis is used as an indicator of intermittency in turbulence. It is also used in magnetic resonance imaging to quantify non-Gaussian diffusion.

A concrete example is the following lemma by He, Zhang, and Zhang: Assume a random variable X has expectation \operatorname{E}[X] = \mu, variance \operatorname{E}\left[(X - \mu)^2\right] = \sigma^2 and kurtosis \kappa = \tfrac{1}{\sigma^4}\operatorname{E}\left[(X - \mu)^4\right]. Assume we sample n = \Theta\!\left(\kappa\log\tfrac{1}{\delta}\right) many independent copies. Then
: \Pr\left[\max_{i=1}^n X_i \le \mu\right] \le \delta \quad\text{and}\quad \Pr\left[\min_{i=1}^n X_i \ge \mu\right] \le \delta.
This shows that with \Theta\!\left(\kappa\log\tfrac{1}{\delta}\right) many samples, we will see one that is above the expectation with probability at least 1-\delta. In other words: if the kurtosis is large, we may see many values either all below or all above the mean.
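As an added illustration (not from the original text; the rare-jump distribution and parameter values are assumptions), a short Monte Carlo shows the phenomenon for a high-kurtosis distribution, where moderately sized samples often contain no value above the mean:

```python
import numpy as np

rng = np.random.default_rng(3)
p = 0.001                                     # X = 1000 with probability p, else 0
kappa = ((1 - p)**3 + p**3) / (p * (1 - p))   # kurtosis of this two-point law, about 1/p
mu = p * 1000                                 # the mean of X

n = 100                                       # far fewer than kappa * log(1/delta) samples
jumps = rng.random((10_000, n)) < p           # 10,000 independent samples of size n
frac_all_below = np.mean(jumps.max(axis=1) * 1000 <= mu)
print(kappa, frac_all_below)                  # most samples never exceed the mean
```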


Kurtosis convergence

Applying band-pass filters to digital images, kurtosis values tend to be uniform, independent of the range of the filter. This behavior, termed ''kurtosis convergence'', can be used to detect image splicing in forensic analysis.


Other measures

A different measure of "kurtosis" is provided by using L-moments instead of the ordinary moments.


See also

* Kurtosis risk
* Maximum entropy probability distribution


References


Further reading


* Alternative source (comparison of kurtosis estimators)


External links

* Kurtosis calculator
* Free Online Software (Calculator): computes various types of skewness and kurtosis statistics for any dataset (includes small and large sample tests)


* Celebrating 100 years of Kurtosis: a history of the topic, with different measures of kurtosis.