In probability theory and statistics, skewness is a measure of the asymmetry of the probability distribution of a real-valued random variable about its mean. The skewness value can be positive, zero, negative, or undefined.

For a unimodal distribution, negative skew commonly indicates that the ''tail'' is on the left side of the distribution, and positive skew indicates that the tail is on the right. In cases where one tail is long but the other tail is fat, skewness does not obey a simple rule. For example, a zero value means that the tails on both sides of the mean balance out overall; this is the case for a symmetric distribution, but can also be true for an asymmetric distribution where one tail is long and thin, and the other is short but fat.


Introduction

Consider the two distributions in the figure just below. Within each graph, the values on the right side of the distribution taper differently from the values on the left side. These tapering sides are called ''tails'', and they provide a visual means to determine which of the two kinds of skewness a distribution has:

# ''Negative skew'': The left tail is longer; the mass of the distribution is concentrated on the right of the figure. The distribution is said to be ''left-skewed'', ''left-tailed'', or ''skewed to the left'', despite the fact that the curve itself appears to be skewed or leaning to the right; ''left'' instead refers to the left tail being drawn out and, often, the mean being skewed to the left of a typical center of the data. A left-skewed distribution usually appears as a ''right-leaning'' curve.
# ''Positive skew'': The right tail is longer; the mass of the distribution is concentrated on the left of the figure. The distribution is said to be ''right-skewed'', ''right-tailed'', or ''skewed to the right'', despite the fact that the curve itself appears to be skewed or leaning to the left; ''right'' instead refers to the right tail being drawn out and, often, the mean being skewed to the right of a typical center of the data. A right-skewed distribution usually appears as a ''left-leaning'' curve.

Skewness in a data series may sometimes be observed not only graphically but by simple inspection of the values. For instance, consider the numeric sequence (49, 50, 51), whose values are evenly distributed around a central value of 50. We can transform this sequence into a negatively skewed distribution by adding a value far below the mean, i.e. a negative outlier, e.g. (40, 49, 50, 51). The mean of the sequence then becomes 47.5 and the median 49.5, so, based on the formula for nonparametric skew, defined as (\mu - \nu)/\sigma, the skew is negative. Similarly, we can make the sequence positively skewed by adding a value far above the mean, i.e. a positive outlier, e.g. (49, 50, 51, 60), where the mean is 52.5 and the median is 50.5. A numerical check of both cases is sketched below.

As mentioned earlier, a unimodal distribution with zero skewness is not necessarily symmetric. However, a symmetric unimodal or multimodal distribution always has zero skewness.
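As a hedged numerical check of the two example sequences above, the following minimal Python sketch (standard library only) computes the nonparametric skew (\mu - \nu)/\sigma and confirms its sign:

<syntaxhighlight lang="python">
# Sign check of the nonparametric skew (mean - median) / standard deviation
# for the two example sequences discussed above (standard library only).
from statistics import mean, median, pstdev

def nonparametric_skew(data):
    """(mean - median) divided by the population standard deviation."""
    return (mean(data) - median(data)) / pstdev(data)

print(nonparametric_skew([40, 49, 50, 51]))  # negative: mean 47.5 < median 49.5
print(nonparametric_skew([49, 50, 51, 60]))  # positive: mean 52.5 > median 50.5
</syntaxhighlight>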


Relationship of mean and median

The skewness is not directly related to the relationship between the mean and median: a distribution with negative skew can have its mean greater than or less than the median, and likewise for positive skew. In the older notion of nonparametric skew, defined as (\mu - \nu)/\sigma, where \mu is the mean, \nu is the median, and \sigma is the standard deviation, the skewness is defined in terms of this relationship: positive/right nonparametric skew means the mean is greater than (to the right of) the median, while negative/left nonparametric skew means the mean is less than (to the left of) the median. However, the modern definition of skewness and the traditional nonparametric definition do not always have the same sign: while they agree for some families of distributions, they differ in some cases, and conflating them is misleading.

If the distribution is symmetric, then the mean is equal to the median, and the distribution has zero skewness. If the distribution is both symmetric and unimodal, then the mean = median = mode. This is the case for a coin toss or the series 1, 2, 3, 4, .... Note, however, that the converse is not true in general, i.e. zero skewness (defined below) does not imply that the mean is equal to the median. A 2005 journal article points out:
Many textbooks teach a rule of thumb stating that the mean is right of the median under right skew, and left of the median under left skew. This rule fails with surprising frequency. It can fail in multimodal distributions, or in distributions where one tail is long but the other is heavy. Most commonly, though, the rule fails in discrete distributions where the areas to the left and right of the median are not equal. Such distributions not only contradict the textbook relationship between mean, median, and skew, they also contradict the textbook interpretation of the median.
For example, in the distribution of adult residents across US households, the skew is to the right. However, since the majority of cases are less than or equal to the mode, which is also the median, the mean sits in the heavier left tail. As a result, the rule of thumb that the mean is right of the median under right skew fails.
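As a hedged illustration of how the rule of thumb can fail, the sketch below uses a made-up three-point distribution (not the household data described above): its moment skewness is positive, yet its mean lies to the left of its median.

<syntaxhighlight lang="python">
# Illustrative (hypothetical) discrete distribution: P(X=0)=0.30, P(X=1)=0.69, P(X=5)=0.01.
# It has a long thin right tail (positive moment skewness) but its mean is below its median.
values = [0.0, 1.0, 5.0]
probs = [0.30, 0.69, 0.01]

mu = sum(p * x for p, x in zip(probs, values))                 # 0.74
var = sum(p * (x - mu) ** 2 for p, x in zip(probs, values))
mu3 = sum(p * (x - mu) ** 3 for p, x in zip(probs, values))
skew = mu3 / var ** 1.5                                        # about +2.7 (right skew)
median = 1.0   # P(X <= 0) = 0.30 < 0.5 <= P(X <= 1) = 0.99

print(f"mean={mu:.2f}, median={median}, skewness={skew:.2f}")
# mean 0.74 < median 1.0 even though the distribution is right-skewed.
</syntaxhighlight>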


Definition


Fisher's moment coefficient of skewness

The skewness of a random variable ''X'' is the third standardized moment \tilde{\mu}_3, defined as:

: \tilde{\mu}_3 = \operatorname{E}\left[\left(\frac{X-\mu}{\sigma}\right)^3 \right] = \frac{\mu_3}{\sigma^3} = \frac{\operatorname{E}\left[(X-\mu)^3\right]}{\left(\operatorname{E}\left[(X-\mu)^2\right]\right)^{3/2}} = \frac{\kappa_3}{\kappa_2^{3/2}},

where ''μ'' is the mean, ''σ'' is the standard deviation, E is the expectation operator, ''μ''3 is the third central moment, and ''κ''''t'' are the ''t''-th cumulants. It is sometimes referred to as Pearson's moment coefficient of skewness (FXSolver.com), or simply the moment coefficient of skewness (Stan Brown, Oak Road Systems, 2008–2016), but should not be confused with Pearson's other skewness statistics (see below). The last equality expresses skewness in terms of the ratio of the third cumulant ''κ''3 to the 1.5th power of the second cumulant ''κ''2. This is analogous to the definition of kurtosis as the fourth cumulant normalized by the square of the second cumulant. The skewness is also sometimes denoted Skew[''X''].

If ''σ'' is finite, ''μ'' is finite too and skewness can be expressed in terms of the non-central moment E[''X''^3] by expanding the previous formula:

: \begin{align} \tilde{\mu}_3 &= \operatorname{E}\left[\left(\frac{X-\mu}{\sigma}\right)^3 \right]\\ &= \frac{\operatorname{E}[X^3] - 3\mu\operatorname{E}[X^2] + 3\mu^2\operatorname{E}[X] - \mu^3}{\sigma^3}\\ &= \frac{\operatorname{E}[X^3] - 3\mu(\operatorname{E}[X^2] - \mu\operatorname{E}[X]) - \mu^3}{\sigma^3}\\ &= \frac{\operatorname{E}[X^3] - 3\mu\sigma^2 - \mu^3}{\sigma^3}. \end{align}
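A minimal sketch (assuming NumPy is available) of this population formula applied to a sample; the sample version below coincides with the estimator g_1 introduced under ''Sample skewness'':

<syntaxhighlight lang="python">
# Fisher's moment coefficient of skewness: third central moment divided by the
# cube of the (population) standard deviation, estimated from a sample.
import numpy as np

def moment_skewness(x):
    x = np.asarray(x, dtype=float)
    mu = x.mean()
    sigma = x.std()                       # biased (population) standard deviation
    return np.mean((x - mu) ** 3) / sigma ** 3

rng = np.random.default_rng(0)
sample = rng.exponential(scale=1.0, size=100_000)
print(moment_skewness(sample))            # close to 2, the exponential's skewness
</syntaxhighlight>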


Examples

Skewness can be infinite, as when

: \Pr\left[X > x\right] = x^{-2}\mbox{ for }x > 1,\ \Pr[X < 1] = 0,

where the third cumulants are infinite, or as when

: \Pr[X < x] = (1-x)^{-3}/2\mbox{ for negative }x\mbox{ and }\Pr[X > x] = (1+x)^{-3}/2\mbox{ for positive }x,

where the third cumulant is undefined. Examples of distributions with finite skewness include the following.

* A normal distribution and any other symmetric distribution with finite third moment has a skewness of 0.
* A half-normal distribution has a skewness just below 1.
* An exponential distribution has a skewness of 2.
* A lognormal distribution can have a skewness of any positive value, depending on its parameters.
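As a hedged cross-check of the values listed above, SciPy's distribution objects (SciPy assumed available) report the theoretical skewness through their stats method:

<syntaxhighlight lang="python">
# Theoretical skewness of the distributions listed above, via scipy.stats.
from scipy import stats

print(stats.norm.stats(moments='s'))           # 0.0  (normal: symmetric)
print(stats.halfnorm.stats(moments='s'))       # ~0.995 (half-normal: just below 1)
print(stats.expon.stats(moments='s'))          # 2.0  (exponential)
print(stats.lognorm.stats(1.0, moments='s'))   # ~6.18 for shape 1; grows with the shape parameter
</syntaxhighlight>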


Sample skewness

For a sample of ''n'' values, two natural estimators of the population skewness are

: b_1 = \frac{m_3}{s^3} = \frac{\tfrac{1}{n} \sum_{i=1}^n (x_i-\overline{x})^3}{\left[\tfrac{1}{n-1} \sum_{i=1}^n (x_i-\overline{x})^2 \right]^{3/2}}

and

: g_1 = \frac{m_3}{m_2^{3/2}} = \frac{\tfrac{1}{n} \sum_{i=1}^n (x_i-\overline{x})^3}{\left[\tfrac{1}{n} \sum_{i=1}^n (x_i-\overline{x})^2\right]^{3/2}},

where \overline{x} is the sample mean, ''s'' is the sample standard deviation, ''m''2 is the (biased) sample second central moment, and ''m''3 is the sample third central moment. g_1 is a method of moments estimator.

Another common definition of the ''sample skewness'' is (Doane, David P., and Lori E. Seward, "Measuring skewness: a forgotten statistic", ''Journal of Statistics Education'' 19.2 (2011): 1–18, p. 7)
: G_1 = \frac{k_3}{k_2^{3/2}} = \frac{n^2}{(n-1)(n-2)}\; b_1 = \frac{\sqrt{n(n-1)}}{n-2}\; g_1,

where k_3 is the unique symmetric unbiased estimator of the third cumulant and k_2 = s^2 is the symmetric unbiased estimator of the second cumulant (i.e. the sample variance). This adjusted Fisher–Pearson standardized moment coefficient G_1 is the version found in Excel and several statistical packages including Minitab, SAS and SPSS.

Under the assumption that the underlying random variable X is normally distributed, it can be shown that all three ratios b_1, g_1 and G_1 are unbiased and consistent estimators of the population skewness \gamma_1 = 0, with \sqrt{n}\, b_1 \xrightarrow{d} N(0, 6), i.e., their distributions converge to a normal distribution with mean 0 and variance 6 (Fisher, 1930). The variance of the sample skewness is thus approximately 6/n for sufficiently large samples. More precisely, in a random sample of size ''n'' from a normal distribution (Duncan Cramer (1997) ''Fundamental Statistics for Social Research''. Routledge, p. 85),

: \operatorname{var}(G_1) = \frac{6n(n-1)}{(n-2)(n+1)(n+3)}.

In normal samples, b_1 has the smaller variance of the three estimators, with

: \operatorname{var}(b_1) < \operatorname{var}(g_1) < \operatorname{var}(G_1).

For non-normal distributions, b_1, g_1 and G_1 are generally biased estimators of the population skewness \gamma_1; their expected values can even have the opposite sign from the true skewness. For instance, a mixed distribution consisting of very thin Gaussians centred at −99, 0.5, and 2 with weights 0.01, 0.66, and 0.33 has a skewness \gamma_1 of about −9.77, but in a sample of 3, G_1 has an expected value of about 0.32, since usually all three samples are in the positive-valued part of the distribution, which is skewed the other way.


Applications

Skewness is a descriptive statistic that can be used in conjunction with the histogram and the normal quantile plot to characterize the data or distribution. Skewness indicates the direction and relative magnitude of a distribution's deviation from the normal distribution. With pronounced skewness, standard statistical inference procedures such as a confidence interval for a mean will be not only incorrect, in the sense that the true coverage level will differ from the nominal (e.g., 95%) level, but will also result in unequal error probabilities on each side.

Skewness can be used to obtain approximate probabilities and quantiles of distributions (such as value at risk in finance) via the Cornish–Fisher expansion. Many models assume a normal distribution, i.e., data symmetric about the mean; the normal distribution has a skewness of zero. But in reality, data points may not be perfectly symmetric, so an understanding of the skewness of the dataset indicates whether deviations from the mean are going to be positive or negative.

D'Agostino's K-squared test is a goodness-of-fit normality test based on sample skewness and sample kurtosis.
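As a hedged illustration of the Cornish–Fisher idea mentioned above, a first-order correction adjusts the standard normal quantile z_q by the skewness term (z_q^2 - 1)\gamma_1/6; the function name below is illustrative, and the exponential distribution is used only because its exact quantile is known:

<syntaxhighlight lang="python">
# First-order Cornish-Fisher quantile approximation:
#   x_q ~ mu + sigma * (z_q + (z_q**2 - 1) * skew / 6)
from scipy import stats

def cornish_fisher_quantile(q, mu, sigma, skew):
    z = stats.norm.ppf(q)
    w = z + (z ** 2 - 1.0) * skew / 6.0   # skewness-corrected standard quantile
    return mu + sigma * w

# Exponential(rate=1): mu = 1, sigma = 1, skewness = 2.
q = 0.95
print(cornish_fisher_quantile(q, 1.0, 1.0, 2.0))   # ~3.21, closer to the exact value
print(stats.expon.ppf(q))                          # exact quantile: ~3.00
print(stats.norm.ppf(q, loc=1.0, scale=1.0))       # plain normal approximation: ~2.64
</syntaxhighlight>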


Other measures of skewness

Other measures of skewness have been used, including simpler calculations suggested by Karl Pearson (not to be confused with Pearson's moment coefficient of skewness, see above). These other measures are:


Pearson's first skewness coefficient (mode skewness)

The Pearson mode skewness, or first skewness coefficient, is defined as

: \frac{\text{mean} - \text{mode}}{\text{standard deviation}}.


Pearson's second skewness coefficient (median skewness)

The Pearson median skewness, or second skewness coefficient, is defined as

: \frac{3(\text{mean} - \text{median})}{\text{standard deviation}},

which is a simple multiple of the nonparametric skew. Note that, since skewness is not tied to an order relationship among the mode, mean and median, the sign of these coefficients does not give information about the type of skewness (left/right).
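A minimal sketch (assuming NumPy) of the two Pearson coefficients; since the mode of a continuous sample is not uniquely defined, the histogram-based mode used here is only an illustrative choice:

<syntaxhighlight lang="python">
# Pearson's first (mode) and second (median) skewness coefficients from a sample.
import numpy as np

def pearson_skewness(x, bins=50):
    x = np.asarray(x, dtype=float)
    mean, median, std = x.mean(), np.median(x), x.std(ddof=1)
    counts, edges = np.histogram(x, bins=bins)
    mode = 0.5 * (edges[np.argmax(counts)] + edges[np.argmax(counts) + 1])  # crude mode
    first = (mean - mode) / std             # mode skewness
    second = 3.0 * (mean - median) / std    # median skewness
    return first, second

rng = np.random.default_rng(2)
print(pearson_skewness(rng.lognormal(size=10_000)))   # both positive for this right-skewed sample
</syntaxhighlight>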


Quantile-based measures

Bowley's measure of skewness (from 1901), also called Yule's coefficient (from 1912), is defined as (Kenney JF and Keeping ES (1962) ''Mathematics of Statistics, Pt. 1, 3rd ed.'', Van Nostrand, p. 102):

: \frac{\frac{Q(3/4) + Q(1/4)}{2} - Q(1/2)}{\frac{Q(3/4) - Q(1/4)}{2}} = \frac{Q(3/4) + Q(1/4) - 2Q(1/2)}{Q(3/4) - Q(1/4)},

where ''Q'' is the quantile function (i.e., the inverse of the cumulative distribution function). The numerator is the difference between the average of the upper and lower quartiles (a measure of location) and the median (another measure of location), while the denominator is the semi-interquartile range (Q(3/4) - Q(1/4))/2, which for symmetric distributions is the MAD measure of dispersion.

Other names for this measure are Galton's measure of skewness (Johnson, NL; Kotz, S; Balakrishnan, N (1994), p. 3 and p. 40), the Yule–Kendall index (Wilks DS (1995) ''Statistical Methods in the Atmospheric Sciences'', p. 27. Academic Press. ISBN 0-12-751965-3) and the quartile skewness. Similarly, Kelly's measure of skewness is defined as

: \frac{Q(9/10) + Q(1/10) - 2Q(1/2)}{Q(9/10) - Q(1/10)}.

A more general formulation of a skewness function was described by Groeneveld and Meeden (1984) (Groeneveld, R.A.; Meeden, G. (1984). "Measuring Skewness and Kurtosis". ''The Statistician'' 33 (4): 391–399. doi:10.2307/2987742) and MacGillivray (1992) (see also Hinkley DV (1975) "On power transformations to symmetry", ''Biometrika'', 62, 101–111):

: \gamma(u) = \frac{Q(u) + Q(1-u) - 2Q(1/2)}{Q(u) - Q(1-u)}.

The function ''γ''(''u'') satisfies −1 ≤ ''γ''(''u'') ≤ 1 and is well defined without requiring the existence of any moments of the distribution. Bowley's measure of skewness is γ(''u'') evaluated at ''u'' = 3/4, while Kelly's measure of skewness is γ(''u'') evaluated at ''u'' = 9/10. This definition leads to a corresponding overall measure of skewness (MacGillivray (1992)) defined as the supremum
of this over the range 1/2 ≤ ''u'' < 1. Another measure can be obtained by integrating the numerator and denominator of this expression. Quantile-based skewness measures are at first glance easy to interpret, but they often show significantly larger sample variations than moment-based methods. This means that often samples from a symmetric distribution (like the uniform distribution) have a large quantile-based skewness, just by chance.
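A minimal sketch (assuming NumPy) of the skewness function \gamma(u) applied to sample quantiles, recovering Bowley's measure at u = 3/4 and Kelly's measure at u = 9/10:

<syntaxhighlight lang="python">
# Quantile-based skewness gamma(u) = (Q(u) + Q(1-u) - 2 Q(1/2)) / (Q(u) - Q(1-u)),
# estimated with sample quantiles.
import numpy as np

def quantile_skewness(x, u):
    q_u, q_med, q_low = np.quantile(x, [u, 0.5, 1.0 - u])
    return (q_u + q_low - 2.0 * q_med) / (q_u - q_low)

rng = np.random.default_rng(3)
x = rng.exponential(size=100_000)
print(quantile_skewness(x, 0.75))   # Bowley's measure, positive for a right-skewed sample
print(quantile_skewness(x, 0.90))   # Kelly's measure
</syntaxhighlight>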


Groeneveld and Meeden's coefficient

Groeneveld and Meeden have suggested, as an alternative measure of skewness,

: \mathrm{skew}(X) = \frac{\mu - \nu}{E(|X - \nu|)},

where ''μ'' is the mean, ''ν'' is the median, |...| is the absolute value, and ''E''() is the expectation operator. This is closely related in form to Pearson's second skewness coefficient.
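A minimal sketch (assuming NumPy) estimating this coefficient from a sample, with the expectations replaced by sample averages:

<syntaxhighlight lang="python">
# Groeneveld and Meeden's coefficient: (mean - median) / E|X - median|.
import numpy as np

def groeneveld_meeden(x):
    x = np.asarray(x, dtype=float)
    nu = np.median(x)
    return (x.mean() - nu) / np.mean(np.abs(x - nu))

rng = np.random.default_rng(4)
print(groeneveld_meeden(rng.exponential(size=100_000)))   # positive: right-skewed
print(groeneveld_meeden(rng.normal(size=100_000)))        # near 0: symmetric
</syntaxhighlight>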


L-moments

Use of L-moments in place of moments provides a measure of skewness known as the L-skewness (Hosking, J.R.M. (1992). "Moments or L moments? An example comparing two measures of distributional shape". ''The American Statistician'' 46 (3): 186–189. doi:10.2307/2685210).
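A hedged sketch of sample L-skewness \tau_3 = \lambda_3/\lambda_2; the sample L-moment formulas below follow Hosking's probability-weighted-moment construction and are an assumption of this sketch, as they are not spelled out in the text above:

<syntaxhighlight lang="python">
# Sample L-skewness tau_3 = lambda_3 / lambda_2 from probability-weighted moments.
import numpy as np

def l_skewness(x):
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)                               # ranks of the order statistics
    b0 = x.mean()
    b1 = np.sum((i - 1) * x) / (n * (n - 1))
    b2 = np.sum((i - 1) * (i - 2) * x) / (n * (n - 1) * (n - 2))
    lam2 = 2.0 * b1 - b0                                  # second L-moment
    lam3 = 6.0 * b2 - 6.0 * b1 + b0                       # third L-moment
    return lam3 / lam2

rng = np.random.default_rng(5)
print(l_skewness(rng.exponential(size=100_000)))   # about 1/3 for the exponential distribution
print(l_skewness(rng.normal(size=100_000)))        # about 0 for a symmetric distribution
</syntaxhighlight>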


Distance skewness

A value of skewness equal to zero does not imply that the probability distribution is symmetric. Thus there is a need for another measure of asymmetry that has this property: such a measure was introduced in 2000. It is called distance skewness and denoted by dSkew. If ''X'' is a random variable taking values in the ''d''-dimensional Euclidean space, ''X'' has finite expectation, ''X''′ is an independent identically distributed copy of ''X'', and \|\cdot\| denotes the norm in the Euclidean space, then a simple ''measure of asymmetry'' with respect to location parameter θ is

: \operatorname{dSkew}(X) := 1 - \frac{\operatorname{E}\|X-X'\|}{\operatorname{E}\|X+X'-2\theta\|} \text{ if } \Pr(X=\theta)\ne 1,

and dSkew(''X'') := 0 for ''X'' = θ (with probability 1). Distance skewness is always between 0 and 1, equals 0 if and only if ''X'' is diagonally symmetric with respect to θ (''X'' and 2θ−''X'' have the same probability distribution) and equals 1 if and only if ''X'' is a constant ''c'' (c \neq \theta) with probability one (Szekely, G. J. and Mori, T. F. (2001) "A characteristic measure of asymmetry and its application for testing diagonal symmetry", ''Communications in Statistics – Theory and Methods'' 30/8&9, 1633–1639). Thus there is a simple consistent statistical test of diagonal symmetry based on the sample distance skewness:

: \operatorname{dSkew}_n(X) := 1 - \frac{\sum_{i,j} \|x_i-x_j\|}{\sum_{i,j} \|x_i+x_j-2\theta\|}.
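A minimal sketch (assuming NumPy) of the sample distance skewness for one-dimensional data; taking the location parameter \theta to be the sample median is an illustrative choice, not one prescribed above:

<syntaxhighlight lang="python">
# Sample distance skewness: 1 - sum|x_i - x_j| / sum|x_i + x_j - 2*theta|.
import numpy as np

def sample_distance_skewness(x, theta):
    x = np.asarray(x, dtype=float)
    diffs = np.abs(x[:, None] - x[None, :]).sum()
    shifted = np.abs(x[:, None] + x[None, :] - 2.0 * theta).sum()
    return 1.0 - diffs / shifted

rng = np.random.default_rng(6)
sym = rng.normal(size=2_000)
asym = rng.exponential(size=2_000)
print(sample_distance_skewness(sym, np.median(sym)))    # near 0: roughly diagonally symmetric
print(sample_distance_skewness(asym, np.median(asym)))  # noticeably larger: asymmetric
</syntaxhighlight>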


Medcouple

The medcouple is a scale-invariant robust measure of skewness, with a breakdown point of 25% (G. Brys; M. Hubert; A. Struyf (November 2004). "A Robust Measure of Skewness". ''Journal of Computational and Graphical Statistics'' 13 (4): 996–1017. doi:10.1198/106186004X12632). It is the median of the values of the kernel function

: h(x_i, x_j) = \frac{(x_i - x_m) - (x_m - x_j)}{x_i - x_j}

taken over all couples (x_i, x_j) such that x_i \geq x_m \geq x_j, where x_m is the median of the sample \{x_1, x_2, \ldots, x_n\}. It can be seen as the median of all possible quantile skewness measures.
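A naive O(n^2) sketch (assuming NumPy) of the medcouple as just described; ties at the median, which the full definition handles with a special kernel rule, are simply skipped here:

<syntaxhighlight lang="python">
# Naive medcouple: median of the kernel h over pairs (x_i, x_j) with x_i >= x_m >= x_j.
import numpy as np

def medcouple_naive(x):
    x = np.asarray(x, dtype=float)
    xm = np.median(x)
    upper = x[x >= xm]
    lower = x[x <= xm]
    h = [((xi - xm) - (xm - xj)) / (xi - xj)
         for xi in upper for xj in lower if xi != xj]    # skip ties for simplicity
    return np.median(h)

rng = np.random.default_rng(7)
print(medcouple_naive(rng.exponential(size=500)))   # positive for a right-skewed sample
print(medcouple_naive(rng.normal(size=500)))        # near 0 for a symmetric sample
</syntaxhighlight>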


See also

* Bragg peak
* Coskewness
* Kurtosis
* Shape parameters
* Skew normal distribution
* Skewness risk


References




Sources

* Johnson, NL; Kotz, S; Balakrishnan, N (1994). ''Continuous Univariate Distributions'', Vol. 1, 2nd ed. Wiley. ISBN 0-471-58495-9.
* MacGillivray, HL (1992). "Shape properties of the g- and h- and Johnson families". ''Communications in Statistics – Theory and Methods'' 21 (5): 1244–1250. doi:10.1080/03610929208830842.
* Premaratne, G.; Bera, A. K. (2001). "Adjusting the Tests for Skewness and Kurtosis for Distributional Misspecifications". Working Paper Number 01-0116, University of Illinois. Forthcoming in ''Communications in Statistics – Simulation and Computation'', 2016, 1–15.
* Premaratne, G.; Bera, A. K. (2000). "Modeling Asymmetry and Excess Kurtosis in Stock Return Data". Office of Research Working Paper Number 00-0123, University of Illinois.
* Skewness Measures for the Weibull Distribution


External links

* "Asymmetry coefficient", ''Encyclopedia of Mathematics'' (Springer), id=p/a013590
* by Michel Petitjean
* On More Robust Estimation of Skewness and Kurtosis: comparison of skew estimators by Kim and White.
