In statistics, robust measures of scale are methods which quantify the statistical dispersion in a sample of numerical data while resisting outliers. These are contrasted with conventional or non-robust measures of scale, such as the sample standard deviation, which are greatly influenced by outliers.
The most common such robust statistics are the ''interquartile range'' (IQR) and the ''median absolute deviation'' (MAD). Alternative robust estimators have also been developed, such as those based on pairwise differences and the biweight midvariance.
These robust statistics are particularly used as estimators of a scale parameter, and have the advantages of both robustness and superior efficiency on contaminated data, at the cost of inferior efficiency on clean data from distributions such as the normal distribution. To illustrate robustness, the standard deviation can be made arbitrarily large by increasing exactly one observation (it has a breakdown point of 0, as it can be contaminated by a single point), a defect that is not shared by robust statistics.
Note that, in domains such as finance, the assumption of normality may lead to excessive risk exposure, and that further parameterization may be needed to mitigate
risks presented by abnormal kurtosis.
Approaches to estimation
Robust measures of scale can be used as estimators of properties of the population, either for parameter estimation or as estimators of their own expected value.
For example, robust estimators of scale are used to estimate the population standard deviation, generally by multiplying by a scale factor to make it an unbiased consistent estimator; see scale parameter: estimation. For instance, the interquartile range may be rendered an unbiased, consistent estimator for the population standard deviation if the data follow a normal distribution and the measure is divided by:
: 2\sqrt{2}\,\operatorname{erf}^{-1}(1/2) \approx 1.349,
where \operatorname{erf}^{-1} is the inverse error function.
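A minimal numerical sketch in Python with NumPy (illustrative only), dividing the sample IQR by the normal-consistency constant 1.349:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=100_000)

q75, q25 = np.percentile(x, [75, 25])
sigma_iqr = (q75 - q25) / 1.349   # IQR-based estimate of sigma

print(sigma_iqr)        # close to the true sigma = 2.0
print(x.std(ddof=1))    # classical estimate, for comparison
```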
In other situations, it makes more sense to think of a robust measure of scale as an estimator of its own expected value, interpreted as an alternative to the population standard deviation as a measure of scale. For example, the median absolute deviation (MAD) of a sample from a standard Cauchy distribution is an estimator of the population MAD, which in this case is 1, whereas the population variance does not exist.
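This behaviour can be checked numerically; a minimal sketch in Python with NumPy:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_cauchy(size=100_000)

med = np.median(x)
mad = np.median(np.abs(x - med))   # sample MAD

print(mad)      # close to the population MAD of 1
print(x.var())  # huge and unstable: the population variance does not exist
```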
Statistical efficiency
Robust estimators typically have inferior statistical efficiency compared to conventional estimators for data drawn from a distribution without outliers, such as a normal distribution. However, they have superior efficiency for data drawn from a mixture distribution or from a heavy-tailed distribution, for which non-robust measures such as the standard deviation should not be used.
For example, for data drawn from the normal distribution, the median absolute deviation is 37% as efficient as the sample standard deviation, while the Rousseeuw–Croux estimator ''Qn'' is 88% as efficient as the sample standard deviation.
Common robust estimators
One of the most common robust measures of scale is the ''interquartile range'' (IQR), the difference between the 75th percentile and the 25th percentile of a sample; this is the 25% trimmed range, an example of an L-estimator. Other trimmed ranges, such as the interdecile range (10% trimmed range), can also be used.
For a Gaussian distribution, IQR is related to \sigma, the standard deviation, as:
: \mathrm{IQR} = 2\sqrt{2}\,\operatorname{erf}^{-1}(1/2)\,\sigma \approx 1.349\,\sigma.
Another commonly used robust measure of scale is the ''median absolute deviation'' (MAD), the median of the absolute values of the differences between the data values and the overall median of the data set; for a Gaussian distribution, MAD is related to \sigma as:
: \mathrm{MAD} = \Phi^{-1}(3/4)\,\sigma \approx 0.6745\,\sigma,
so that \sigma \approx 1.4826\,\mathrm{MAD}.
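A minimal sketch in Python with NumPy, using the constant 1.4826 ≈ 1/Φ⁻¹(3/4) to turn the sample MAD into a consistent estimate of σ for normal data:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(loc=0.0, scale=3.0, size=100_000)

med = np.median(x)
mad = np.median(np.abs(x - med))

sigma_mad = 1.4826 * mad   # 1.4826 makes the MAD consistent for sigma at the normal
print(sigma_mad)           # close to the true sigma = 3.0
```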
For details, see the section on the relation to the standard deviation in the main article on the median absolute deviation.
''Sn'' and ''Qn''
Rousseeuw and Croux proposed two alternatives to the median absolute deviation, motivated by two of its weaknesses:
# It is inefficient (37% efficiency) at Gaussian distributions.
# It computes a symmetric statistic about a location estimate, thus not dealing with skewness.
They propose two alternative statistics based on pairwise differences: ''Sn'' and ''Qn''.
''Sn'' is defined as:
: S_n = 1.1926\,\operatorname{med}_i \left( \operatorname{med}_j |x_i - x_j| \right).
''Qn'' is defined as:
: Q_n = 2.2219\,\{ |x_i - x_j| : i < j \}_{(k)},
where:
* the factor 2.2219 is a consistency constant,
* the set \{ |x_i - x_j| : i < j \} consists of all pairwise absolute differences between the observations x_i and x_j, and
* the subscript (k) represents the k-th order statistic of this set, with k = \binom{h}{2} \approx \binom{n}{2}/4 and h = \lfloor n/2 \rfloor + 1, so that ''Qn'' is roughly the first quartile of the pairwise absolute differences.
These can be computed in ''O''(''n'' log ''n'') time and ''O''(''n'') space.
Neither of these requires location estimation, as they are based only on differences between values. They are both more efficient than the MAD under a Gaussian distribution: ''Sn'' is 58% efficient, while ''Qn'' is 82% efficient.
For a sample from a normal distribution, ''Sn'' is approximately unbiased for the population standard deviation even down to very modest sample sizes (<1% bias for ''n'' = 10).
For a large sample from a normal distribution, 2.22''Qn'' is approximately unbiased for the population standard deviation. For small or moderate samples, the expected value of ''Qn'' under a normal distribution depends markedly on the sample size, so finite-sample correction factors (obtained from a table or from simulations) are used to calibrate the scale of ''Qn''.
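A naive ''O''(''n''²) sketch of both statistics in Python with NumPy (illustrative only: the ordinary median stands in for the low/high medians of the exact finite-sample definitions, and no small-sample correction factors are applied):

```python
import numpy as np
from math import comb

def sn(x):
    """Naive O(n^2) Rousseeuw-Croux S_n: 1.1926 * med_i ( med_j |x_i - x_j| )."""
    x = np.asarray(x, dtype=float)
    diffs = np.abs(x[:, None] - x[None, :])   # all pairwise |x_i - x_j|
    inner = np.median(diffs, axis=1)          # median over j for each i
    return 1.1926 * np.median(inner)

def qn(x):
    """Naive O(n^2) Rousseeuw-Croux Q_n: 2.2219 * k-th smallest pairwise gap."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    i, j = np.triu_indices(n, k=1)
    gaps = np.sort(np.abs(x[i] - x[j]))       # the n(n-1)/2 pairwise distances
    h = n // 2 + 1
    k = comb(h, 2)                            # roughly the first quartile of the gaps
    return 2.2219 * gaps[k - 1]

rng = np.random.default_rng(3)
x = rng.normal(scale=2.0, size=2000)
print(sn(x), qn(x))   # both near the true sigma = 2.0
```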
The biweight midvariance
Like ''Sn'' and ''Qn'', the biweight midvariance is intended to be robust without sacrificing too much efficiency. It is defined as:
: n\,\frac{\sum_{i} (x_i - Q)^2 (1 - u_i^2)^4 \, I(|u_i| < 1)}{\left( \sum_{i} (1 - u_i^2)(1 - 5u_i^2) \, I(|u_i| < 1) \right)^2},
where ''I'' is the indicator function, ''Q'' is the sample median of the ''Xi'', and
: u_i = \frac{x_i - Q}{9 \cdot \mathrm{MAD}}.
Its square root is a robust estimator of scale, since data points are downweighted as their distance from the median increases, with points more than 9 MAD units from the median having no influence at all.
The biweight's efficiency has been estimated at around 84.7% for sets of 20 samples drawn from synthetically generated distributions with added excess kurtosis ("stretched tails"). For Gaussian distributions, its efficiency has been estimated at 98.2%.
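A direct transcription of the biweight midvariance into Python with NumPy (a sketch; the function name is illustrative, and the 9-MAD cutoff is the one described above):

```python
import numpy as np

def biweight_midvariance(x, c=9.0):
    """Robust variance estimate; points more than c MAD units from the
    median receive zero weight."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    Q = np.median(x)
    mad = np.median(np.abs(x - Q))
    u = (x - Q) / (c * mad)
    mask = np.abs(u) < 1   # the indicator I(|u_i| < 1)
    num = n * np.sum(((x - Q) ** 2 * (1 - u ** 2) ** 4)[mask])
    den = np.sum(((1 - u ** 2) * (1 - 5 * u ** 2))[mask]) ** 2
    return num / den

rng = np.random.default_rng(4)
x = rng.normal(scale=2.0, size=50_000)
print(np.sqrt(biweight_midvariance(x)))   # close to the true sigma = 2.0
```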
Location-scale depth
Mizera and Müller extended the approach of Rousseeuw and Hubert by proposing location-scale depth, a robust depth-based estimator of location and scale simultaneously. The depth of a candidate location-scale pair is defined with respect to a fixed density; they suggest that the most tractable version of location-scale depth is the one based on Student's t-distribution.
Confidence intervals
A robust confidence interval is a robust modification of confidence intervals, meaning that one modifies the non-robust calculations of the confidence interval so that they are not badly affected by outlying or aberrant observations in a data set.
Example
In the process of weighing 1000 objects, under practical conditions, it is easy to believe that the operator might make a mistake in procedure and so report an incorrect mass (thereby making one type of systematic error). Suppose there were 100 objects and the operator weighed them all, one at a time, and repeated the whole process ten times. Then the operator can calculate a sample standard deviation for each object and look for outliers. Any object with an unusually large standard deviation probably has an outlier in its data. These can be removed by various non-parametric techniques. If the operator repeated the process only three times, simply taking the median of the three measurements and using σ would give a confidence interval. The 200 extra weighings served only to detect and correct for operator error and did nothing to improve the confidence interval. With more repetitions, one could use a truncated mean, discarding the largest and smallest values and averaging the rest. A
bootstrap calculation could be used to determine a confidence interval narrower than that calculated from σ, and so obtain some benefit from a large amount of extra work.
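As an illustrative sketch of these ideas in Python with NumPy (the weighings are invented for illustration): a percentile bootstrap interval for a trimmed mean of repeated measurements containing one gross operator error.

```python
import numpy as np

def trimmed_mean(x, prop=0.1):
    """Mean after discarding the lowest and highest `prop` fraction of values."""
    x = np.sort(np.asarray(x, dtype=float))
    k = int(prop * len(x))
    return x[k:len(x) - k].mean()

def bootstrap_ci(x, stat, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a statistic."""
    rng = np.random.default_rng(seed)
    reps = [stat(rng.choice(x, size=len(x), replace=True)) for _ in range(n_boot)]
    return np.percentile(reps, [100 * alpha / 2, 100 * (1 - alpha / 2)])

# ten repeated weighings of one object, with one gross error (12.50)
weights = np.array([10.02, 9.98, 10.01, 9.99, 10.03,
                    9.97, 10.00, 10.02, 12.50, 9.99])
lo, hi = bootstrap_ci(weights, trimmed_mean)
print(lo, hi)   # an interval near 10; the single outlier widens it only modestly
```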
These procedures are robust against procedural errors which are not modeled by the assumption that the balance has a fixed known standard deviation σ. In practical applications where the occasional operator error can occur, or the balance can malfunction, the assumptions behind simple statistical calculations cannot be taken for granted. Before trusting the results of 100 objects weighed just three times each to have confidence intervals calculated from σ, it is necessary to test for and remove a reasonable number of outliers (testing the assumption that the operator is careful and correcting for the fact that he is not perfect), and to test the assumption that the data really have a normal distribution with standard deviation σ.
Computer simulation
The theoretical analysis of such an experiment is complicated, but it is easy to set up a spreadsheet which draws random numbers from a normal distribution with standard deviation σ to simulate the situation; this can be done in Microsoft Excel using =NORMINV(RAND(),0,σ), as discussed in Wittwer, J.W., "Monte Carlo Simulation in Excel: A Practical Guide", June 1, 2004, and the same techniques can be used in other spreadsheet programs such as OpenOffice.org Calc and Gnumeric.
After removing obvious outliers, one could subtract the median from the other two values for each object, and examine the distribution of the 200 resulting numbers. It should be normal with mean near zero and standard deviation a little larger than σ. A simple Monte Carlo spreadsheet calculation would reveal typical values for the standard deviation (around 105 to 115% of σ). Or, one could subtract the mean of each triplet from the values and examine the distribution of 300 values. The mean is identically zero, but the standard deviation should be somewhat smaller (around 75 to 85% of σ).
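The same simulation is easy to carry out outside a spreadsheet; a Python/NumPy sketch of the two checks described above (taking σ = 1 and the true masses as 0 without loss of generality):

```python
import numpy as np

rng = np.random.default_rng(5)
sigma = 1.0
n_objects, n_reps = 100, 3

# each row: three simulated weighings of one object
data = rng.normal(scale=sigma, size=(n_objects, n_reps))

# subtract each object's median; keep the 200 nonzero residuals
med_resid = (data - np.median(data, axis=1, keepdims=True)).ravel()
med_resid = med_resid[med_resid != 0.0]
print(med_resid.std())   # typically around 105 to 115% of sigma

# subtract each object's mean; 300 residuals with mean identically zero
mean_resid = (data - data.mean(axis=1, keepdims=True)).ravel()
print(mean_resid.std())  # somewhat smaller, around 75 to 85% of sigma
```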
See also
* Heteroscedasticity-consistent standard errors
* Interquartile range
* Mean absolute deviation
References
Robust statistics
Statistical deviation and dispersion
Scale statistics