Median

In statistics and probability theory, the median is the value separating the higher half from the lower half of a data sample, a population, or a probability distribution. For a data set, it may be thought of as "the middle" value. The basic feature of the median in describing data compared to the mean (often simply described as the "average") is that it is not skewed by a small proportion of extremely large or small values, and therefore provides a better representation of a "typical" value. Median income, for example, may be a better way to suggest what a "typical" income is, because income distribution can be very skewed. The median is of central importance in robust statistics, as it is the most resistant statistic, having a breakdown point of 50%: so long as no more than half the data are contaminated, the median is not an arbitrarily large or small result.


Finite data set of numbers

The median of a finite list of numbers is the "middle" number, when those numbers are listed in order from smallest to greatest. If the data set has an odd number of observations, the middle one is selected. For example, the following list of seven numbers

: 1, 3, 3, 6, 7, 8, 9

has the median of ''6'', which is the fourth value. If the data set has an even number of observations, there is no distinct middle value and the median is usually defined to be the arithmetic mean of the two middle values. For example, this data set of 8 numbers

: 1, 2, 3, 4, 5, 6, 8, 9

has a median value of ''4.5'', that is (4 + 5)/2. (In more technical terms, this interprets the median as the fully trimmed mid-range.) In general, with this convention, the median can be defined as follows: for a data set x of n elements, ordered from smallest to greatest,

: if n is odd, \mathrm{med}(x) = x_{(n+1)/2}

: if n is even, \mathrm{med}(x) = \frac{x_{n/2} + x_{(n/2)+1}}{2}
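This convention translates directly into code. The following is a minimal sketch in Python (the function name median_of_list is illustrative, not from the text); the standard library's statistics.median implements the same rule.

```python
def median_of_list(values):
    """Median of a finite list: the middle element if the count is odd,
    the mean of the two middle elements if it is even."""
    ordered = sorted(values)
    n = len(ordered)
    mid = n // 2
    if n % 2 == 1:                                    # odd: single middle value
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2      # even: average the two middle values

print(median_of_list([1, 3, 3, 6, 7, 8, 9]))     # 6
print(median_of_list([1, 2, 3, 4, 5, 6, 8, 9]))  # 4.5
```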


Formal definition

Formally, a median of a population is any value such that at least half of the population is less than or equal to the proposed median and at least half is greater than or equal to the proposed median. As seen above, medians may not be unique. If each set contains more than half the population, then some of the population is exactly equal to the unique median. The median is well-defined for any ordered (one-dimensional) data, and is independent of any distance metric. The median can thus be applied to classes which are ranked but not numerical (e.g. working out a median grade when students are graded from A to F), although the result might be halfway between classes if there is an even number of cases. A geometric median, on the other hand, is defined in any number of dimensions. A related concept, in which the outcome is forced to correspond to a member of the sample, is the medoid. There is no widely accepted standard notation for the median, but some authors represent the median of a variable ''x'' either as ''x͂'' or as ''μ''1/2, sometimes also as ''M''. In any of these cases, the use of these or other symbols for the median needs to be explicitly defined when they are introduced. The median is a special case of other ways of summarizing the typical values associated with a statistical distribution: it is the 2nd quartile, 5th decile, and 50th percentile.


Uses

The median can be used as a measure of location when one attaches reduced importance to extreme values, typically because a distribution is skewed, extreme values are not known, or outliers are untrustworthy, i.e., may be measurement or transcription errors. For example, consider the multiset

: 1, 2, 2, 2, 3, 14.

The median is 2 in this case, as is the mode, and it might be seen as a better indication of the center than the arithmetic mean of 4, which is larger than all but one of the values. However, the widely cited empirical relationship that the mean is shifted "further into the tail" of a distribution than the median is not generally true. At most, one can say that the two statistics cannot be "too far" apart; see the inequality relating means and medians below. As a median is based on the middle data in a set, it is not necessary to know the value of extreme results in order to calculate it. For example, in a psychology test investigating the time needed to solve a problem, if a small number of people failed to solve the problem at all in the given time, a median can still be calculated. Because the median is simple to understand and easy to calculate, while also a robust approximation to the mean, the median is a popular summary statistic in descriptive statistics. In this context, there are several choices for a measure of variability: the range, the interquartile range, the mean absolute deviation, and the median absolute deviation. For practical purposes, different measures of location and dispersion are often compared on the basis of how well the corresponding population values can be estimated from a sample of data. The median, estimated using the sample median, has good properties in this regard. While it is not usually optimal if a given population distribution is assumed, its properties are always reasonably good. For example, a comparison of the efficiency of candidate estimators shows that the sample mean is more statistically efficient when, and only when, data is uncontaminated by data from heavy-tailed distributions or from mixtures of distributions. Even then, the median has a 64% efficiency compared to the minimum-variance mean (for large normal samples), which is to say the variance of the median will be about 57% greater than the variance of the mean.


Probability distributions

For any real-valued probability distribution with cumulative distribution function ''F'', a median is defined as any real number ''m'' that satisfies the inequalities

: \int_{(-\infty,m]} dF(x) \geq \frac{1}{2} \text{ and } \int_{[m,\infty)} dF(x) \geq \frac{1}{2}.

An equivalent phrasing uses a random variable ''X'' distributed according to ''F'':

: \operatorname{P}(X\leq m) \geq \frac{1}{2} \text{ and } \operatorname{P}(X\geq m) \geq \frac{1}{2}.

Note that this definition does not require ''X'' to have an absolutely continuous distribution (which has a probability density function ''f''), nor does it require a discrete one. In the former case, the inequalities can be upgraded to equality: a median satisfies

: \operatorname{P}(X \leq m) = \int_{-\infty}^m f(x)\,dx = \frac{1}{2} = \int_m^{\infty} f(x)\,dx = \operatorname{P}(X\geq m).

Any probability distribution on R has at least one median, but in pathological cases there may be more than one median: if ''F'' is constant 1/2 on an interval (so that ''f'' = 0 there), then any value of that interval is a median.
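For a continuous, strictly increasing CDF the median can be found numerically by solving F(m) = 1/2. A minimal sketch in Python (the standard normal is used only as an example, and the bracket, tolerance and function names are arbitrary choices):

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    """CDF of the normal distribution, via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def distribution_median(cdf, lo=-1e6, hi=1e6, tol=1e-10):
    """Solve cdf(m) = 1/2 by bisection; assumes cdf is continuous and increasing."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if cdf(mid) < 0.5:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# For a normal distribution with mean 3, the median is also 3.
print(distribution_median(lambda x: normal_cdf(x, mu=3.0)))  # ~3.0
```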


Medians of particular distributions

The medians of certain types of distributions can be easily calculated from their parameters; furthermore, they exist even for some distributions lacking a well-defined mean, such as the Cauchy distribution:

* The median of a symmetric unimodal distribution coincides with the mode.
* The median of a symmetric distribution which possesses a mean ''μ'' also takes the value ''μ''.
** The median of a normal distribution with mean ''μ'' and variance ''σ''^2 is ''μ''. In fact, for a normal distribution, mean = median = mode.
** The median of a continuous uniform distribution on the interval [''a'', ''b''] is (''a'' + ''b'') / 2, which is also the mean.
* The median of a Cauchy distribution with location parameter ''x''0 and scale parameter ''γ'' is ''x''0, the location parameter.
* The median of a power-law distribution ''x''^−''a'', with exponent ''a'' > 1, is 2^(1/(''a'' − 1)) ''x''min, where ''x''min is the minimum value for which the power law holds.
* The median of an exponential distribution with rate parameter ''λ'' is the natural logarithm of 2 divided by the rate parameter: ''λ''^−1 ln 2.
* The median of a Weibull distribution with shape parameter ''k'' and scale parameter ''λ'' is ''λ''(ln 2)^(1/''k''). (The exponential and Weibull medians are checked numerically in the sketch below.)
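As a quick sanity check of the last two formulas, the following sketch (Python, standard library only; the sample sizes, seed and parameter values are arbitrary choices) compares the closed-form medians with empirical medians of simulated data.

```python
import math
import random
import statistics

random.seed(0)
n = 200_000

# Exponential with rate lambda: median = ln(2) / lambda.
lam = 1.5
exp_sample = [random.expovariate(lam) for _ in range(n)]
print(statistics.median(exp_sample), math.log(2) / lam)   # both ~0.462

# Weibull with scale lambda and shape k: median = lambda * (ln 2)^(1/k).
scale, k = 2.0, 0.8
weib_sample = [random.weibullvariate(scale, k) for _ in range(n)]
print(statistics.median(weib_sample), scale * math.log(2) ** (1 / k))  # both ~1.26
```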


Properties


Optimality property

The ''mean absolute error'' of a real variable ''c'' with respect to the random variable ''X'' is

: E\left(\left|X - c\right|\right).

Provided that the probability distribution of ''X'' is such that the above expectation exists, then ''m'' is a median of ''X'' if and only if ''m'' is a minimizer of the mean absolute error with respect to ''X''. In particular, ''m'' is a sample median if and only if ''m'' minimizes the arithmetic mean of the absolute deviations. More generally, a median is defined as a minimum of

: E\left(\left|X - c\right| - \left|X\right|\right),

as discussed below in the section on multivariate medians (specifically, the spatial median). This optimization-based definition of the median is useful in statistical data analysis, for example, in ''k''-medians clustering.
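The minimizing property is easy to see numerically. A minimal sketch (Python; the grid of candidate centres is an arbitrary illustration, not part of any formal algorithm):

```python
import statistics

data = [1, 2, 2, 2, 3, 14]

def mean_abs_dev(c, values):
    """Arithmetic mean of absolute deviations from a candidate centre c."""
    return sum(abs(x - c) for x in values) / len(values)

# Scan candidate centres on a fine grid; the minimiser should be the median (2),
# not the mean (4).
candidates = [i / 100 for i in range(0, 1501)]
best = min(candidates, key=lambda c: mean_abs_dev(c, data))
print(best, statistics.median(data), statistics.mean(data))  # 2.0, 2, 4
```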


Inequality relating means and medians

If the distribution has finite variance, then the distance between the median \tilde{X} and the mean \bar{X} is bounded by one standard deviation. This bound was proved by Book and Sher in 1979 for discrete samples, and more generally by Page and Murty in 1982. In a comment on a subsequent proof by O'Cinneide, Mallows in 1991 presented a compact proof that uses Jensen's inequality twice, as follows. Using | · | for the absolute value, we have

: \begin{align} |\mu - m| = |\operatorname{E}(X - m)| & \leq \operatorname{E}(|X - m|) \\ & \leq \operatorname{E}(|X - \mu|) \\ & \leq \sqrt{\operatorname{E}\left((X - \mu)^2\right)} = \sigma. \end{align}

The first and third inequalities come from Jensen's inequality applied to the absolute-value function and the square function, which are each convex. The second inequality comes from the fact that a median minimizes the absolute-deviation function a \mapsto \operatorname{E}(|X - a|). Mallows's proof can be generalized to obtain a multivariate version of the inequality simply by replacing the absolute value with a norm:

: \|\mu - m\| \leq \sqrt{\operatorname{E}\left(\|X - \mu\|^2\right)} = \sqrt{\operatorname{trace}\left(\operatorname{var}(X)\right)}

where ''m'' is a spatial median, that is, a minimizer of the function a \mapsto \operatorname{E}(\|X - a\|). The spatial median is unique when the data set's dimension is two or more. An alternative proof uses the one-sided Chebyshev inequality; it appears in an inequality on location and scale parameters. This formula also follows directly from Cantelli's inequality.
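A quick empirical check of the one-standard-deviation bound on a skewed sample (a minimal sketch; the exponential sample, its size and the seed are arbitrary choices):

```python
import random
import statistics

random.seed(1)
sample = [random.expovariate(1.0) for _ in range(100_000)]  # a skewed distribution

mean = statistics.fmean(sample)
median = statistics.median(sample)
sigma = statistics.pstdev(sample)

# For the exponential distribution, |mean - median| = 1 - ln 2 ~= 0.31, while sigma ~= 1.
print(abs(mean - median), sigma, abs(mean - median) <= sigma)  # ~0.31, ~1.0, True
```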


Unimodal distributions

For the case of unimodal distributions, one can achieve a sharper bound on the distance between the median and the mean:

: \left|\tilde{X} - \bar{X}\right| \le \left(\frac{3}{5}\right)^{\frac{1}{2}}\sigma \approx 0.7746\sigma.

A similar relation holds between the median and the mode:

: \left|\tilde{X} - \mathrm{mode}\right| \le 3^{\frac{1}{2}}\sigma \approx 1.732\sigma.


Jensen's inequality for medians

Jensen's inequality states that for any random variable ''X'' with a finite expectation E[''X''] and for any convex function ''f''

: f[\operatorname{E}(X)] \le \operatorname{E}[f(X)].

This inequality generalizes to the median as well. We say a function f: \R \to \R is a C function if, for any ''t'',

: f^{-1}\left( \,(-\infty, t]\, \right) = \{x \in \R \mid f(x) \le t \}

is a closed interval (allowing the degenerate cases of a single point or an empty set). Every convex function is a C function, but the reverse does not hold. If ''f'' is a C function, then

: f(\operatorname{Median}[X]) \le \operatorname{Median}[f(X)].

If the medians are not unique, the statement holds for the corresponding suprema.


Medians for samples


The sample median


Efficient computation of the sample median

Even though comparison-sorting ''n'' items requires Θ(''n'' log ''n'') operations, selection algorithms can compute the ''k''th-smallest of ''n'' items with only Θ(''n'') operations. This includes the median, which is the ⌈''n''/2⌉th order statistic (or for an even number of samples, the arithmetic mean of the two middle order statistics). Selection algorithms still have the downside of requiring Ω(''n'') memory, that is, they need to have the full sample (or a linear-sized portion of it) in memory. Because this, as well as the linear time requirement, can be prohibitive, several estimation procedures for the median have been developed. A simple one is the median-of-three rule, which estimates the median as the median of a three-element subsample; this is commonly used as a subroutine in the quicksort sorting algorithm, which uses an estimate of its input's median. A more robust estimator is Tukey's ''ninther'', which is the median-of-three rule applied with limited recursion: if ''A'' is the sample laid out as an array of ''n'' elements, and

: med3(''A'') = med(''A''[1], ''A''[''n''/2], ''A''[''n'']),

then

: ninther(''A'') = med3(med3(''A''[1 ... ''n''/3]), med3(''A''[''n''/3 ... 2''n''/3]), med3(''A''[2''n''/3 ... ''n''])).

The ''remedian'' is an estimator for the median that requires linear time but sub-linear memory, operating in a single pass over the sample.
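The ninther rule can be written in a few lines. The sketch below (Python; the helper names med3 and ninther mirror the notation above, and the equal-thirds split is one simple choice) shows the estimate alongside the exact sample median for comparison.

```python
import statistics

def med3(a, b, c):
    """Median of three values."""
    return sorted((a, b, c))[1]

def ninther(sample):
    """Tukey's ninther: median-of-three applied to the medians of three
    thirds of the sample. A cheap estimate, not the exact median."""
    n = len(sample)
    third = n // 3
    chunks = (sample[:third], sample[third:2 * third], sample[2 * third:])
    return med3(*(med3(c[0], c[len(c) // 2], c[-1]) for c in chunks))

data = [7, 1, 9, 3, 14, 2, 8, 5, 4, 6, 11, 0]
print(ninther(data), statistics.median(data))  # 7 (estimate) vs 5.5 (exact)
```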


Sampling distribution

The distributions of both the sample mean and the sample median were determined by Laplace. The distribution of the sample median from a population with a density function f(x) is asymptotically normal with mean m (the population median) and variance

: \frac{1}{4nf(m)^2}

where m is the median of f(x) and n is the sample size. A modern proof follows below. Laplace's result is now understood as a special case of the asymptotic distribution of arbitrary quantiles. For normal samples, the density at the median is f(m) = 1/\sqrt{2\pi\sigma^2}, thus for large samples the variance of the median equals (\pi/2)\cdot(\sigma^2/n). (See also the section on efficiency below.)
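The asymptotic variance formula can be checked by simulation. A minimal sketch (Python, standard library only; the standard normal, the sample size, seed and trial count are arbitrary choices):

```python
import math
import random
import statistics

random.seed(2)
n = 101          # sample size (odd, so the sample median is a single order statistic)
trials = 20_000
sigma = 1.0

medians = [statistics.median(random.gauss(0.0, sigma) for _ in range(n))
           for _ in range(trials)]

empirical_var = statistics.pvariance(medians)
f_at_median = 1.0 / math.sqrt(2 * math.pi * sigma ** 2)   # normal density at its median
asymptotic_var = 1.0 / (4 * n * f_at_median ** 2)          # = (pi/2) * sigma^2 / n

print(empirical_var, asymptotic_var)  # both ~0.0155
```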


Derivation of the asymptotic distribution

We take the sample size to be an odd number N = 2n + 1 and assume our variable continuous; the formula for the case of discrete variables is given below in the section on the empirical local density. The sample can be summarized as "below median", "at median", and "above median", which corresponds to a trinomial distribution with probabilities F(v), f(v) and 1 - F(v). For a continuous variable, the probability of multiple sample values being exactly equal to the median is 0, so one can calculate the density of the median at the point v directly from the trinomial distribution:

: \Pr[\operatorname{Med}=v]\,dv = \frac{(2n+1)!}{n!\,n!} F(v)^n (1 - F(v))^n f(v)\, dv.

Now we introduce the beta function. For integer arguments \alpha and \beta, this can be expressed as \Beta(\alpha,\beta) = \frac{(\alpha-1)!\,(\beta-1)!}{(\alpha+\beta-1)!}. Also, recall that f(v)\,dv = dF(v). Using these relationships and setting both \alpha and \beta equal to n+1 allows the last expression to be written as

: \frac{F(v)^n(1 - F(v))^n}{\Beta(n+1,n+1)} \, dF(v)

Hence the density function of the median is a symmetric beta distribution pushed forward by F. Its mean, as we would expect, is 0.5 and its variance is 1/(4(N+2)). By the chain rule, the corresponding variance of the sample median is

: \frac{1}{4(N+2)f(m)^2}.

The additional 2 is negligible in the limit.


Empirical local density

In practice, the functions f and F are often not known or assumed. However, they can be estimated from an observed frequency distribution. In this section, we give an example. Consider the following table, representing a sample of 3,800 (discrete-valued) observations:

Because the observations are discrete-valued, constructing the exact distribution of the median is not an immediate translation of the above expression for \Pr(\operatorname{Med} = v); one may (and typically does) have multiple instances of the median in one's sample. So we must sum over all these possibilities:

: \Pr(\operatorname{Med} = v) = \sum_{i=0}^{n} \sum_{k=0}^{n} \frac{N!}{i!\,(N-i-k)!\,k!}\, F(v-1)^i\,(1 - F(v))^k\, f(v)^{N-i-k}

Here, ''i'' is the number of points strictly less than the median and ''k'' the number strictly greater. Using these preliminaries, it is possible to investigate the effect of sample size on the standard errors of the mean and median. The observed mean is 3.16, the observed raw median is 3 and the observed interpolated median is 3.174. The following table gives some comparison statistics:

The expected value of the median falls slightly as sample size increases while, as would be expected, the standard errors of both the median and the mean are proportionate to the inverse square root of the sample size. The asymptotic approximation errs on the side of caution by overestimating the standard error.


Estimation of variance from sample data

The value of (2 f(x))^{-2}, the asymptotic variance of n^{1/2}(\nu - m), where m is the sample median and \nu is the population median, has been studied by several authors. The standard "delete one" jackknife method produces inconsistent results. An alternative, the "delete k" method, where k grows with the sample size, has been shown to be asymptotically consistent. This method may be computationally expensive for large data sets. A bootstrap estimate is known to be consistent, but converges very slowly (on the order of n^{-1/4}). Other methods have been proposed but their behavior may differ between large and small samples.
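As an illustration of the bootstrap approach mentioned above, the following sketch (Python, standard library only; the data, seed and number of resamples are arbitrary choices) resamples the data with replacement and reports the bootstrap variance of the sample median.

```python
import random
import statistics

random.seed(3)
data = [random.gauss(0.0, 1.0) for _ in range(201)]

def bootstrap_median_variance(sample, n_resamples=2000):
    """Variance of the sample median across bootstrap resamples."""
    n = len(sample)
    medians = [statistics.median(random.choices(sample, k=n))
               for _ in range(n_resamples)]
    return statistics.pvariance(medians)

# Compare with the asymptotic value (pi/2) * sigma^2 / n for normal data.
print(bootstrap_median_variance(data), (3.141592653589793 / 2) / 201)
```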


Efficiency

The efficiency of the sample median, measured as the ratio of the variance of the mean to the variance of the median, depends on the sample size and on the underlying population distribution. For a sample of size N = 2n + 1 from the normal distribution, the efficiency for large N is

: \frac{2}{\pi} \frac{N+2}{N}

The efficiency tends to \frac{2}{\pi} as N tends to infinity. In other words, the relative variance of the median will be \pi/2 \approx 1.57, or 57% greater than the variance of the mean, and the relative standard error of the median will be (\pi/2)^{1/2} \approx 1.25, or 25% greater than the standard error of the mean, \sigma/\sqrt{N} (see also the section on the sampling distribution above).
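A minimal simulation of this efficiency ratio (Python, standard library only; the sample size, seed and trial count are arbitrary):

```python
import math
import random
import statistics

random.seed(4)
N, trials = 101, 20_000

means, medians = [], []
for _ in range(trials):
    sample = [random.gauss(0.0, 1.0) for _ in range(N)]
    means.append(statistics.fmean(sample))
    medians.append(statistics.median(sample))

# Ratio of the variance of the mean to the variance of the median,
# compared with the large-N formula (2/pi)(N+2)/N.
efficiency = statistics.pvariance(means) / statistics.pvariance(medians)
print(efficiency, (2 / math.pi) * (N + 2) / N)  # both ~0.65
```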


Other estimators

For univariate distributions that are ''symmetric'' about one median, the Hodges–Lehmann estimator is a robust and highly efficient estimator of the population median. If data is represented by a statistical model specifying a particular family of probability distributions, then estimates of the median can be obtained by fitting that family of probability distributions to the data and calculating the theoretical median of the fitted distribution. Pareto interpolation is an application of this when the population is assumed to have a Pareto distribution.


Multivariate median

Previously, this article discussed the univariate median, when the sample or population was one-dimensional. When the dimension is two or higher, there are multiple concepts that extend the definition of the univariate median; each such multivariate median agrees with the univariate median when the dimension is exactly one.


Marginal median

The marginal median is defined for vectors expressed with respect to a fixed set of coordinates: it is the vector whose components are the univariate medians of the corresponding coordinates. The marginal median is easy to compute, and its properties were studied by Puri and Sen.


Geometric median

The geometric median of a discrete set of sample points x_1,\ldots, x_N in a Euclidean space is the point minimizing the sum of distances to the sample points:

: \hat\mu = \underset{\mu}{\operatorname{arg\,min}} \sum_{n=1}^{N} \left\| \mu - x_n \right\|_2

In contrast to the marginal median, the geometric median is equivariant with respect to Euclidean similarity transformations such as translations and rotations.
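A common way to approximate the geometric median is Weiszfeld's iterative re-weighting scheme. The sketch below (Python; the starting point, iteration count and the tiny epsilon guard are arbitrary choices) is a simplified two-dimensional version that ignores the special case of an iterate landing exactly on a data point.

```python
def geometric_median(points, iterations=200, eps=1e-12):
    """Approximate geometric median in the plane via Weiszfeld-style iterations."""
    # Start from the coordinate-wise (marginal) mean.
    x = sum(p[0] for p in points) / len(points)
    y = sum(p[1] for p in points) / len(points)
    for _ in range(iterations):
        # Each point is weighted by the inverse of its distance to the current iterate.
        weights = [1.0 / max(((x - px) ** 2 + (y - py) ** 2) ** 0.5, eps)
                   for px, py in points]
        total = sum(weights)
        x = sum(w * px for w, (px, py) in zip(weights, points)) / total
        y = sum(w * py for w, (px, py) in zip(weights, points)) / total
    return x, y

print(geometric_median([(0, 0), (0, 0), (0, 12), (12, 0)]))  # close to (0, 0)
```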


Median in all directions

If the marginal medians for all coordinate systems coincide, then their common location may be termed the "median in all directions". This concept is relevant to voting theory on account of the median voter theorem. When it exists, the median in all directions coincides with the geometric median (at least for discrete distributions).


Centerpoint

An alternative generalization of the median in higher dimensions is the centerpoint.


Other median-related concepts


Interpolated median

When dealing with a discrete variable, it is sometimes useful to regard the observed values as being midpoints of underlying continuous intervals. An example of this is a Likert scale, on which opinions or preferences are expressed on a scale with a set number of possible responses. If the scale consists of the positive integers, an observation of 3 might be regarded as representing the interval from 2.50 to 3.50. It is possible to estimate the median of the underlying variable. If, say, 22% of the observations are of value 2 or below and 55.0% are of 3 or below (so 33% have the value 3), then the median m is 3, since the median is the smallest value of x for which F(x) is greater than a half. But the interpolated median is somewhere between 2.50 and 3.50. First we add half of the interval width w to the median to get the upper bound of the median interval. Then we subtract that proportion of the interval width which equals the proportion of the 33% which lies above the 50% mark. In other words, we split up the interval width pro rata to the numbers of observations. In this case, the 33% is split into 28% below the median and 5% above it, so we subtract 5/33 of the interval width from the upper bound of 3.50 to give an interpolated median of 3.35. More formally, if the values f(x) are known, the interpolated median can be calculated from

: m_\text{int} = m + w\left[\frac{1}{2} - \frac{F(m) - \frac{1}{2}}{f(m)}\right].

Alternatively, if in an observed sample there are k scores above the median category, j scores in it and i scores below it, then the interpolated median is given by

: m_\text{int} = m + \frac{w}{2} \left[\frac{k - i}{j}\right].
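The second formula translates directly into code. A minimal sketch (Python; the function name is illustrative), checked against the worked example above:

```python
def interpolated_median(m, w, i, j, k):
    """Interpolated median: m is the median category, w the interval width,
    and i / j / k the counts below / in / above the median category."""
    return m + (w / 2) * (k - i) / j

# Worked example: 22% below the median category, 33% in it, 45% above it.
print(interpolated_median(m=3, w=1, i=22, j=33, k=45))  # ~3.348 (about 3.35)
```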


Pseudo-median

For univariate distributions that are ''symmetric'' about one median, the Hodges–Lehmann estimator is a robust and highly efficient estimator of the population median; for non-symmetric distributions, the Hodges–Lehmann estimator is a robust and highly efficient estimator of the population ''pseudo-median'', which is the median of a symmetrized distribution and which is close to the population median. The Hodges–Lehmann estimator has been generalized to multivariate distributions.


Variants of regression

The Theil–Sen estimator is a method for robust linear regression based on finding medians of slopes.


Median filter

The median filter is an important tool in image processing that can effectively remove salt-and-pepper noise from grayscale images.


Cluster analysis

In cluster analysis, the k-medians clustering algorithm provides a way of defining clusters, in which the criterion used in k-means clustering, minimizing the sum of squared distances from each point to its cluster mean, is replaced by minimizing the sum of distances from each point to its cluster median.


Median–median line

This is a method of robust regression. The idea dates back to Abraham Wald in 1940, who suggested dividing a set of bivariate data into two halves depending on the value of the independent parameter x: a left half with values less than the median and a right half with values greater than the median. He suggested taking the means of the dependent y and independent x variables of the left and the right halves and estimating the slope of the line joining these two points. The line could then be adjusted to fit the majority of the points in the data set. Nair and Shrivastava in 1942 suggested a similar idea but instead advocated dividing the sample into three equal parts before calculating the means of the subsamples. Brown and Mood in 1951 proposed the idea of using the medians of two subsamples rather than the means. Tukey combined these ideas and recommended dividing the sample into three equal-size subsamples and estimating the line based on the medians of the subsamples.
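A rough sketch of the three-group approach described above (Python; the exact rules for handling ties and for placing the intercept vary between presentations, so this is only one plausible reading, with the intercept taken as the median residual):

```python
import statistics

def median_median_line(points):
    """Slope and intercept from the medians of the outer thirds (Tukey-style)."""
    pts = sorted(points)                     # sort by x
    third = len(pts) // 3
    left, right = pts[:third], pts[-third:]

    x_left = statistics.median(p[0] for p in left)
    y_left = statistics.median(p[1] for p in left)
    x_right = statistics.median(p[0] for p in right)
    y_right = statistics.median(p[1] for p in right)

    slope = (y_right - y_left) / (x_right - x_left)
    # Intercept chosen as the median of the residuals (a simple choice; other
    # conventions average the residuals at the three groups' medians).
    intercept = statistics.median(p[1] - slope * p[0] for p in pts)
    return slope, intercept

data = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8), (5, 10.3), (6, 11.9),
        (7, 14.2), (8, 15.8), (9, 18.1)]
print(median_median_line(data))  # slope ~2.0, intercept ~0.1
```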


Median-unbiased estimators

Any ''mean''-unbiased estimator minimizes the risk (expected loss) with respect to the squared-error loss function, as observed by Gauss. A ''median''-unbiased estimator minimizes the risk with respect to the absolute-deviation loss function, as observed by Laplace. Other loss functions are used in statistical theory, particularly in robust statistics. The theory of median-unbiased estimators was revived by George W. Brown in 1947. Further properties of median-unbiased estimators have been reported. Median-unbiased estimators are invariant under one-to-one transformations. There are methods of constructing median-unbiased estimators that are optimal (in a sense analogous to the minimum-variance property for mean-unbiased estimators). Such constructions exist for probability distributions having monotone likelihood functions. One such procedure is an analogue of the Rao–Blackwell procedure for mean-unbiased estimators: the procedure holds for a smaller class of probability distributions than does the Rao–Blackwell procedure but for a larger class of loss functions.


History

Scientific researchers in the ancient Near East appear not to have used summary statistics at all, instead choosing values that offered maximal consistency with a broader theory that integrated a wide variety of phenomena. Within the Mediterranean (and, later, European) scholarly community, statistics like the mean are fundamentally a medieval and early modern development. (The history of the median outside Europe and its predecessors remains relatively unstudied.) The idea of the median appeared in the 6th century in the Talmud, in order to fairly analyze divergent appraisals. However, the concept did not spread to the broader scientific community. Instead, the closest ancestor of the modern median is the mid-range, invented by Al-Biruni. Transmission of Al-Biruni's work to later scholars is unclear. Al-Biruni applied his technique to assaying metals, but, after he published his work, most assayers still adopted the most unfavorable value from their results, lest they appear to cheat. However, increased navigation at sea during the Age of Discovery meant that ships' navigators increasingly had to attempt to determine latitude in unfavorable weather against hostile shores, leading to renewed interest in summary statistics. Whether rediscovered or independently invented, the mid-range is recommended to nautical navigators in Harriot's "Instructions for Raleigh's Voyage to Guiana, 1595". The idea of the median may have first appeared in Edward Wright's 1599 book ''Certaine Errors in Navigation'', in a section about compass navigation. Wright was reluctant to discard measured values, and may have felt that the median, incorporating a greater proportion of the dataset than the mid-range, was more likely to be correct. However, Wright did not give examples of his technique's use, making it hard to verify that he described the modern notion of median. The median (in the context of probability) certainly appeared in the correspondence of Christiaan Huygens, but as an example of a statistic that was inappropriate for actuarial practice. The earliest recommendation of the median dates to 1757, when Roger Joseph Boscovich developed a regression method based on the ''L''1 norm and therefore implicitly on the median. In 1774, Laplace made this desire explicit: he suggested the median be used as the standard estimator of the value of a posterior probability density function. The specific criterion was to minimize the expected magnitude of the error, |\alpha - \alpha^*|, where \alpha^* is the estimate and \alpha is the true value. To this end, Laplace determined the distributions of both the sample mean and the sample median in the early 1800s (Laplace PS de (1818) ''Deuxième supplément à la Théorie Analytique des Probabilités'', Paris, Courcier). However, a decade later, Gauss and Legendre developed the least squares method, which minimizes (\alpha - \alpha^*)^2 to obtain the mean. Within the context of regression, Gauss and Legendre's innovation offers vastly easier computation. Consequently, Laplace's proposal was generally rejected until the rise of computing devices 150 years later (and is still a relatively uncommon algorithm). Antoine Augustin Cournot in 1843 was the first to use the term ''median'' (''valeur médiane'') for the value that divides a probability distribution into two equal halves. Gustav Theodor Fechner used the median (''Centralwerth'') in sociological and psychological phenomena (Keynes, J.M. (1921) ''A Treatise on Probability'', Pt II Ch XVII §5, p 201; 2006 reprint, Cosimo Classics, and multiple other reprints). It had earlier been used only in astronomy and related fields. Fechner popularized the median into the formal analysis of data, although it had been used previously by Laplace, and the median appeared in a textbook by F. Y. Edgeworth. Francis Galton used the English term ''median'' in 1881 (Galton F (1881) "Report of the Anthropometric Committee", ''Report of the 51st Meeting of the British Association for the Advancement of Science'', pp 245–260), having earlier used the terms ''middle-most value'' in 1869 and the ''medium'' in 1880. Statisticians encouraged the use of the median intensely throughout the 19th century for its intuitive clarity and ease of manual computation. However, the notion of median does not lend itself to the theory of higher moments as well as the arithmetic mean does, and is much harder to compute by computer. As a result, the median was steadily supplanted as a notion of generic average by the arithmetic mean during the 20th century.


See also

* Absolute deviation
* Bias of an estimator
* Central tendency
* Concentration of measure for Lipschitz functions
* Median graph
* Median of medians – Algorithm to calculate the approximate median in linear time
* Median search
* Median slope
* Median voter theory
* Medoids – Generalization of the median in higher dimensions


Notes


References


External links

* Median as a weighted arithmetic mean of all sample observations
* On-line calculator
* A problem involving the mean, the median, and the mode
* Python script for median computations and income inequality metrics
* Fast Computation of the Median by Successive Binning
* 'Mean, median, mode and skewness': a tutorial devised for first-year psychology students at Oxford University, based on a worked example
* The Complex SAT Math Problem Even the College Board Got Wrong, Andrew Daniels in ''Popular Mechanics''