In statistics, the bias of an estimator (or bias function) is the difference between this estimator's expected value and the true value of the parameter being estimated. An estimator or decision rule with zero bias is called ''unbiased''. In statistics, "bias" is an objective property of an estimator. Bias is a distinct concept from consistency: consistent estimators converge in probability to the true value of the parameter, but may be biased or unbiased; see bias versus consistency for more.
All else being equal, an unbiased estimator is preferable to a biased estimator, although in practice, biased estimators (with generally small bias) are frequently used. When a biased estimator is used, bounds of the bias are calculated. A biased estimator may be used for various reasons: because an unbiased estimator does not exist without further assumptions about a population; because an estimator is difficult to compute (as in
unbiased estimation of standard deviation); because a biased estimator may be unbiased with respect to different measures of
central tendency; because a biased estimator gives a lower value of some
loss function (particularly
mean squared error) compared with unbiased estimators (notably in
shrinkage estimators); or because in some cases being unbiased is too strong a condition, and the only unbiased estimators are not useful.
Bias can also be measured with respect to the
median, rather than the mean (expected value), in which case one distinguishes ''median''-unbiased from the usual ''mean''-unbiasedness property.
Mean-unbiasedness is not preserved under non-linear
transformations, though median-unbiasedness is (see § Effect of transformations); for example, the sample variance is a biased estimator for the population variance. These are all illustrated below.
Definition
Suppose we have a statistical model, parameterized by a real number ''θ'', giving rise to a probability distribution for observed data, <math>P_\theta(x) = P(x\mid\theta)</math>, and a statistic <math>\hat\theta</math> which serves as an estimator of ''θ'' based on any observed data <math>x</math>. That is, we assume that our data follow some unknown distribution <math>P(x\mid\theta)</math> (where ''θ'' is a fixed, unknown constant that is part of this distribution), and then we construct some estimator <math>\hat\theta</math> that maps observed data to values that we hope are close to ''θ''. The bias of <math>\hat\theta</math> relative to <math>\theta</math> is defined as
: <math>\operatorname{Bias}(\hat\theta, \theta) = \operatorname{Bias}_\theta\big[\,\hat\theta\,\big] = \operatorname{E}_{x\mid\theta}\big[\,\hat\theta\,\big] - \theta = \operatorname{E}_{x\mid\theta}\big[\,\hat\theta - \theta\,\big],</math>
where <math>\operatorname{E}_{x\mid\theta}</math> denotes expected value over the distribution <math>P(x\mid\theta)</math> (i.e., averaging over all possible observations <math>x</math>). The second equation follows since ''θ'' is measurable with respect to the conditional distribution <math>P(x\mid\theta)</math>.
An estimator is said to be unbiased if its bias is equal to zero for all values of parameter ''θ'', or equivalently, if the expected value of the estimator matches that of the parameter.
In a simulation experiment concerning the properties of an estimator, the bias of the estimator may be assessed using the
mean signed difference.
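For instance, a minimal simulation sketch of such a check (assuming a normal population; NumPy, with illustrative variable names) compares the average estimate against the known true value:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n, reps = 5.0, 2.0, 10, 100_000

# Draw many samples and apply the estimator to each one.
samples = rng.normal(mu, sigma, size=(reps, n))
uncorrected_var = samples.var(axis=1, ddof=0)   # divides by n

# Empirical bias = mean signed difference between estimates and the true value.
empirical_bias = np.mean(uncorrected_var - sigma**2)
theoretical_bias = -sigma**2 / n                # E[S^2] - sigma^2 = -(1/n) sigma^2
print(empirical_bias, theoretical_bias)         # both close to -0.4
</syntaxhighlight>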
Examples
Sample variance
The
sample variance
of a random variable demonstrates two aspects of estimator bias: firstly, the naive estimator is biased, which can be corrected by a scale factor; second, the unbiased estimator is not optimal in terms of
mean squared error (MSE), which can be minimized by using a different scale factor, resulting in a biased estimator with lower MSE than the unbiased estimator. Concretely, the naive estimator sums the squared deviations and divides by ''n,'' which is biased. Dividing instead by ''n'' − 1 yields an unbiased estimator. Conversely, MSE can be minimized by dividing by a different number (depending on distribution), but this results in a biased estimator. This number is always larger than ''n'' − 1, so this is known as a
shrinkage estimator, as it "shrinks" the unbiased estimator towards zero; for the normal distribution the optimal value is ''n'' + 1.
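A short Monte Carlo sketch of this comparison for normally distributed data (illustrative NumPy code; the divisors are the only point of interest):
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
mu, sigma2, n, reps = 0.0, 1.0, 5, 200_000

x = rng.normal(mu, np.sqrt(sigma2), size=(reps, n))
ss = ((x - x.mean(axis=1, keepdims=True))**2).sum(axis=1)  # sum of squared deviations

for divisor, label in [(n, "n (naive)"), (n - 1, "n-1 (unbiased)"), (n + 1, "n+1 (min MSE, normal)")]:
    est = ss / divisor
    bias = est.mean() - sigma2
    mse = np.mean((est - sigma2)**2)
    print(f"{label:>22}: bias={bias:+.4f}  MSE={mse:.4f}")
# Dividing by n-1 gives ~zero bias; dividing by n+1 gives the smallest MSE.
</syntaxhighlight>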
Suppose ''X''<sub>1</sub>, ..., ''X''<sub>''n''</sub> are independent and identically distributed (i.i.d.) random variables with expectation ''μ'' and variance ''σ''<sup>2</sup>. If the sample mean and uncorrected sample variance are defined as
: <math>\overline{X} = \frac{1}{n}\sum_{i=1}^n X_i, \qquad S^2 = \frac{1}{n}\sum_{i=1}^n\big(X_i - \overline{X}\,\big)^2,</math>
then ''S''<sup>2</sup> is a biased estimator of ''σ''<sup>2</sup>, because
: <math>\begin{align}
\operatorname{E}[S^2] &= \operatorname{E}\left[\frac{1}{n}\sum_{i=1}^n\big(X_i-\overline{X}\,\big)^2\right]
 = \operatorname{E}\left[\frac{1}{n}\sum_{i=1}^n\big((X_i-\mu)-(\overline{X}-\mu)\big)^2\right] \\[4pt]
&= \operatorname{E}\left[\frac{1}{n}\sum_{i=1}^n(X_i-\mu)^2 - \frac{2}{n}(\overline{X}-\mu)\sum_{i=1}^n(X_i-\mu) + (\overline{X}-\mu)^2\right].
\end{align}</math>
To continue, we note that by subtracting ''μ'' from both sides of <math>\overline{X} = \frac{1}{n}\sum_{i=1}^n X_i</math>, we get
: <math>\overline{X} - \mu = \frac{1}{n}\sum_{i=1}^n X_i - \mu = \frac{1}{n}\sum_{i=1}^n (X_i - \mu).</math>
Meaning, (by cross-multiplication) <math>n\cdot(\overline{X}-\mu) = \sum_{i=1}^n(X_i-\mu)</math>. Then, the previous becomes:
: <math>\begin{align}
\operatorname{E}[S^2] &= \operatorname{E}\left[\frac{1}{n}\sum_{i=1}^n(X_i-\mu)^2 - \frac{2}{n}(\overline{X}-\mu)\cdot n\cdot(\overline{X}-\mu) + (\overline{X}-\mu)^2\right] \\[4pt]
&= \operatorname{E}\left[\frac{1}{n}\sum_{i=1}^n(X_i-\mu)^2 - (\overline{X}-\mu)^2\right]
 = \sigma^2 - \operatorname{E}\left[(\overline{X}-\mu)^2\right]
 = \left(1-\frac{1}{n}\right)\sigma^2 = \frac{n-1}{n}\,\sigma^2 < \sigma^2 .
\end{align}</math>
This can be seen by noting the following formula, which follows from the Bienaymé formula, for the term in the inequality for the expectation of the uncorrected sample variance above: <math>\operatorname{E}\left[(\overline{X}-\mu)^2\right] = \frac{1}{n}\sigma^2</math>.
In other words, the expected value of the uncorrected sample variance does not equal the population variance ''σ''<sup>2</sup>, unless multiplied by a normalization factor. The sample mean, on the other hand, is an unbiased estimator of the population mean ''μ''.
Note that the usual definition of sample variance is <math>s^2 = \frac{1}{n-1}\sum_{i=1}^n\big(X_i-\overline{X}\,\big)^2</math>, and this is an unbiased estimator of the population variance.
Algebraically speaking, <math>s^2</math> is unbiased because:
: <math>\begin{align}
\operatorname{E}[s^2] &= \operatorname{E}\left[\frac{n}{n-1}\,S^2\right] = \frac{n}{n-1}\operatorname{E}[S^2] \\[4pt]
&= \frac{n}{n-1}\cdot\frac{n-1}{n}\,\sigma^2 = \sigma^2 ,
\end{align}</math>
where the transition to the second line uses the result derived above for the biased estimator. Thus <math>\operatorname{E}[s^2] = \sigma^2</math>, and therefore <math>s^2 = \frac{n}{n-1}\,S^2</math> is an unbiased estimator of the population variance, ''σ''<sup>2</sup>. The ratio between the biased (uncorrected) and unbiased estimates of the variance is known as
Bessel's correction.
The reason that an uncorrected sample variance, ''S''<sup>2</sup>, is biased stems from the fact that the sample mean is an ordinary least squares (OLS) estimator for ''μ'': <math>\overline{X}</math> is the number that makes the sum <math>\sum_{i=1}^n (X_i - \overline{X})^2</math> as small as possible. That is, when any other number is plugged into this sum, the sum can only increase. In particular, the choice <math>\mu \ne \overline{X}</math> gives,
: <math>\frac{1}{n}\sum_{i=1}^n (X_i-\overline{X})^2 < \frac{1}{n}\sum_{i=1}^n (X_i-\mu)^2 ,</math>
and then
: <math>\operatorname{E}[S^2] = \operatorname{E}\left[\frac{1}{n}\sum_{i=1}^n (X_i-\overline{X})^2\right] < \operatorname{E}\left[\frac{1}{n}\sum_{i=1}^n (X_i-\mu)^2\right] = \sigma^2 .</math>
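As a concrete illustration of this minimising property, a tiny sketch with arbitrary example numbers:
<syntaxhighlight lang="python">
import numpy as np

x = np.array([2.0, 3.0, 7.0, 10.0])
xbar = x.mean()   # 5.5

def sum_sq(a):
    # Sum of squared deviations of the data from a candidate value a.
    return ((x - a)**2).sum()

print(sum_sq(xbar))        # 41.0, the smallest possible value
print(sum_sq(xbar + 1.0))  # 45.0, larger
print(sum_sq(5.0))         # 42.0, larger (any value other than xbar increases the sum)
</syntaxhighlight>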
The above discussion can be understood in geometric terms: the vector <math>\vec{C} = (X_1-\mu, \ldots, X_n-\mu)</math> can be decomposed into the "mean part" and "variance part" by projecting to the direction of <math>\vec{u} = (1, \ldots, 1)</math> and to that direction's orthogonal complement hyperplane. One gets <math>\vec{A} = (\overline{X}-\mu, \ldots, \overline{X}-\mu)</math> for the part along <math>\vec{u}</math> and <math>\vec{B} = (X_1-\overline{X}, \ldots, X_n-\overline{X})</math> for the complementary part. Since this is an orthogonal decomposition, Pythagorean theorem says <math>|\vec{C}|^2 = |\vec{A}|^2 + |\vec{B}|^2</math>, and taking expectations we get <math>n\sigma^2 = n\operatorname{E}\left[(\overline{X}-\mu)^2\right] + n\operatorname{E}[S^2]</math>, as above (but times <math>n</math>).
If the distribution of <math>\vec{C}</math> is rotationally symmetric, as in the case when <math>X_i</math> are sampled from a Gaussian, then on average, the dimension along <math>\vec{u}</math> contributes to <math>|\vec{C}|^2</math> equally as the <math>n-1</math> directions perpendicular to <math>\vec{u}</math>, so that <math>\operatorname{E}\left[(\overline{X}-\mu)^2\right] = \frac{\sigma^2}{n}</math> and <math>\operatorname{E}[S^2] = \frac{(n-1)\sigma^2}{n}</math>. This is in fact true in general, as explained above.
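A small numeric sketch of this decomposition (vector names follow the prose above; data and seed are arbitrary):
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)
mu, sigma, n = 1.0, 2.0, 6
x = rng.normal(mu, sigma, n)
xbar = x.mean()

C = x - mu                      # full deviation vector
A = np.full(n, xbar - mu)       # projection onto the all-ones direction ("mean part")
B = x - xbar                    # orthogonal complement ("variance part")

print(np.allclose(C, A + B))                     # True: decomposition
print(np.isclose(A @ B, 0.0))                    # True: orthogonality
print(np.isclose(C @ C, (A @ A) + (B @ B)))      # True: Pythagorean theorem
</syntaxhighlight>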
Estimating a Poisson probability
A far more extreme case of a biased estimator being better than any unbiased estimator arises from the
Poisson distribution. Suppose that ''X'' has a Poisson distribution with expectation ''λ''. Suppose it is desired to estimate
: <math>\operatorname{P}(X=0)^2 = e^{-2\lambda}</math>
with a sample of size 1. (For example, when incoming calls at a telephone switchboard are modeled as a Poisson process, and ''λ'' is the average number of calls per minute, then ''e''<sup>−2''λ''</sup> is the probability that no calls arrive in the next two minutes.)
Since the expectation of an unbiased estimator ''δ''(''X'') is equal to the
estimand, i.e.
: <math>\operatorname{E}[\delta(X)] = \sum_{x=0}^\infty \delta(x)\,\frac{\lambda^x e^{-\lambda}}{x!} = e^{-2\lambda},</math>
the only function of the data constituting an unbiased estimator is
: <math>\delta(X) = (-1)^X .</math>
To see this, note that when decomposing ''e''<sup>−''λ''</sup> from the above expression for expectation, the sum that is left is a Taylor series expansion of ''e''<sup>−''λ''</sup> as well, yielding ''e''<sup>−''λ''</sup>''e''<sup>−''λ''</sup> = ''e''<sup>−2''λ''</sup> (see Characterizations of the exponential function).
If the observed value of ''X'' is 100, then the estimate is 1, although the true value of the quantity being estimated is very likely to be near 0, which is the opposite extreme. And, if ''X'' is observed to be 101, then the estimate is even more absurd: It is −1, although the quantity being estimated must be positive.
The (biased)
maximum likelihood estimator
: <math>e^{-2X}</math>
is far better than this unbiased estimator. Not only is its value always positive but it is also more accurate in the sense that its
mean squared error
: <math>e^{\lambda\left(1/e^4 - 1\right)} - 2e^{\lambda\left(1/e^2 - 3\right)} + e^{-4\lambda}</math>
is smaller; compare the unbiased estimator's MSE of
: <math>1 - e^{-4\lambda}.</math>
The MSEs are functions of the true value ''λ''. The bias of the maximum-likelihood estimator is:
: <math>e^{\lambda\left(1/e^2 - 1\right)} - e^{-2\lambda}.</math>
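A Monte Carlo sketch comparing the two estimators (assuming ''λ'' = 2 purely for illustration):
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(3)
lam, reps = 2.0, 500_000
target = np.exp(-2 * lam)

X = rng.poisson(lam, reps)            # one observation per replication
unbiased = (-1.0) ** X                # delta(X) = (-1)^X
mle = np.exp(-2.0 * X)                # maximum likelihood estimator

for name, est in [("unbiased (-1)^X", unbiased), ("MLE exp(-2X)", mle)]:
    print(f"{name:>16}: bias={est.mean() - target:+.4f}  MSE={np.mean((est - target)**2):.4f}")
# The unbiased estimator has ~zero bias but far larger MSE (its values are only +1 or -1).
</syntaxhighlight>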
Maximum of a discrete uniform distribution
The bias of maximum-likelihood estimators can be substantial. Consider a case where ''n'' tickets numbered from 1 through to ''n'' are placed in a box and one is selected at random, giving a value ''X''. If ''n'' is unknown, then the maximum-likelihood estimator of ''n'' is ''X'', even though the expectation of ''X'' given ''n'' is only (''n'' + 1)/2; we can be certain only that ''n'' is at least ''X'' and is probably more. In this case, the natural unbiased estimator is 2''X'' − 1.
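An illustrative simulation of this single-draw example (taking ''n'' = 50 as the unknown truth):
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(4)
n_true, reps = 50, 200_000

X = rng.integers(1, n_true + 1, size=reps)   # one ticket drawn uniformly from 1..n

mle = X                                      # maximum-likelihood estimator of n
unbiased = 2 * X - 1                         # natural unbiased estimator

print(mle.mean())        # ~ (n+1)/2 = 25.5, so the MLE's bias is roughly -(n-1)/2
print(unbiased.mean())   # ~ 50, i.e. approximately unbiased
</syntaxhighlight>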
Median-unbiased estimators
The theory of
median-unbiased estimators was revived by George W. Brown in 1947.
Further properties of median-unbiased estimators have been noted by Lehmann, Birnbaum, van der Vaart and Pfanzagl. In particular, median-unbiased estimators exist in cases where mean-unbiased and
maximum-likelihood
estimators do not exist. They are invariant under
one-to-one transformations.
There are methods of constructing median-unbiased estimators for probability distributions that have
monotone likelihood-functions, such as one-parameter exponential families, to ensure that they are optimal (in a sense analogous to minimum-variance property considered for mean-unbiased estimators).
One such procedure is an analogue of the Rao–Blackwell procedure for mean-unbiased estimators: The procedure holds for a smaller class of probability distributions than does the Rao–Blackwell procedure for mean-unbiased estimation but for a larger class of loss-functions.
Bias with respect to other loss functions
Any minimum-variance ''mean''-unbiased estimator minimizes the
risk (expected loss) with respect to the squared-error
loss function (among mean-unbiased estimators), as observed by
Gauss.
A minimum-
average absolute deviation ''median''-unbiased estimator minimizes the risk with respect to the
absolute
loss function (among median-unbiased estimators), as observed by
Laplace.
Other loss functions are used in statistics, particularly in
robust statistics.
Effect of transformations
For univariate parameters, median-unbiased estimators remain median-unbiased under
transformations that preserve order (or reverse order).
Note that, when a transformation is applied to a mean-unbiased estimator, the result need not be a mean-unbiased estimator of its corresponding population statistic. By
Jensen's inequality, a
convex function as transformation will introduce positive bias, while a
concave function will introduce negative bias, and a function of mixed convexity may introduce bias in either direction, depending on the specific function and distribution. That is, for a non-linear function ''f'' and a mean-unbiased estimator ''U'' of a parameter ''p'', the composite estimator ''f''(''U'') need not be a mean-unbiased estimator of ''f''(''p''). For example, the
square root
of the unbiased estimator of the population variance is not a mean-unbiased estimator of the population standard deviation: the square root of the unbiased
sample variance
, the corrected
sample standard deviation, is biased. The bias depends both on the sampling distribution of the estimator and on the transform, and can be quite involved to calculate – see
unbiased estimation of standard deviation for a discussion in this case.
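A brief illustrative sketch of this effect for normal samples (the square root is concave, so by Jensen's inequality the transformed estimator is biased low):
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(5)
sigma, n, reps = 3.0, 5, 300_000

x = rng.normal(0.0, sigma, size=(reps, n))
s2_unbiased = x.var(axis=1, ddof=1)          # mean-unbiased for sigma^2
s = np.sqrt(s2_unbiased)                     # corrected sample standard deviation

print(s2_unbiased.mean())   # ~ 9.0: unbiased for the variance
print(s.mean())             # < 3.0: the concave transform introduces negative bias
</syntaxhighlight>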
Bias, variance and mean squared error
While bias quantifies the ''average'' difference to be expected between an estimator and an underlying parameter, an estimator based on a finite sample can additionally be expected to differ from the parameter due to the randomness in the sample.
An estimator that minimises the bias will not necessarily minimise the mean square error.
One measure which is used to try to reflect both types of difference is the
mean square error,
: <math>\operatorname{MSE}(\hat\theta) = \operatorname{E}\big[(\hat\theta - \theta)^2\big].</math>
This can be shown to be equal to the square of the bias, plus the variance:
: <math>\operatorname{MSE}(\hat\theta) = \big(\operatorname{E}[\hat\theta] - \theta\big)^2 + \operatorname{Var}(\hat\theta) = \operatorname{Bias}(\hat\theta, \theta)^2 + \operatorname{Var}(\hat\theta).</math>
When the parameter is a vector, an analogous decomposition applies:
: <math>\operatorname{MSE}(\hat{\boldsymbol\theta}) = \operatorname{trace}\big(\operatorname{Cov}(\hat{\boldsymbol\theta})\big) + \big\lVert\operatorname{Bias}(\hat{\boldsymbol\theta},\boldsymbol\theta)\big\rVert^2 ,</math>
where <math>\operatorname{trace}\big(\operatorname{Cov}(\hat{\boldsymbol\theta})\big)</math> is the trace (diagonal sum) of the covariance matrix of the estimator and <math>\big\lVert\operatorname{Bias}(\hat{\boldsymbol\theta},\boldsymbol\theta)\big\rVert^2</math> is the square vector norm.
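The scalar decomposition can be checked numerically; a minimal sketch, using normal data and the uncorrected variance as the estimator:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(6)
sigma2, n, reps = 4.0, 8, 400_000

x = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
est = x.var(axis=1, ddof=0)                  # biased (uncorrected) variance estimator

mse = np.mean((est - sigma2)**2)
bias = est.mean() - sigma2
var = est.var()

# The two quantities agree exactly over the simulated estimates (an algebraic identity,
# not just a Monte Carlo coincidence): MSE = bias^2 + variance.
print(mse, bias**2 + var)
</syntaxhighlight>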
Example: Estimation of population variance
For example, suppose an estimator of the form
: <math>T^2 = c\sum_{i=1}^n\left(X_i-\overline{X}\,\right)^2 = c\,nS^2</math>
is sought for the population variance as above, but this time to minimise the MSE:
: <math>\operatorname{MSE}(T^2) = \operatorname{E}\left[\left(T^2 - \sigma^2\right)^2\right] = \left(\operatorname{E}\left[T^2 - \sigma^2\right]\right)^2 + \operatorname{Var}\left(T^2\right).</math>
If the variables ''X''<sub>1</sub> ... ''X''<sub>''n''</sub> follow a normal distribution, then ''nS''<sup>2</sup>/''σ''<sup>2</sup> has a chi-squared distribution with ''n'' − 1 degrees of freedom, giving:
: <math>\operatorname{E}\big[nS^2\big] = (n-1)\sigma^2 \quad\text{and}\quad \operatorname{Var}\big(nS^2\big) = 2(n-1)\sigma^4 ,</math>
and so
: <math>\operatorname{MSE}(T^2) = \big(c(n-1)-1\big)^2\sigma^4 + 2c^2(n-1)\sigma^4 .</math>
With a little algebra it can be confirmed that it is ''c'' = 1/(''n'' + 1) which minimises this combined loss function, rather than ''c'' = 1/(''n'' − 1) which minimises just the square of the bias.
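A small numeric sketch confirming the minimiser, using the closed-form MSE above (illustrative; ''σ''<sup>4</sup> factors out):
<syntaxhighlight lang="python">
import numpy as np

n = 10
sigma4 = 1.0  # sigma^4 is a common factor, so set it to 1

def mse(c):
    # MSE of c * sum((X_i - Xbar)^2) as an estimator of sigma^2, for normal data.
    return (c * (n - 1) - 1)**2 * sigma4 + 2 * c**2 * (n - 1) * sigma4

cs = np.linspace(0.01, 0.3, 10_000)
c_best = cs[np.argmin(mse(cs))]
print(c_best, 1 / (n + 1), 1 / (n - 1))   # numeric minimiser ~ 1/(n+1), not 1/(n-1)
</syntaxhighlight>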
More generally it is only in restricted classes of problems that there will be an estimator that minimises the MSE independently of the parameter values.
However it is very common that there may be perceived to be a ''
bias–variance tradeoff'', such that a small increase in bias can be traded for a larger decrease in variance, resulting in a more desirable estimator overall.
Bayesian view
Most Bayesians are rather unconcerned about unbiasedness (at least in the formal sampling-theory sense above) of their estimates. For example, Gelman and coauthors (1995) write: "From a Bayesian perspective, the principle of unbiasedness is reasonable in the limit of large samples, but otherwise it is potentially misleading."
Fundamentally, the difference between the
Bayesian approach and the sampling-theory approach above is that in the sampling-theory approach the parameter is taken as fixed, and then probability distributions of a statistic are considered, based on the predicted sampling distribution of the data. For a Bayesian, however, it is the ''data'' which are known, and fixed, and it is the unknown parameter for which an attempt is made to construct a probability distribution, using
Bayes' theorem:
: <math>p(\theta \mid D, I) \propto p(\theta \mid I)\, p(D \mid \theta, I).</math>
Here the second term, the
likelihood of the data given the unknown parameter value θ, depends just on the data obtained and the modelling of the data generation process. However a Bayesian calculation also includes the first term, the
prior probability for θ, which takes account of everything the analyst may know or suspect about θ ''before'' the data comes in. This information plays no part in the sampling-theory approach; indeed any attempt to include it would be considered "bias" away from what was pointed to purely by the data. To the extent that Bayesian calculations include prior information, it is therefore essentially inevitable that their results will not be "unbiased" in sampling theory terms.
But the results of a Bayesian approach can differ from the sampling theory approach even if the Bayesian tries to adopt an "uninformative" prior.
For example, consider again the estimation of an unknown population variance ''σ''<sup>2</sup> of a Normal distribution with unknown mean, where it is desired to optimise ''c'' in the expected loss function
: <math>\operatorname{ExpectedLoss} = \operatorname{E}\left[\left(cnS^2 - \sigma^2\right)^2\right] = \operatorname{E}\left[\sigma^4\left(cn\tfrac{S^2}{\sigma^2} - 1\right)^2\right].</math>