Bessel's correction

In statistics, Bessel's correction is the use of ''n'' − 1 instead of ''n'' in the formula for the sample variance and sample standard deviation, where ''n'' is the number of observations in a sample. This method corrects the bias in the estimation of the population variance. It also partially corrects the bias in the estimation of the population standard deviation. However, the correction often increases the mean squared error in these estimations. This technique is named after Friedrich Bessel.


Formulation

In estimating the population variance from a sample when the population mean is unknown, the uncorrected sample variance is the ''mean'' of the squares of the deviations of sample values from the sample mean (i.e. using a multiplicative factor 1/''n''). In this case, the sample variance is a biased estimator of the population variance. Multiplying the uncorrected sample variance by the factor

: \frac{n}{n-1}

gives an ''unbiased'' estimator of the population variance. In some literature, the above factor is called Bessel's correction.

One can understand Bessel's correction as the degrees of freedom in the residuals vector (residuals, not errors, because the population mean is unknown):

: (x_1-\overline{x},\,\dots,\,x_n-\overline{x}),

where \overline{x} is the sample mean. While there are ''n'' independent observations in the sample, there are only ''n'' − 1 independent residuals, as they sum to 0. For a more intuitive explanation of the need for Bessel's correction, see the ''Source of bias'' section below.

Generally, Bessel's correction is an approach to reduce the bias due to finite sample size. Such finite-sample bias correction is also needed for other estimates like skewness and kurtosis, but in these the inaccuracies are often significantly larger. To fully remove such bias it is necessary to do a more complex multi-parameter estimation. For instance, a correct correction for the standard deviation depends on the kurtosis (normalized central 4th moment), but this again has a finite-sample bias and it depends on the standard deviation; that is, both estimations have to be merged.
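
As a concrete illustration of this factor, the following short Python sketch (not part of the original article; the normal population, sample size and trial count are arbitrary choices) estimates the expected values of the uncorrected and corrected sample variances by simulation. The uncorrected average comes out near (''n'' − 1)/''n'' · ''σ''², the corrected one near ''σ''².

 import random

 # Arbitrary illustrative population: normal with known mean and variance.
 MU, SIGMA2 = 10.0, 4.0
 N = 5            # sample size
 TRIALS = 200_000 # number of simulated samples

 sum_biased = sum_unbiased = 0.0
 for _ in range(TRIALS):
     sample = [random.gauss(MU, SIGMA2 ** 0.5) for _ in range(N)]
     mean = sum(sample) / N
     ss = sum((x - mean) ** 2 for x in sample)  # sum of squared deviations from the sample mean
     sum_biased += ss / N          # divide by n: uncorrected (biased) estimator
     sum_unbiased += ss / (N - 1)  # divide by n - 1: Bessel's correction

 print("average of s_n^2:", sum_biased / TRIALS)    # close to (N-1)/N * SIGMA2 = 3.2
 print("average of s^2:  ", sum_unbiased / TRIALS)  # close to SIGMA2 = 4.0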


Caveats

There are three caveats to consider regarding Bessel's correction:
# It does not yield an unbiased estimator of standard ''deviation''.
# The corrected estimator often has a higher mean squared error (MSE) than the uncorrected estimator. Furthermore, there is no population distribution for which it has the minimum MSE, because a different scale factor can always be chosen to minimize MSE.
# It is only necessary when the population mean is unknown (and estimated as the sample mean). In practice, this generally happens.

Firstly, while the sample variance (using Bessel's correction) is an unbiased estimator of the population variance, its square root, the sample standard deviation, is a ''biased'' estimate of the population standard deviation; because the square root is a concave function, the bias is downward, by Jensen's inequality. There is no general formula for an unbiased estimator of the population standard deviation, though there are correction factors for particular distributions, such as the normal; see unbiased estimation of standard deviation for details. An approximation for the exact correction factor for the normal distribution is given by using ''n'' − 1.5 in the formula: the bias decays quadratically (rather than linearly, as in the uncorrected form and Bessel's corrected form).

Secondly, the unbiased estimator does not minimize mean squared error (MSE), and generally has worse MSE than the uncorrected estimator (this varies with excess kurtosis). MSE can be minimized by using a different factor. The optimal value depends on excess kurtosis, as discussed in mean squared error: variance; for the normal distribution this is optimized by dividing by ''n'' + 1 (instead of ''n'' − 1 or ''n''), as the sketch below illustrates.

Thirdly, Bessel's correction is only necessary when the population mean is unknown, and one is estimating ''both'' the population mean ''and'' the population variance from a given sample, using the sample mean to estimate the population mean. In that case there are ''n'' degrees of freedom in a sample of ''n'' points, and simultaneous estimation of mean and variance means one degree of freedom goes to the sample mean and the remaining ''n'' − 1 degrees of freedom (the ''residuals'') go to the sample variance. However, if the population mean is known, then the deviations of the observations from the population mean have ''n'' degrees of freedom (because the mean is not being estimated – the deviations are not residuals but ''errors'') and Bessel's correction is not applicable.
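
To illustrate the second caveat, the sketch below (purely illustrative; the standard normal population and the constants are arbitrary) estimates the mean squared error of the variance estimators obtained by dividing the sum of squared deviations by ''n'' − 1, ''n'' and ''n'' + 1. For normal data the divisor ''n'' + 1 should give the smallest MSE, even though only ''n'' − 1 gives an unbiased estimate.

 import random

 MU, SIGMA2 = 0.0, 1.0  # standard normal population (arbitrary choice)
 N = 10
 TRIALS = 200_000

 sq_err = {N - 1: 0.0, N: 0.0, N + 1: 0.0}  # accumulated squared error per divisor
 for _ in range(TRIALS):
     sample = [random.gauss(MU, SIGMA2 ** 0.5) for _ in range(N)]
     mean = sum(sample) / N
     ss = sum((x - mean) ** 2 for x in sample)
     for divisor in sq_err:
         sq_err[divisor] += (ss / divisor - SIGMA2) ** 2

 for divisor, total in sq_err.items():
     print(f"divisor {divisor}: MSE approx {total / TRIALS:.4f}")
 # Expected ordering for normal data: MSE(n+1) < MSE(n) < MSE(n-1).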


Source of bias

Most simply, to understand the bias that needs correcting, think of an extreme case. Suppose the population is (0, 0, 0, 1, 2, 9), which has a population mean of 2 and a population variance of 10 1/3. A sample of ''n'' = 1 is drawn, and it turns out to be x_1 = 0. The best estimate of the population mean is \overline{x} = x_1/n = 0/1 = 0. But what if we use the formula (x_1-\overline{x})^2/n = (0-0)^2/1 = 0 to estimate the variance? The estimate of the variance would be zero – and the estimate would be zero for any population and any sample of ''n'' = 1. The problem is that in estimating the sample mean, the process has already made our estimate of the mean close to the value we sampled – identical, for ''n'' = 1. In the case of ''n'' = 1, the variance just cannot be estimated, because there is no variability in the sample.

But consider ''n'' = 2. Suppose the sample were (0, 2). Then \overline{x} = 1 and

: \left[(x_1-\overline{x})^2 + (x_2-\overline{x})^2\right]/n = (1+1)/2 = 1,

but with Bessel's correction,

: \left[(x_1-\overline{x})^2 + (x_2-\overline{x})^2\right]/(n-1) = (1+1)/1 = 2,

which is an unbiased estimate (if all possible samples of ''n'' = 2 are taken and this method is used, the average estimate will be 12.4, the same as the sample variance with Bessel's correction).

To see this in more detail, consider the following example. Suppose the mean of the whole population is 2050, but the statistician does not know that, and must estimate it based on this small sample chosen randomly from the population:

: 2051,\quad 2053,\quad 2055,\quad 2050,\quad 2051

One may compute the sample average:

: \frac{1}{5}\left(2051 + 2053 + 2055 + 2050 + 2051\right) = 2052

This may serve as an observable estimate of the unobservable population average, which is 2050. Now we face the problem of estimating the population variance. That is the average of the squares of the deviations from 2050. If we knew that the population average is 2050, we could proceed as follows:

: \begin{align} & \frac{1}{5}\left[(2051 - 2050)^2 + (2053 - 2050)^2 + (2055 - 2050)^2 + (2050 - 2050)^2 + (2051 - 2050)^2\right] \\ = {} & \frac{1 + 9 + 25 + 0 + 1}{5} = \frac{36}{5} = 7.2 \end{align}

But our estimate of the population average is the sample average, 2052. The actual average, 2050, is unknown. So the sample average, 2052, must be used:

: \begin{align} & \frac{1}{5}\left[(2051 - 2052)^2 + (2053 - 2052)^2 + (2055 - 2052)^2 + (2050 - 2052)^2 + (2051 - 2052)^2\right] \\ = {} & \frac{1 + 1 + 9 + 4 + 1}{5} = \frac{16}{5} = 3.2 \end{align}

The variance is now a lot smaller. As proven below, the variance will almost always be smaller when calculated using the sum of squared distances to the sample mean, compared to using the sum of squared distances to the population mean. The one exception is when the sample mean happens to equal the population mean, in which case the two variances are equal.

To see why this happens, we use a simple identity in algebra:

: (a+b)^2 = a^2 + 2ab + b^2,

with ''a'' representing the deviation of an individual sample from the sample mean, and ''b'' representing the deviation of the sample mean from the population mean. Note that we have simply decomposed the actual deviation of an individual sample from the (unknown) population mean into two components: the deviation of the single sample from the sample mean, which we can compute, and the additional deviation of the sample mean from the population mean, which we cannot.
Now, we apply this identity to the squares of deviations from the population mean. Taking the observation 2053 as an example:

: \begin{align} \overbrace{(2053 - 2050)^2}^{\text{squared deviation from the population mean}} &= \left[\overbrace{(2053 - 2052)}^{a} + \overbrace{(2052 - 2050)}^{b}\right]^2 \\ &= \overbrace{(2053 - 2052)^2}^{a^2} + \overbrace{2(2053 - 2052)(2052 - 2050)}^{2ab} + \overbrace{(2052 - 2050)^2}^{b^2} \end{align}

Now apply this to all five observations and observe certain patterns:

: \begin{alignat}{2} \overbrace{(2051 - 2052)^2}^{a^2}\ &+\ \overbrace{2(2051 - 2052)(2052 - 2050)}^{2ab}\ &&+\ \overbrace{(2052 - 2050)^2}^{b^2} \\ (2053 - 2052)^2\ &+\ 2(2053 - 2052)(2052 - 2050)\ &&+\ (2052 - 2050)^2 \\ (2055 - 2052)^2\ &+\ 2(2055 - 2052)(2052 - 2050)\ &&+\ (2052 - 2050)^2 \\ (2050 - 2052)^2\ &+\ 2(2050 - 2052)(2052 - 2050)\ &&+\ (2052 - 2050)^2 \\ (2051 - 2052)^2\ &+\ \underbrace{2(2051 - 2052)(2052 - 2050)}_{\text{the entries in this middle column sum to zero}}\ &&+\ (2052 - 2050)^2 \end{alignat}

The sum of the entries in the middle column must be zero because the sum of the deviations from the sample mean (the terms ''a'') is zero: the five individual observations (left side within parentheses) add up to the same total as five copies of their sample mean (2052), so subtracting one sum from the other gives zero. The factor 2 and the term ''b'' are the same in every row of the middle column, so they do not change this. The following statements explain the meaning of the remaining columns:
* The sum of the entries in the first column (''a''²) is the sum of the squares of the distances from the samples to the sample mean;
* The sum of the entries in the last column (''b''²) is the sum of the squared distances between the measured sample mean and the correct population mean;
* Every single row now consists of a pair of ''a''² (biased, because the sample mean is used) and ''b''² (correction of the bias, because it takes the difference between the "real" population mean and the inaccurate sample mean into account). Therefore the sum of all entries of the first and last columns represents the correct variance, meaning that the sum of squared distances between the samples and the population mean is now used;
* The sum of the ''a''²-column and the ''b''²-column must be bigger than the sum of the ''a''²-column alone, since all the entries within the ''b''²-column are positive (except when the population mean is the same as the sample mean, in which case all of the numbers in the last column are 0).

Therefore:
* The sum of squares of the distances from the samples to the ''population'' mean will always be bigger than the sum of squares of the distances to the ''sample'' mean, except when the sample mean happens to be the same as the population mean, in which case the two are equal.

That is why the sum of squares of the deviations from the ''sample'' mean is too small to give an unbiased estimate of the population variance when the average of those squares is found. The smaller the sample size, the larger the difference between the sample variance and the population variance.
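
The decomposition above is easy to check numerically. The following sketch (illustrative only, reusing the five observations and the population mean 2050 from the example) verifies that the sum of squared deviations from the population mean equals the sum of squared deviations from the sample mean plus ''n'' times the squared deviation of the sample mean from the population mean, i.e. 36 = 16 + 5 × 2².

 sample = [2051, 2053, 2055, 2050, 2051]
 pop_mean = 2050                     # known to us here, but not to the statistician
 n = len(sample)
 sample_mean = sum(sample) / n       # 2052.0

 a2 = sum((x - sample_mean) ** 2 for x in sample)  # the a^2 column: deviations from the sample mean
 b2 = n * (sample_mean - pop_mean) ** 2            # the b^2 column; the 2ab column sums to zero
 total = sum((x - pop_mean) ** 2 for x in sample)  # squared deviations from the population mean

 print(a2, b2, total)     # 16.0 20.0 36
 print(total == a2 + b2)  # True: the cross terms cancel exactly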


Terminology

This correction is so common that the terms "sample variance" and "sample standard deviation" are frequently used to mean the corrected estimators (unbiased sample variance, less biased sample standard deviation), using ''n'' − 1. However, caution is needed: some calculators and software packages may provide for both, or only for the more unusual formulation.

This article uses the following symbols and definitions:
* ''μ'' is the population mean
* \overline{x} is the sample mean
* ''σ''² is the population variance
* ''s_n''² is the biased sample variance (i.e. without Bessel's correction)
* ''s''² is the unbiased sample variance (i.e. with Bessel's correction)

The standard deviations will then be the square roots of the respective variances. Since the square root introduces bias, the terminology "uncorrected" and "corrected" is preferred for the standard deviation estimators:
* ''s_n'' is the uncorrected sample standard deviation (i.e. without Bessel's correction)
* ''s'' is the corrected sample standard deviation (i.e. with Bessel's correction), which is less biased, but still biased


Formula

The sample mean is given by

: \overline{x} = \frac{1}{n}\sum_{i=1}^n x_i.

The biased sample variance is then written:

: s_n^2 = \frac{1}{n} \sum_{i=1}^n \left(x_i - \overline{x}\right)^2 = \frac{\sum_{i=1}^n x_i^2}{n} - \frac{\left(\sum_{i=1}^n x_i\right)^2}{n^2}

and the unbiased sample variance is written:

: s^2 = \frac{1}{n-1} \sum_{i=1}^n \left(x_i - \overline{x}\right)^2 = \frac{\sum_{i=1}^n x_i^2}{n-1} - \frac{\left(\sum_{i=1}^n x_i\right)^2}{(n-1)n} = \left(\frac{n}{n-1}\right)\,s_n^2.
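
A minimal Python rendering of these two formulas (illustrative; the function names are our own) can be checked against the standard library's statistics.pvariance and statistics.variance, which use the 1/''n'' and 1/(''n'' − 1) conventions respectively.

 import statistics

 def biased_sample_variance(xs):
     """s_n^2: mean squared deviation from the sample mean (divisor n)."""
     n = len(xs)
     mean = sum(xs) / n
     return sum((x - mean) ** 2 for x in xs) / n

 def unbiased_sample_variance(xs):
     """s^2: Bessel-corrected sample variance (divisor n - 1)."""
     n = len(xs)
     mean = sum(xs) / n
     return sum((x - mean) ** 2 for x in xs) / (n - 1)

 data = [2051, 2053, 2055, 2050, 2051]
 print(biased_sample_variance(data), statistics.pvariance(data))   # 3.2 3.2
 print(unbiased_sample_variance(data), statistics.variance(data))  # 4.0 4.0
 # The two always differ by the factor n / (n - 1), here 5/4.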


Proof of correctness


Alternative 1

As a background fact, we use the identity E[x^2] = \mu^2 + \sigma^2, which follows from the definition of the standard deviation and linearity of expectation.

A very helpful observation is that for any distribution, the variance equals half the expected value of (x_1 - x_2)^2 when x_1, x_2 are an independent sample from that distribution. To prove this observation we will use that E[x_1 x_2] = E[x_1]\,E[x_2] (which follows from the fact that they are independent) as well as linearity of expectation:

: E[(x_1 - x_2)^2] = E[x_1^2] - 2E[x_1 x_2] + E[x_2^2] = (\sigma^2 + \mu^2) - 2\mu^2 + (\sigma^2 + \mu^2) = 2\sigma^2.

Now that the observation is proven, it suffices to show that the expected squared difference of two observations from the sample population x_1, \ldots, x_n equals (n-1)/n times the expected squared difference of two observations from the original distribution. To see this, note that when we pick x_u and x_v via ''u'', ''v'' being integers selected independently and uniformly from 1 to ''n'', a fraction n/n^2 = 1/n of the time we will have ''u'' = ''v'' and therefore the sampled squared difference is zero independent of the original distribution. The remaining 1 − 1/n of the time, the value of E[(x_u - x_v)^2] is the expected squared difference between two independent observations from the original distribution. Therefore, dividing the sample expected squared difference by 1 − 1/n, or equivalently multiplying by 1/(1 - 1/n) = n/(n-1), gives an unbiased estimate of the original expected squared difference.
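
The argument above can be turned directly into an estimator: half the average of (x_u - x_v)^2 over all distinct pairs of sample points is exactly the Bessel-corrected sample variance. A small sketch (illustrative; the data are arbitrary) checking this identity:

 from itertools import combinations

 def bessel_variance(xs):
     n = len(xs)
     mean = sum(xs) / n
     return sum((x - mean) ** 2 for x in xs) / (n - 1)

 def half_mean_squared_pair_difference(xs):
     # Average (x_u - x_v)^2 over the n(n-1)/2 unordered distinct pairs, then halve it.
     pairs = list(combinations(xs, 2))
     return sum((a - b) ** 2 for a, b in pairs) / (2 * len(pairs))

 data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
 print(bessel_variance(data))                    # 4.571428...
 print(half_mean_squared_pair_difference(data))  # the same value (up to floating-point rounding)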


Alternative 2

Recycling an identity for variance,

: \begin{align} \sum_{i=1}^n \left(x_i - \overline{x}\right)^2 &= \sum_{i=1}^n \left( x_i^2 - 2 x_i \overline{x} + \overline{x}^2 \right) \\ &= \sum_{i=1}^n x_i^2 - 2 \overline{x} \sum_{i=1}^n x_i + \sum_{i=1}^n \overline{x}^2 \\ &= \sum_{i=1}^n x_i^2 - 2n \overline{x}^2 + n \overline{x}^2 \\ &= \sum_{i=1}^n x_i^2 - n \overline{x}^2 \end{align}

and by definition,

: \begin{align} \operatorname{E}(s^2) &= \operatorname{E}\left(\frac{\sum_{i=1}^n \left(x_i - \overline{x}\right)^2}{n-1} \right) \\ &= \frac{1}{n-1}\operatorname{E}\left(\sum_{i=1}^n x_i^2 - n \overline{x}^2 \right) \\ &= \frac{1}{n-1}\left[\sum_{i=1}^n \operatorname{E}\left(x_i^2\right) - n \operatorname{E}\left(\overline{x}^2\right)\right] \\ &= \frac{1}{n-1}\left[\sum_{i=1}^n \left\{\operatorname{Var}(x_i) + \left(\operatorname{E}(x_i)\right)^2\right\} - n \left\{\operatorname{Var}(\overline{x}) + \left(\operatorname{E}(\overline{x})\right)^2\right\}\right] \end{align}

Note that, since ''x''_1, ''x''_2, …, ''x_n'' are a random sample from a distribution with mean ''μ'' and variance ''σ''², it follows that for each ''i'' = 1, 2, …, ''n'':

: \operatorname{E}(x_i) = \mu \quad\text{and}\quad \operatorname{Var}(x_i) = \sigma^2,

and also

: \operatorname{E}(\overline{x}) = \mu \quad\text{and}\quad \operatorname{Var}(\overline{x}) = \frac{\sigma^2}{n}.

This is a property of the variance of uncorrelated variables, arising from the Bienaymé formula. The required result is then obtained by substituting these two formulae:

: \begin{align} \operatorname{E}(s^2) &= \frac{1}{n-1}\left[\sum_{i=1}^n \left(\sigma^2+\mu^2\right) - n\left(\frac{\sigma^2}{n}+\mu^2\right)\right] \\ &= \frac{1}{n-1}\left[n\sigma^2+n\mu^2 - \sigma^2-n\mu^2\right] \\ &= \frac{1}{n-1}(n-1)\sigma^2 \\ &= \sigma^2. \end{align}
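
For completeness, the Bienaymé step used above can be spelled out: since the x_i are uncorrelated and each has variance ''σ''²,

: \operatorname{Var}(\overline{x}) = \operatorname{Var}\left(\frac{1}{n}\sum_{i=1}^n x_i\right) = \frac{1}{n^2}\sum_{i=1}^n \operatorname{Var}(x_i) = \frac{n\sigma^2}{n^2} = \frac{\sigma^2}{n}.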


Alternative 3

The expected discrepancy between the biased estimator and the true variance is

: \begin{align} \operatorname{E}\left[\sigma^2 - s_n^2\right] &= \operatorname{E}\left[\frac{1}{n} \sum_{i=1}^n(x_i - \mu)^2 - \frac{1}{n}\sum_{i=1}^n (x_i - \overline{x})^2 \right] \\ &= \operatorname{E}\left[\frac{1}{n} \sum_{i=1}^n\left((x_i^2 - 2 x_i \mu + \mu^2) - (x_i^2 - 2 x_i \overline{x} + \overline{x}^2)\right) \right] \\ &= \operatorname{E}\left[\frac{1}{n} \sum_{i=1}^n\left(\mu^2 - \overline{x}^2 + 2 x_i (\overline{x}-\mu) \right) \right] \\ &= \operatorname{E}\left[\mu^2 - \overline{x}^2 + \frac{1}{n} \sum_{i=1}^n 2 x_i (\overline{x} - \mu) \right] \\ &= \operatorname{E}\left[\mu^2 - \overline{x}^2 + 2(\overline{x} - \mu)\overline{x} \right] \\ &= \operatorname{E}\left[\mu^2 - 2 \overline{x}\mu + \overline{x}^2 \right] \\ &= \operatorname{E}\left[(\overline{x} - \mu)^2 \right] \\ &= \operatorname{Var}(\overline{x}) \\ &= \frac{\sigma^2}{n} \end{align}

So, the expected value of the biased estimator will be

: \operatorname{E}\left[s^2_n\right] = \sigma^2 - \frac{\sigma^2}{n} = \frac{n-1}{n}\, \sigma^2

So, an unbiased estimator should be given by

: s^2 = \frac{n}{n-1}\, s_n^2.
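
The result E[s_n^2] = \sigma^2 - \sigma^2/n can also be verified exactly for a small discrete population by enumerating every possible sample. The sketch below (illustrative; it reuses the population (0, 0, 0, 1, 2, 9) from the ''Source of bias'' section and draws ''n'' = 2 points with replacement, i.e. the i.i.d. setting assumed in these proofs; drawing without replacement would instead average to 62/5 = 12.4, the figure quoted earlier) averages the biased and corrected estimators over all 36 equally likely ordered samples.

 from itertools import product
 from fractions import Fraction

 population = [0, 0, 0, 1, 2, 9]
 N = len(population)
 pop_mean = Fraction(sum(population), N)                               # 2
 pop_var = sum((Fraction(x) - pop_mean) ** 2 for x in population) / N  # 31/3, i.e. 10 1/3

 n = 2
 biased_total = corrected_total = Fraction(0)
 for sample in product(population, repeat=n):   # all 36 equally likely ordered samples
     mean = Fraction(sum(sample), n)
     ss = sum((Fraction(x) - mean) ** 2 for x in sample)
     biased_total += ss / n
     corrected_total += ss / (n - 1)

 num_samples = N ** n
 print(biased_total / num_samples)     # 31/6 = (n-1)/n times the population variance
 print(corrected_total / num_samples)  # 31/3 = the population variance
 print(pop_var)                        # 31/3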


Intuition

In the biased estimator, by using the sample mean instead of the true mean, you are underestimating each x_i − ''µ'' by \overline{x} − ''µ''. We know that the variance of a sum is the sum of the variances (for uncorrelated variables). So, to find the discrepancy between the biased estimator and the true variance, we just need to find the expected value of (\overline{x} − ''µ'')². This is just the variance of the sample mean, which is ''σ''²/''n''. So, we expect that the biased estimator underestimates ''σ''² by ''σ''²/''n'', and so the biased estimator = (1 − 1/''n'') × the unbiased estimator = (''n'' − 1)/''n'' × the unbiased estimator.


See also

* Bias of an estimator
* Standard deviation
* Unbiased estimation of standard deviation
* Jensen's inequality


Notes


External links

* Animated experiment demonstrating the correction, at Khan Academy