Algorithms for calculating variance

Algorithms for calculating variance play a major role in
computational statistics. A key difficulty in the design of good algorithms for this problem is that formulas for the variance may involve sums of squares, which can lead to numerical instability as well as to arithmetic overflow
when dealing with large values.


Naïve algorithm

A formula for calculating the variance of an entire population of size ''N'' is:
:\sigma^2 = \overline{x^2} - \bar x^2 = \frac{\sum_{i=1}^N x_i^2 - \left(\sum_{i=1}^N x_i\right)^2/N}{N}.
Using Bessel's correction to calculate an unbiased estimate of the population variance from a finite sample of ''n'' observations, the formula is:
:s^2 = \left(\frac{\sum_{i=1}^n x_i^2}{n} - \left( \frac{\sum_{i=1}^n x_i}{n} \right)^2\right) \cdot \frac{n}{n-1}.
Therefore, a naïve algorithm to calculate the estimated variance is given by the following:
* Let ''n'' ← 0, Sum ← 0, SumSq ← 0
* For each datum ''x'':
** ''n'' ← ''n'' + 1
** Sum ← Sum + ''x''
** SumSq ← SumSq + ''x'' × ''x''
* Var = (SumSq − (Sum × Sum) / ''n'') / (''n'' − 1)
This algorithm can easily be adapted to compute the variance of a finite population: simply divide by ''n'' instead of ''n'' − 1 on the last line. Because SumSq and (Sum × Sum) / ''n'' can be very similar numbers, cancellation can cause the precision of the result to be much less than the inherent precision of the floating-point arithmetic used to perform the computation. Thus this algorithm should not be used in practice, and several alternative, numerically stable algorithms have been proposed. This is particularly bad if the standard deviation is small relative to the mean.
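For illustration, a minimal Python sketch of this naïve algorithm might look as follows (the function name naive_variance is chosen here for illustration; it assumes at least two data points):

def naive_variance(data):
    # Single pass accumulating the count, the sum and the sum of squares.
    n = 0
    total = total_sq = 0.0
    for x in data:
        n += 1
        total += x
        total_sq += x * x
    # Sample variance with Bessel's correction; use n instead of (n - 1)
    # for the variance of a complete population.
    return (total_sq - total * total / n) / (n - 1)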


Computing shifted data

The variance is invariant with respect to changes in a location parameter, a property which can be used to avoid the catastrophic cancellation in this formula:
:\operatorname{Var}(X-K)=\operatorname{Var}(X),
with K any constant, which leads to the new formula
:\sigma^2 = \frac{\sum_{i=1}^n (x_i-K)^2 - \left(\sum_{i=1}^n (x_i-K)\right)^2/n}{n-1}.
The closer K is to the mean value, the more accurate the result will be, but just choosing a value inside the range of samples will guarantee the desired stability. If the values (x_i - K) are small, then there are no problems with the sum of their squares; on the contrary, if they are large, it necessarily means that the variance is large as well. In any case the second term in the formula is always smaller than the first one, therefore no cancellation may occur. If just the first sample is taken as K, the algorithm can be written in the Python programming language as
def shifted_data_variance(data):
    if len(data) < 2:
        return 0.0
    K = data[0]
    n = Ex = Ex2 = 0.0
    for x in data:
        n += 1
        Ex += x - K
        Ex2 += (x - K) ** 2
    variance = (Ex2 - Ex**2 / n) / (n - 1)
    # use n instead of (n-1) if you want to compute the exact variance of the given data
    # use (n-1) if data are samples of a larger population
    return variance

This formula also facilitates the incremental computation that can be expressed as

K = Ex = Ex2 = 0.0
n = 0

def add_variable(x):
    global K, n, Ex, Ex2
    if n == 0:
        K = x
    n += 1
    Ex += x - K
    Ex2 += (x - K) ** 2

def remove_variable(x):
    global K, n, Ex, Ex2
    n -= 1
    Ex -= x - K
    Ex2 -= (x - K) ** 2

def get_mean():
    global K, n, Ex
    return K + Ex / n

def get_variance():
    global n, Ex, Ex2
    return (Ex2 - Ex**2 / n) / (n - 1)
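As a brief, illustrative usage of the incremental routines above (the sample values are arbitrary):

for value in [4.0, 7.0, 13.0, 16.0]:
    add_variable(value)
print(get_mean())      # 10.0
print(get_variance())  # 30.0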


Two-pass algorithm

An alternative approach, using a different formula for the variance, first computes the sample mean,
:\bar x = \frac{\sum_{i=1}^n x_i}{n},
and then computes the sum of the squares of the differences from the mean,
:\text{variance} = s^2 = \dfrac{\sum_{i=1}^n (x_i - \bar x)^2}{n-1},
where ''s'' is the standard deviation. This is given by the following code:

def two_pass_variance(data):
    n = len(data)
    mean = sum(data) / n
    variance = sum((x - mean) ** 2 for x in data) / (n - 1)
    return variance

This algorithm is numerically stable if ''n'' is small. However, the results of both of these simple algorithms ("naïve" and "two-pass") can depend inordinately on the ordering of the data and can give poor results for very large data sets due to repeated roundoff error in the accumulation of the sums. Techniques such as compensated summation can be used to combat this error to a degree.
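For example, a two-pass variance in which both accumulations use compensated (Kahan) summation could be sketched as follows; the helper kahan_sum and the function name are illustrative, not part of any standard library:

def kahan_sum(values):
    # Kahan summation: carry a compensation term for the low-order bits
    # lost when each value is added to the running total.
    total = 0.0
    c = 0.0
    for v in values:
        y = v - c
        t = total + y
        c = (t - total) - y
        total = t
    return total

def compensated_two_pass_variance(data):
    n = len(data)
    mean = kahan_sum(data) / n
    return kahan_sum((x - mean) ** 2 for x in data) / (n - 1)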


Welford's online algorithm

It is often useful to be able to compute the variance in a single pass, inspecting each value x_i only once; for example, when the data is being collected without enough storage to keep all the values, or when costs of memory access dominate those of computation. For such an online algorithm, a recurrence relation is required between quantities from which the required statistics can be calculated in a numerically stable fashion. The following formulas can be used to update the mean
and (estimated) variance of the sequence, for an additional element x_n. Here, \bar x_n = \frac{1}{n} \sum_{i=1}^n x_i denotes the sample mean of the first ''n'' samples (x_1,\dots,x_n), \sigma^2_n = \frac{1}{n} \sum_{i=1}^n \left(x_i - \bar x_n \right)^2 their biased sample variance, and s^2_n = \frac{1}{n-1} \sum_{i=1}^n \left(x_i - \bar x_n \right)^2 their unbiased sample variance.
:\bar x_n = \frac{\sum_{i=1}^n x_i}{n} = \bar x_{n-1} + \frac{x_n - \bar x_{n-1}}{n}
:\sigma^2_n = \frac{(n-1)\,\sigma^2_{n-1} + (x_n - \bar x_{n-1})(x_n - \bar x_n)}{n} = \sigma^2_{n-1} + \frac{(x_n - \bar x_{n-1})(x_n - \bar x_n) - \sigma^2_{n-1}}{n}
:s^2_n = \frac{n-2}{n-1} \, s^2_{n-1} + \frac{(x_n - \bar x_{n-1})^2}{n} = s^2_{n-1} + \frac{(x_n - \bar x_{n-1})^2}{n} - \frac{s^2_{n-1}}{n-1}, \quad n>1
These formulas suffer from numerical instability, as they repeatedly subtract a small number from a big number which scales with ''n''. A better quantity for updating is the sum of squares of differences from the current mean, \sum_{i=1}^n (x_i - \bar x_n)^2, here denoted M_{2,n}:
: \begin{align} M_{2,n} & = M_{2,n-1} + (x_n - \bar x_{n-1})(x_n - \bar x_n) \\[4pt] \sigma^2_n & = \frac{M_{2,n}}{n} \\[4pt] s^2_n & = \frac{M_{2,n}}{n-1} \end{align}
This algorithm was found by Welford, and it has been thoroughly analyzed. It is also common to denote M_k = \bar x_k and S_k = M_{2,k}. An example Python implementation for Welford's algorithm is given below.

# For a new value newValue, compute the new count, new mean, the new M2.
# mean accumulates the mean of the entire dataset
# M2 aggregates the squared distance from the mean
# count aggregates the number of samples seen so far
def update(existingAggregate, newValue):
    (count, mean, M2) = existingAggregate
    count += 1
    delta = newValue - mean
    mean += delta / count
    delta2 = newValue - mean
    M2 += delta * delta2
    return (count, mean, M2)

# Retrieve the mean, variance and sample variance from an aggregate
def finalize(existingAggregate):
    (count, mean, M2) = existingAggregate
    if count < 2:
        return float("nan")
    else:
        (mean, variance, sampleVariance) = (mean, M2 / count, M2 / (count - 1))
        return (mean, variance, sampleVariance)

This algorithm is much less prone to loss of precision due to
catastrophic cancellation, but might not be as efficient because of the division operation inside the loop. For a particularly robust two-pass algorithm for computing the variance, one can first compute and subtract an estimate of the mean, and then use this algorithm on the residuals, as sketched after this paragraph. The parallel algorithm below illustrates how to merge multiple sets of statistics calculated online.
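A minimal sketch of that residual-based two-pass variant, reusing the update and finalize helpers above (it assumes the data fit in memory and contain at least two values):

def robust_two_pass_variance(data):
    # First pass: estimate the mean.
    mean_estimate = sum(data) / len(data)
    # Second pass: run Welford's algorithm on the residuals; the variance
    # is unchanged by the shift, but the accumulated values stay small.
    aggregate = (0, 0.0, 0.0)
    for x in data:
        aggregate = update(aggregate, x - mean_estimate)
    _, _, sample_variance = finalize(aggregate)
    return sample_variance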


Weighted incremental algorithm

The algorithm can be extended to handle unequal sample weights, replacing the simple counter ''n'' with the sum of weights seen so far. West (1979) suggests this incremental algorithm:

def weighted_incremental_variance(data_weight_pairs):
    w_sum = w_sum2 = mean = S = 0
    for x, w in data_weight_pairs:
        w_sum = w_sum + w
        w_sum2 = w_sum2 + w**2
        mean_old = mean
        mean = mean_old + (w / w_sum) * (x - mean_old)
        S = S + w * (x - mean_old) * (x - mean)

    population_variance = S / w_sum
    # Bessel's correction for weighted samples
    # Frequency weights
    sample_frequency_variance = S / (w_sum - 1)
    # Reliability weights
    sample_reliability_variance = S / (w_sum - w_sum2 / w_sum)


Parallel algorithm

Chan et al. note that Welford's online algorithm detailed above is a special case of an algorithm that works for combining arbitrary sets A and B:
:\begin{align} n_{AB} & = n_A + n_B \\ \delta & = \bar x_B - \bar x_A \\ \bar x_{AB} & = \bar x_A + \delta\cdot\frac{n_B}{n_{AB}} \\ M_{2,AB} & = M_{2,A} + M_{2,B} + \delta^2\cdot\frac{n_A n_B}{n_{AB}} \end{align}
This may be useful when, for example, multiple processing units may be assigned to discrete parts of the input. Chan's method for estimating the mean is numerically unstable when n_A \approx n_B and both are large, because the numerical error in \delta = \bar x_B - \bar x_A is not scaled down in the way that it is in the n_B = 1 case. In such cases, prefer \bar x_{AB} = \frac{n_A \bar x_A + n_B \bar x_B}{n_{AB}}.

def parallel_variance(n_a, avg_a, M2_a, n_b, avg_b, M2_b):
    n = n_a + n_b
    delta = avg_b - avg_a
    M2 = M2_a + M2_b + delta ** 2 * n_a * n_b / n
    var_ab = M2 / (n - 1)
    return var_ab

This can be generalized to allow parallelization with AVX, with GPUs, and computer clusters, and to covariance.
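As a hypothetical usage example, two partial aggregates produced by the update function from the previous section can be merged with parallel_variance (the chunk values are arbitrary):

# Each worker runs Welford's update() over its own chunk of the data.
agg_a = (0, 0.0, 0.0)
for x in [4.0, 7.0]:
    agg_a = update(agg_a, x)
agg_b = (0, 0.0, 0.0)
for x in [13.0, 16.0]:
    agg_b = update(agg_b, x)

# Merge the two partial results into the variance of the full data set.
n_a, avg_a, M2_a = agg_a
n_b, avg_b, M2_b = agg_b
print(parallel_variance(n_a, avg_a, M2_a, n_b, avg_b, M2_b))  # 30.0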


Example

Assume that all floating point operations use standard IEEE 754 double-precision arithmetic. Consider the sample (4, 7, 13, 16) from an infinite population. Based on this sample, the estimated population mean is 10, and the unbiased estimate of the population variance is 30. Both the naïve algorithm and the two-pass algorithm compute these values correctly. Next consider the sample (10^8 + 4, 10^8 + 7, 10^8 + 13, 10^8 + 16), which gives rise to the same estimated variance as the first sample. The two-pass algorithm computes this variance estimate correctly, but the naïve algorithm returns 29.333333333333332 instead of 30. While this loss of precision may be tolerable and viewed as a minor flaw of the naïve algorithm, further increasing the offset makes the error catastrophic. Consider the sample (10^9 + 4, 10^9 + 7, 10^9 + 13, 10^9 + 16). Again the estimated population variance of 30 is computed correctly by the two-pass algorithm, but the naïve algorithm now computes it as −170.66666666666666. This is a serious problem with the naïve algorithm and is due to catastrophic cancellation in the subtraction of two similar numbers at the final stage of the algorithm.
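This behaviour can be reproduced with the sketches given earlier (naive_variance and two_pass_variance as defined above):

shifted = [1e9 + 4, 1e9 + 7, 1e9 + 13, 1e9 + 16]
print(two_pass_variance(shifted))  # 30.0
print(naive_variance(shifted))     # a badly wrong, negative value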


Higher-order statistics

Terriberry extends Chan's formulae to calculating the third and fourth central moments, needed for example when estimating skewness and kurtosis:
:\begin{align} M_{3,X} = M_{3,A} + M_{3,B} & + \delta^3\frac{n_A n_B (n_A - n_B)}{n_X^2} + 3\delta\frac{n_A M_{2,B} - n_B M_{2,A}}{n_X} \\[6pt] M_{4,X} = M_{4,A} + M_{4,B} & + \delta^4\frac{n_A n_B \left(n_A^2 - n_A n_B + n_B^2\right)}{n_X^3} \\[6pt] & + 6\delta^2\frac{n_A^2 M_{2,B} + n_B^2 M_{2,A}}{n_X^2} + 4\delta\frac{n_A M_{3,B} - n_B M_{3,A}}{n_X} \end{align}
Here the M_k are again the sums of powers of differences from the mean \sum(x - \overline x)^k, giving
:\begin{align} & \text{skewness} = g_1 = \frac{\sqrt{n} M_3}{M_2^{3/2}}, \\[4pt] & \text{kurtosis} = g_2 = \frac{n M_4}{M_2^2} - 3. \end{align}
For the incremental case (i.e., B = \{x\}), this simplifies to:
:\begin{align} \delta & = x - m \\[4pt] m' & = m + \frac{\delta}{n} \\[4pt] M_2' & = M_2 + \delta^2 \frac{n-1}{n} \\[4pt] M_3' & = M_3 + \delta^3 \frac{(n-1)(n-2)}{n^2} - \frac{3\delta M_2}{n} \\[4pt] M_4' & = M_4 + \frac{\delta^4 (n-1)(n^2 - 3n + 3)}{n^3} + \frac{6\delta^2 M_2}{n^2} - \frac{4\delta M_3}{n} \end{align}
By preserving the value \delta / n, only one division operation is needed and the higher-order statistics can thus be calculated for little incremental cost. An example of the online algorithm for kurtosis implemented as described is:

def online_kurtosis(data):
    n = mean = M2 = M3 = M4 = 0
    for x in data:
        n1 = n
        n = n + 1
        delta = x - mean
        delta_n = delta / n
        delta_n2 = delta_n ** 2
        term1 = delta * delta_n * n1
        mean = mean + delta_n
        M4 = M4 + term1 * delta_n2 * (n**2 - 3*n + 3) + 6 * delta_n2 * M2 - 4 * delta_n * M3
        M3 = M3 + term1 * delta_n * (n - 2) - 3 * delta_n * M2
        M2 = M2 + term1

    # Note, you may also calculate variance using M2, and skewness using M3
    # Caution: If all the inputs are the same, M2 will be 0, resulting in a division by 0.
    kurtosis = (n * M4) / (M2 ** 2) - 3
    return kurtosis

Pébaÿ further extends these results to arbitrary-order
central moments, for the incremental and the pairwise cases, and subsequently Pébaÿ et al. for weighted and compound moments. One can also find there similar formulas for covariance. Choi and Sweetman offer two alternative methods to compute the skewness and kurtosis, each of which can save substantial computer memory requirements and CPU time in certain applications. The first approach is to compute the statistical moments by separating the data into bins and then computing the moments from the geometry of the resulting histogram, which effectively becomes a one-pass algorithm
for higher moments. One benefit is that the statistical moment calculations can be carried out to arbitrary accuracy such that the computations can be tuned to the precision of, e.g., the data storage format or the original measurement hardware. A relative histogram of a random variable can be constructed in the conventional way: the range of potential values is divided into bins and the number of occurrences within each bin are counted and plotted such that the area of each rectangle equals the portion of the sample values within that bin:
: H(x_k)=\frac{h(x_k)}{A}
where h(x_k) and H(x_k) represent the frequency and the relative frequency at bin x_k and A= \sum_{k=1}^K h(x_k) \,\Delta x_k is the total area of the histogram. After this normalization, the n raw moments and central moments of x(t) can be calculated from the relative histogram:
: m_n^{(h)} = \sum_{k=1}^{K} x_k^n H(x_k) \, \Delta x_k = \frac{1}{A} \sum_{k=1}^K x_k^n h(x_k) \, \Delta x_k
: \theta_n^{(h)}= \sum_{k=1}^{K} \Big(x_k-m_1^{(h)}\Big)^n \, H(x_k) \, \Delta x_k = \frac{1}{A} \sum_{k=1}^{K} \Big(x_k-m_1^{(h)}\Big)^n h(x_k) \, \Delta x_k
where the superscript ^{(h)} indicates the moments are calculated from the histogram. For constant bin width \Delta x_k=\Delta x these two expressions can be simplified using I= A/\Delta x:
: m_n^{(h)}= \frac{1}{I} \sum_{k=1}^K x_k^n \, h(x_k)
: \theta_n^{(h)}= \frac{1}{I} \sum_{k=1}^K \Big(x_k-m_1^{(h)}\Big)^n h(x_k)
The second approach from Choi and Sweetman is an analytical methodology to combine statistical moments from individual segments of a time-history such that the resulting overall moments are those of the complete time-history. This methodology could be used for parallel computation of statistical moments with subsequent combination of those moments, or for combination of statistical moments computed at sequential times. If Q sets of statistical moments are known: (\gamma_{0,q},\mu_{q},\sigma^2_{q},\alpha_{3,q},\alpha_{4,q}) \quad for q=1,2,\ldots,Q, then each \gamma_n can be expressed in terms of the equivalent n raw moments:
: \gamma_{n,q}= m_{n,q} \gamma_{0,q} \qquad \quad \text{for} \quad n=1,2,3,4 \quad \text{ and } \quad q = 1,2, \dots ,Q
where \gamma_{0,q} is generally taken to be the duration of the q^{\text{th}} time-history, or the number of points if \Delta t is constant. The benefit of expressing the statistical moments in terms of \gamma is that the Q sets can be combined by addition, and there is no upper limit on the value of Q.
: \gamma_{n,c}= \sum_{q=1}^Q \gamma_{n,q} \quad \quad \text{for } n=0,1,2,3,4
where the subscript _c represents the concatenated time-history or combined \gamma. These combined values of \gamma can then be inversely transformed into raw moments representing the complete concatenated time-history
: m_{n,c}=\frac{\gamma_{n,c}}{\gamma_{0,c}} \quad \text{for } n=1,2,3,4
Known relationships between the raw moments (m_n) and the central moments (\theta_n = \operatorname E[(x-\mu)^n]) are then used to compute the central moments of the concatenated time-history. Finally, the statistical moments of the concatenated history are computed from the central moments:
: \mu_c=m_{1,c} \qquad \sigma^2_c=\theta_{2,c} \qquad \alpha_{3,c}=\frac{\theta_{3,c}}{\sigma_c^3} \qquad \alpha_{4,c}=\frac{\theta_{4,c}}{\sigma_c^4}-3
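Returning to the first (histogram-based) approach, a minimal sketch with equal-width bins is given below; the bin count and helper name are illustrative, the data are assumed non-constant, and for simplicity the data are scanned once to find the range and once to fill the bins (a strictly one-pass version would fix the bin edges in advance):

def histogram_moments(data, num_bins=100):
    lo, hi = min(data), max(data)
    width = (hi - lo) / num_bins or 1.0  # guard against a zero-width range
    counts = [0] * num_bins
    for x in data:
        k = min(int((x - lo) / width), num_bins - 1)
        counts[k] += 1
    centres = [lo + (k + 0.5) * width for k in range(num_bins)]
    total = sum(counts)
    # Raw first moment (mean) and central moments from bin centres and counts.
    mean = sum(c * h for c, h in zip(centres, counts)) / total
    def theta(p):
        return sum((c - mean) ** p * h for c, h in zip(centres, counts)) / total
    variance = theta(2)
    skewness = theta(3) / variance ** 1.5
    kurtosis = theta(4) / variance ** 2 - 3
    return mean, variance, skewness, kurtosis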


Covariance

Very similar algorithms can be used to compute the covariance.


Naïve algorithm

The naïve algorithm is
:\operatorname{Cov}(X,Y) = \frac{\sum_{i=1}^n x_i y_i - \left(\sum_{i=1}^n x_i\right)\left(\sum_{i=1}^n y_i\right)/n}{n}.
For the algorithm above, one could use the following Python code:

def naive_covariance(data1, data2):
    n = len(data1)
    sum1 = sum(data1)
    sum2 = sum(data2)
    sum12 = sum(i1 * i2 for i1, i2 in zip(data1, data2))
    covariance = (sum12 - sum1 * sum2 / n) / n
    return covariance


With estimate of the mean

As for the variance, the covariance of two random variables is also shift-invariant, so given any two constant values k_x and k_y, it can be written:
:\operatorname{Cov}(X,Y) = \operatorname{Cov}(X-k_x,Y-k_y) = \dfrac{\sum_{i=1}^n (x_i-k_x)(y_i-k_y) - \left(\sum_{i=1}^n (x_i-k_x)\right)\left(\sum_{i=1}^n (y_i-k_y)\right)/n}{n}.
Again, choosing a value inside the range of values will stabilize the formula against catastrophic cancellation as well as make it more robust against big sums. Taking the first value of each data set, the algorithm can be written as:

def shifted_data_covariance(data_x, data_y):
    n = len(data_x)
    if n < 2:
        return 0
    kx = data_x[0]
    ky = data_y[0]
    Ex = Ey = Exy = 0
    for ix, iy in zip(data_x, data_y):
        Ex += ix - kx
        Ey += iy - ky
        Exy += (ix - kx) * (iy - ky)
    return (Exy - Ex * Ey / n) / n


Two-pass

The two-pass algorithm first computes the sample means, and then the covariance:
:\bar x = \sum_{i=1}^n x_i/n
:\bar y = \sum_{i=1}^n y_i/n
:\operatorname{Cov}(X,Y) = \frac{\sum_{i=1}^n (x_i - \bar x)(y_i - \bar y)}{n}.
The two-pass algorithm may be written as:

def two_pass_covariance(data1, data2):
    n = len(data1)
    mean1 = sum(data1) / n
    mean2 = sum(data2) / n
    covariance = 0
    for i1, i2 in zip(data1, data2):
        a = i1 - mean1
        b = i2 - mean2
        covariance += a * b / n
    return covariance

A slightly more accurate compensated version performs the full naive algorithm on the residuals. The final sums \sum_i x_i and \sum_i y_i ''should'' be zero, but the second pass compensates for any small error.
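A sketch of that compensated version (the function name is illustrative): the naive algorithm is rerun on the residuals, whose sums would be exactly zero in exact arithmetic, so the correction term only has to absorb the rounding error of the first pass:

def compensated_two_pass_covariance(data1, data2):
    n = len(data1)
    mean1 = sum(data1) / n
    mean2 = sum(data2) / n
    # Naive algorithm applied to the residuals.
    sum1 = sum2 = sum12 = 0.0
    for i1, i2 in zip(data1, data2):
        a = i1 - mean1
        b = i2 - mean2
        sum1 += a
        sum2 += b
        sum12 += a * b
    return (sum12 - sum1 * sum2 / n) / n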


Online

A stable one-pass algorithm exists, similar to the online algorithm for computing the variance, that computes the co-moment C_n = \sum_{i=1}^n (x_i - \bar x_n)(y_i - \bar y_n):
:\begin{align} \bar x_n &= \bar x_{n-1} + \frac{x_n - \bar x_{n-1}}{n} \\[4pt] \bar y_n &= \bar y_{n-1} + \frac{y_n - \bar y_{n-1}}{n} \\[4pt] C_n &= C_{n-1} + (x_n - \bar x_n)(y_n - \bar y_{n-1}) \\[4pt] &= C_{n-1} + (x_n - \bar x_{n-1})(y_n - \bar y_n) \end{align}
The apparent asymmetry in that last equation is due to the fact that (x_n - \bar x_n) = \frac{n-1}{n}(x_n - \bar x_{n-1}), so both update terms are equal to \frac{n-1}{n}(x_n - \bar x_{n-1})(y_n - \bar y_{n-1}). Even greater accuracy can be achieved by first computing the means, then using the stable one-pass algorithm on the residuals. Thus the covariance can be computed as
:\operatorname{Cov}_N(X,Y) = \frac{C_N}{N} = \frac{\sum_{i=1}^N (x_i - \bar x_N)(y_i - \bar y_N)}{N}.

def online_covariance(data1, data2):
    meanx = meany = C = n = 0
    for x, y in zip(data1, data2):
        n += 1
        dx = x - meanx
        meanx += dx / n
        meany += (y - meany) / n
        C += dx * (y - meany)

    population_covar = C / n
    # Bessel's correction for sample variance
    sample_covar = C / (n - 1)

A small modification can also be made to compute the weighted covariance:

def online_weighted_covariance(data1, data2, data3):
    meanx = meany = 0
    wsum = wsum2 = 0
    C = 0
    for x, y, w in zip(data1, data2, data3):
        wsum += w
        wsum2 += w * w
        dx = x - meanx
        meanx += (w / wsum) * dx
        meany += (w / wsum) * (y - meany)
        C += w * dx * (y - meany)

    population_covar = C / wsum
    # Bessel's correction for sample variance
    # Frequency weights
    sample_frequency_covar = C / (wsum - 1)
    # Reliability weights
    sample_reliability_covar = C / (wsum - wsum2 / wsum)

Likewise, there is a formula for combining the covariances of two sets that can be used to parallelize the computation:
:C_X = C_A + C_B + (\bar x_A - \bar x_B)(\bar y_A - \bar y_B)\cdot\frac{n_A n_B}{n_X}.
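A sketch of that merging step (the function name and the returned tuple layout are illustrative); each partial result consists of a count, the two means, and the co-moment C computed, for example, by online_covariance above:

def parallel_covariance(n_a, xmean_a, ymean_a, C_a, n_b, xmean_b, ymean_b, C_b):
    # Combine two independently computed co-moments into the co-moment
    # of the concatenated data set.
    n = n_a + n_b
    C = C_a + C_b + (xmean_a - xmean_b) * (ymean_a - ymean_b) * n_a * n_b / n
    xmean = (n_a * xmean_a + n_b * xmean_b) / n
    ymean = (n_a * ymean_a + n_b * ymean_b) / n
    return n, xmean, ymean, C  # population covariance of the union is C / n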


Weighted batched version

A version of the weighted online algorithm that does batched updates also exists: let w_1, \dots, w_N denote the weights, and write
:\begin{align} \bar x_{n+k} &= \bar x_n + \frac{\sum_{i=n+1}^{n+k} w_i (x_i - \bar x_n)}{\sum_{i=1}^{n+k} w_i} \\[4pt] \bar y_{n+k} &= \bar y_n + \frac{\sum_{i=n+1}^{n+k} w_i (y_i - \bar y_n)}{\sum_{i=1}^{n+k} w_i} \\[4pt] C_{n+k} &= C_n + \sum_{i=n+1}^{n+k} w_i (x_i - \bar x_{n+k})(y_i - \bar y_n) \\[4pt] &= C_n + \sum_{i=n+1}^{n+k} w_i (x_i - \bar x_n)(y_i - \bar y_{n+k}) \end{align}
The covariance can then be computed as
:\operatorname{Cov}_N(X,Y) = \frac{C_N}{\sum_{i=1}^N w_i}
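A sketch of a batched weighted implementation following these update formulas (the function name and the batch layout, an iterable of lists of (x, y, w) triples, are illustrative):

def weighted_batched_covariance(batches):
    w_sum = 0.0
    meanx = meany = C = 0.0
    for batch in batches:
        batch_w = sum(w for _, _, w in batch)
        if batch_w == 0:
            continue
        new_w_sum = w_sum + batch_w
        # Batched mean updates, as in the formulas above.
        new_meanx = meanx + sum(w * (x - meanx) for x, _, w in batch) / new_w_sum
        new_meany = meany + sum(w * (y - meany) for _, y, w in batch) / new_w_sum
        # Batched co-moment update (note the mix of new x-mean and old y-mean).
        C += sum(w * (x - new_meanx) * (y - meany) for x, y, w in batch)
        w_sum, meanx, meany = new_w_sum, new_meanx, new_meany
    return C / w_sum  # weighted population covariance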


See also

* Kahan summation algorithm
* Squared deviations from the mean
* Yamartino method

