In probability theory and statistics, variance is the expectation of the squared deviation of a random variable from its population mean or sample mean. Variance is a measure of dispersion, meaning it is a measure of how far a set of numbers is spread out from their average value. Variance has a central role in statistics, where some ideas that use it include descriptive statistics, statistical inference, hypothesis testing, goodness of fit, and Monte Carlo sampling. Variance is an important tool in the sciences, where statistical analysis of data is common. The variance is the square of the standard deviation, the second central moment of a distribution, and the covariance of the random variable with itself, and it is often represented by \sigma^2, s^2, \operatorname{Var}(X), V(X), or \mathbb{V}(X).

An advantage of variance as a measure of dispersion is that it is more amenable to algebraic manipulation than other measures of dispersion such as the expected absolute deviation; for example, the variance of a sum of uncorrelated random variables is equal to the sum of their variances. A disadvantage of the variance for practical applications is that, unlike the standard deviation, its units differ from the random variable, which is why the standard deviation is more commonly reported as a measure of dispersion once the calculation is finished.

There are two distinct concepts that are both called "variance". One, as discussed above, is part of a theoretical probability distribution and is defined by an equation. The other variance is a characteristic of a set of observations. When variance is calculated from observations, those observations are typically measured from a real-world system. If all possible observations of the system are present, then the calculated variance is called the population variance. Normally, however, only a subset is available, and the variance calculated from this is called the sample variance. The variance calculated from a sample is considered an estimate of the full population variance. There are multiple ways to calculate an estimate of the population variance, as discussed in the section below.

The two kinds of variance are closely related. To see how, consider that a theoretical probability distribution can be used as a generator of hypothetical observations. If an infinite number of observations are generated using a distribution, then the sample variance calculated from that infinite set will match the value calculated using the distribution's equation for variance.


Etymology

The term ''variance'' was first introduced by Ronald Fisher in his 1918 paper '' The Correlation Between Relatives on the Supposition of Mendelian Inheritance'':
The great body of available statistics show us that the deviations of a human measurement from its mean follow very closely the Normal Law of Errors, and, therefore, that the variability may be uniformly measured by the standard deviation corresponding to the square root of the mean square error. When there are two independent causes of variability capable of producing in an otherwise uniform population distributions with standard deviations \sigma_1 and \sigma_2, it is found that the distribution, when both causes act together, has a standard deviation \sqrt{\sigma_1^2 + \sigma_2^2}. It is therefore desirable in analysing the causes of variability to deal with the square of the standard deviation as the measure of variability. We shall term this quantity the Variance...


Definition

The variance of a random variable X is the expected value of the squared deviation from the mean of X, \mu = \operatorname{E}[X]:
:\operatorname{Var}(X) = \operatorname{E}\left[(X - \mu)^2 \right].
This definition encompasses random variables that are generated by processes that are discrete, continuous, neither, or mixed. The variance can also be thought of as the covariance of a random variable with itself:
:\operatorname{Var}(X) = \operatorname{Cov}(X, X).
The variance is also equivalent to the second cumulant of a probability distribution that generates X. The variance is typically designated as \operatorname{Var}(X), or sometimes as V(X) or \mathbb{V}(X), or symbolically as \sigma^2_X or simply \sigma^2 (pronounced "sigma squared"). The expression for the variance can be expanded as follows:
:\begin{align}
\operatorname{Var}(X) &= \operatorname{E}\left[(X - \operatorname{E}[X])^2\right] \\[4pt]
&= \operatorname{E}\left[X^2 - 2X\operatorname{E}[X] + \operatorname{E}[X]^2\right] \\[4pt]
&= \operatorname{E}\left[X^2\right] - 2\operatorname{E}[X]\operatorname{E}[X] + \operatorname{E}[X]^2 \\[4pt]
&= \operatorname{E}\left[X^2\right] - \operatorname{E}[X]^2
\end{align}
In other words, the variance of X is equal to the mean of the square of X minus the square of the mean of X. This equation should not be used for computations using floating-point arithmetic, because it suffers from catastrophic cancellation if the two components of the equation are similar in magnitude. For other numerically stable alternatives, see Algorithms for calculating variance.
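
The practical consequence of the cancellation issue can be seen in a short Python sketch (not part of the original article; the data are simulated and the offset is chosen purely to exaggerate the effect):

    import numpy as np

    rng = np.random.default_rng(0)
    # Values with a huge mean relative to their spread: the naive formula
    # E[X^2] - E[X]^2 subtracts two nearly equal large numbers.
    x = 1e9 + rng.standard_normal(100_000)

    naive = np.mean(x**2) - np.mean(x)**2      # prone to catastrophic cancellation
    two_pass = np.mean((x - np.mean(x))**2)    # numerically stable two-pass formula
    print(naive, two_pass)                     # only the second is close to 1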


Discrete random variable

If the generator of random variable X is discrete with probability mass function x_1 \mapsto p_1, x_2 \mapsto p_2, \ldots, x_n \mapsto p_n, then
:\operatorname{Var}(X) = \sum_{i=1}^n p_i\cdot(x_i - \mu)^2,
where \mu is the expected value. That is,
:\mu = \sum_{i=1}^n p_i x_i .
(When such a discrete weighted variance is specified by weights whose sum is not 1, then one divides by the sum of the weights.)

The variance of a collection of n equally likely values can be written as
:\operatorname{Var}(X) = \frac{1}{n} \sum_{i=1}^n (x_i - \mu)^2
where \mu is the average value. That is,
:\mu = \frac{1}{n}\sum_{i=1}^n x_i .
The variance of a set of n equally likely values can be equivalently expressed, without directly referring to the mean, in terms of the squared pairwise distances of the points from each other:
:\operatorname{Var}(X) = \frac{1}{n^2} \sum_{i=1}^n \sum_{j=1}^n \frac{1}{2}(x_i - x_j)^2 = \frac{1}{n^2}\sum_i \sum_{j>i} (x_i-x_j)^2.
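
Both discrete forms are straightforward to compute directly. A brief Python sketch (illustrative only; the probability mass function and the data values are made up):

    import numpy as np

    # Discrete pmf: values x_i with probabilities p_i
    x = np.array([1.0, 2.0, 5.0])
    p = np.array([0.2, 0.5, 0.3])
    mu = np.sum(p * x)                     # expected value
    var_pmf = np.sum(p * (x - mu)**2)      # weighted variance

    # Equally likely values: mean-based and mean-free pairwise forms agree
    y = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
    n = len(y)
    var_plain = np.mean((y - y.mean())**2)
    var_pairs = np.sum((y[:, None] - y[None, :])**2) / (2 * n**2)
    print(var_pmf, var_plain, var_pairs)   # var_plain == var_pairs == 4.0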


Absolutely continuous random variable

If the random variable X has a probability density function f(x), and F(x) is the corresponding cumulative distribution function, then
:\begin{align}
\operatorname{Var}(X) = \sigma^2 &= \int_{\R} (x-\mu)^2 f(x) \, dx \\[4pt]
&= \int_{\R} x^2 f(x)\,dx - 2\mu\int_{\R} xf(x)\,dx + \mu^2\int_{\R} f(x)\,dx \\[4pt]
&= \int_{\R} x^2 \,dF(x) - 2 \mu \int_{\R} x \,dF(x) + \mu^2 \int_{\R} \,dF(x) \\[4pt]
&= \int_{\R} x^2 \,dF(x) - 2 \mu \cdot \mu + \mu^2 \cdot 1 \\[4pt]
&= \int_{\R} x^2 \,dF(x) - \mu^2,
\end{align}
or equivalently,
:\operatorname{Var}(X) = \int_{\R} x^2 f(x) \,dx - \mu^2 ,
where \mu is the expected value of X given by
:\mu = \int_{\R} x f(x) \, dx = \int_{\R} x \, d F(x).
In these formulas, the integrals with respect to dx and dF(x) are Lebesgue and Lebesgue–Stieltjes integrals, respectively. If the function x^2 f(x) is Riemann-integrable on every finite interval [a,b]\subset\R, then
:\operatorname{Var}(X) = \int^{+\infty}_{-\infty} x^2 f(x) \, dx - \mu^2,
where the integral is an improper Riemann integral.


Examples


Exponential distribution

The exponential distribution with parameter \lambda is a continuous distribution whose probability density function is given by
:f(x) = \lambda e^{-\lambda x}
on the interval [0, \infty). Its mean can be shown to be
:\operatorname{E}[X] = \int_0^\infty \lambda x e^{-\lambda x} \, dx = \frac{1}{\lambda}.
Using integration by parts and making use of the expected value already calculated, we have:
:\begin{align}
\operatorname{E}\left[X^2\right] &= \int_0^\infty \lambda x^2 e^{-\lambda x} \, dx \\
&= \left[ -x^2 e^{-\lambda x} \right]_0^\infty + \int_0^\infty 2xe^{-\lambda x} \,dx \\
&= 0 + \frac{2}{\lambda}\operatorname{E}[X] \\
&= \frac{2}{\lambda^2}.
\end{align}
Thus, the variance of X is given by
:\operatorname{Var}(X) = \operatorname{E}\left[X^2\right] - \operatorname{E}[X]^2 = \frac{2}{\lambda^2} - \left(\frac{1}{\lambda}\right)^2 = \frac{1}{\lambda^2}.
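
Both results are easy to confirm by simulation. A quick Monte Carlo sanity check in Python (a sketch; the rate value is arbitrary):

    import numpy as np

    rng = np.random.default_rng(42)
    lam = 0.5                                  # arbitrary rate parameter
    x = rng.exponential(scale=1.0 / lam, size=1_000_000)
    print(x.mean(), 1.0 / lam)                 # sample mean vs 1/lambda
    print(x.var(), 1.0 / lam**2)               # sample variance vs 1/lambda^2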


Fair die

A fair six-sided die can be modeled as a discrete random variable, X, with outcomes 1 through 6, each with equal probability 1/6. The expected value of X is (1 + 2 + 3 + 4 + 5 + 6)/6 = 7/2. Therefore, the variance of X is
:\begin{align}
\operatorname{Var}(X) &= \sum_{i=1}^6 \frac{1}{6}\left(i - \frac{7}{2}\right)^2 \\[4pt]
&= \frac{1}{6}\left((-5/2)^2 + (-3/2)^2 + (-1/2)^2 + (1/2)^2 + (3/2)^2 + (5/2)^2\right) \\[4pt]
&= \frac{35}{12} \approx 2.92.
\end{align}
The general formula for the variance of the outcome, X, of an n-sided die is
:\begin{align}
\operatorname{Var}(X) &= \operatorname{E}\left(X^2\right) - (\operatorname{E}(X))^2 \\[4pt]
&= \frac{1}{n}\sum_{i=1}^n i^2 - \left(\frac{1}{n}\sum_{i=1}^n i\right)^2 \\[4pt]
&= \frac{(n + 1)(2n + 1)}{6} - \left(\frac{n + 1}{2}\right)^2 \\[4pt]
&= \frac{n^2 - 1}{12}.
\end{align}
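
The closed form can be checked against a brute-force enumeration of the faces. A short Python sketch (die sizes chosen arbitrarily):

    import numpy as np

    for n in (4, 6, 8, 20):
        faces = np.arange(1, n + 1)
        brute = np.mean((faces - faces.mean())**2)   # population variance of the outcomes
        formula = (n**2 - 1) / 12
        print(n, brute, formula)                     # the two values agree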


Commonly used probability distributions

The variance of some commonly used probability distributions is listed below (standard parameterizations are assumed).

* Normal distribution with parameters \mu and \sigma^2: variance \sigma^2
* Binomial distribution with parameters n and p: variance np(1 - p)
* Poisson distribution with parameter \lambda: variance \lambda
* Continuous uniform distribution on [a, b]: variance (b - a)^2/12
* Exponential distribution with rate \lambda: variance 1/\lambda^2


Properties


Basic properties

Variance is non-negative because the squares are positive or zero:
:\operatorname{Var}(X)\ge 0.
The variance of a constant is zero.
:\operatorname{Var}(a) = 0.
Conversely, if the variance of a random variable is 0, then it is almost surely a constant. That is, it always has the same value:
:\operatorname{Var}(X)= 0 \iff \exists a : P(X=a) = 1.


Issues of finiteness

If a distribution does not have a finite expected value, as is the case for the Cauchy distribution, then the variance cannot be finite either. However, some distributions may not have a finite variance, despite their expected value being finite. An example is a Pareto distribution whose index k satisfies 1 < k \leq 2.


Decomposition

The general formula for variance decomposition or the law of total variance is: If X and Y are two random variables, and the variance of X exists, then
:\operatorname{Var}[X] = \operatorname{E}(\operatorname{Var}[X \mid Y]) + \operatorname{Var}(\operatorname{E}[X \mid Y]).
The conditional expectation \operatorname E(X\mid Y) of X given Y, and the conditional variance \operatorname{Var}(X\mid Y), may be understood as follows. Given any particular value y of the random variable Y, there is a conditional expectation \operatorname E(X\mid Y=y) given the event Y = y. This quantity depends on the particular value y; it is a function g(y) = \operatorname E(X\mid Y=y). That same function evaluated at the random variable Y is the conditional expectation \operatorname E(X\mid Y) = g(Y).

In particular, if Y is a discrete random variable assuming possible values y_1, y_2, y_3, \ldots with corresponding probabilities p_1, p_2, p_3, \ldots, then in the formula for total variance, the first term on the right-hand side becomes
:\operatorname{E}(\operatorname{Var}[X \mid Y]) = \sum_i p_i \sigma^2_i,
where \sigma^2_i = \operatorname{Var}[X \mid Y = y_i]. Similarly, the second term on the right-hand side becomes
:\operatorname{Var}(\operatorname{E}[X \mid Y]) = \sum_i p_i \mu_i^2 - \left(\sum_i p_i \mu_i\right)^2 = \sum_i p_i \mu_i^2 - \mu^2,
where \mu_i = \operatorname{E}[X \mid Y = y_i] and \mu = \sum_i p_i \mu_i. Thus the total variance is given by
:\operatorname{Var}[X] = \sum_i p_i \sigma^2_i + \left( \sum_i p_i \mu_i^2 - \mu^2 \right).
A similar formula is applied in analysis of variance, where the corresponding formula is
:\mathit{MS}_\text{total} = \mathit{MS}_\text{between} + \mathit{MS}_\text{within};
here \mathit{MS} refers to the Mean of the Squares. In linear regression analysis the corresponding formula is
:\mathit{MS}_\text{total} = \mathit{MS}_\text{regression} + \mathit{MS}_\text{residual}.
This can also be derived from the additivity of variances, since the total (observed) score is the sum of the predicted score and the error score, where the latter two are uncorrelated.

Similar decompositions are possible for the sum of squared deviations (sum of squares, \mathit{SS}):
:\mathit{SS}_\text{total} = \mathit{SS}_\text{between} + \mathit{SS}_\text{within},
:\mathit{SS}_\text{total} = \mathit{SS}_\text{regression} + \mathit{SS}_\text{residual}.
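
For a discrete conditioning variable, the decomposition is easy to verify numerically. Below is a small Python sketch with a made-up two-group mixture (all numbers are purely illustrative):

    import numpy as np

    rng = np.random.default_rng(1)
    # Y picks one of two groups; within each group X is normal with its own mean and sd.
    p = np.array([0.3, 0.7])        # P(Y = y_i)
    mu_i = np.array([0.0, 5.0])     # E[X | Y = y_i]
    sd_i = np.array([1.0, 2.0])     # sd(X | Y = y_i)

    y = rng.choice(2, size=2_000_000, p=p)
    x = rng.normal(mu_i[y], sd_i[y])

    expected_within = np.sum(p * sd_i**2)                        # E[Var(X | Y)]
    var_of_means = np.sum(p * mu_i**2) - np.sum(p * mu_i)**2     # Var(E[X | Y])
    print(x.var(), expected_within + var_of_means)               # approximately equal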


Calculation from the CDF

The population variance for a non-negative random variable can be expressed in terms of the cumulative distribution function F using
:2\int_0^\infty u(1 - F(u))\,du - \left(\int_0^\infty (1 - F(u))\,du\right)^2.
This expression can be used to calculate the variance in situations where the CDF, but not the density, can be conveniently expressed.
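
As an illustration, the identity can be checked numerically for an exponential CDF, F(u) = 1 - e^{-\lambda u}, whose variance is 1/\lambda^2 (a sketch; the rate value is arbitrary and SciPy is assumed to be available):

    import numpy as np
    from scipy import integrate

    lam = 2.0
    F = lambda u: 1.0 - np.exp(-lam * u)            # exponential CDF

    first, _ = integrate.quad(lambda u: u * (1.0 - F(u)), 0, np.inf)
    second, _ = integrate.quad(lambda u: 1.0 - F(u), 0, np.inf)
    print(2 * first - second**2, 1 / lam**2)        # both equal 0.25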


Characteristic property

The second moment of a random variable attains the minimum value when taken around the first moment (i.e., mean) of the random variable, i.e.
:\mathrm{argmin}_m\,\mathrm{E}\left(\left(X - m\right)^2\right) = \mathrm{E}(X).
Conversely, if a continuous function \varphi satisfies \mathrm{argmin}_m\,\mathrm{E}(\varphi(X - m)) = \mathrm{E}(X) for all random variables X, then it is necessarily of the form \varphi(x) = a x^2 + b, where a > 0. This also holds in the multidimensional case.


Units of measurement

Unlike the expected absolute deviation, the variance of a variable has units that are the square of the units of the variable itself. For example, a variable measured in meters will have a variance measured in meters squared. For this reason, describing data sets via their standard deviation or root mean square deviation is often preferred over using the variance. In the dice example the standard deviation is \sqrt{35/12} \approx 1.7, slightly larger than the expected absolute deviation of 1.5.

The standard deviation and the expected absolute deviation can both be used as an indicator of the "spread" of a distribution. The standard deviation is more amenable to algebraic manipulation than the expected absolute deviation, and, together with variance and its generalization covariance, is used frequently in theoretical statistics; however the expected absolute deviation tends to be more robust as it is less sensitive to outliers arising from measurement anomalies or an unduly heavy-tailed distribution.


Propagation


Addition and multiplication by a constant

Variance is invariant with respect to changes in a location parameter. That is, if a constant is added to all values of the variable, the variance is unchanged:
:\operatorname{Var}(X+a)=\operatorname{Var}(X).
If all values are scaled by a constant, the variance is scaled by the square of that constant:
:\operatorname{Var}(aX)=a^2\operatorname{Var}(X).
The variance of a sum of two random variables is given by
:\operatorname{Var}(aX+bY)=a^2\operatorname{Var}(X)+b^2\operatorname{Var}(Y)+2ab\, \operatorname{Cov}(X,Y),
:\operatorname{Var}(aX-bY)=a^2\operatorname{Var}(X)+b^2\operatorname{Var}(Y)-2ab\, \operatorname{Cov}(X,Y),
where \operatorname{Cov}(X,Y) is the covariance.
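
These rules are simple to confirm with simulated data. A minimal Python sketch (arbitrary constants and a made-up correlated pair):

    import numpy as np

    rng = np.random.default_rng(7)
    x = rng.normal(size=1_000_000)
    y = 0.5 * x + rng.normal(size=1_000_000)   # correlated with x
    a, b = 3.0, 2.0

    lhs = np.var(a * x + b * y)
    rhs = a**2 * np.var(x) + b**2 * np.var(y) + 2 * a * b * np.cov(x, y, bias=True)[0, 1]
    print(lhs, rhs)                            # approximately equal
    print(np.var(x + 10.0), np.var(x))         # shifting leaves the variance unchanged
    print(np.var(a * x), a**2 * np.var(x))     # scaling multiplies the variance by a^2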


Linear combinations

In general, for the sum of N random variables \{X_1,\dots,X_N\}, the variance becomes:
:\operatorname{Var}\left(\sum_{i=1}^N X_i\right)=\sum_{i,j=1}^N\operatorname{Cov}(X_i,X_j)=\sum_{i=1}^N\operatorname{Var}(X_i)+\sum_{i\ne j}\operatorname{Cov}(X_i,X_j),
see also general Bienaymé's identity. These results lead to the variance of a linear combination as:
:\begin{align}
\operatorname{Var}\left( \sum_{i=1}^N a_iX_i\right) &=\sum_{i,j=1}^{N} a_ia_j\operatorname{Cov}(X_i,X_j) \\
&=\sum_{i=1}^N a_i^2\operatorname{Var}(X_i)+\sum_{i\ne j}a_ia_j\operatorname{Cov}(X_i,X_j)\\
&=\sum_{i=1}^N a_i^2\operatorname{Var}(X_i)+2\sum_{1\le i<j\le N}a_ia_j\operatorname{Cov}(X_i,X_j).
\end{align}
If the random variables X_1,\dots,X_N are such that
:\operatorname{Cov}(X_i,X_j)=0\ ,\ \forall\ (i\ne j) ,
then they are said to be uncorrelated. It follows immediately from the expression given earlier that if the random variables X_1,\dots,X_N are uncorrelated, then the variance of their sum is equal to the sum of their variances, or, expressed symbolically:
:\operatorname{Var}\left(\sum_{i=1}^N X_i\right)=\sum_{i=1}^N\operatorname{Var}(X_i).
Since independent random variables are always uncorrelated, the equation above holds in particular when the random variables X_1,\dots,X_n are independent. Thus, independence is sufficient but not necessary for the variance of the sum to equal the sum of the variances.


Matrix notation for the variance of a linear combination

Define X as a column vector of n random variables X_1, \ldots,X_n, and c as a column vector of n scalars c_1, \ldots,c_n. Therefore, c^\mathsf{T} X is a linear combination of these random variables, where c^\mathsf{T} denotes the transpose of c. Also let \Sigma be the covariance matrix of X. The variance of c^\mathsf{T}X is then given by:
:\operatorname{Var}\left(c^\mathsf{T} X\right) = c^\mathsf{T} \Sigma c .
This implies that the variance of the mean can be written as (with 1 a column vector of ones)
:\operatorname{Var}\left(\bar{X}\right) = \operatorname{Var}\left(\frac{1}{n} 1'X\right) = \frac{1}{n^2} 1'\Sigma 1.
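
The quadratic-form identity is compact to check with NumPy. A sketch (the covariance matrix and the weight vector are made up):

    import numpy as np

    rng = np.random.default_rng(3)
    Sigma = np.array([[2.0, 0.3, 0.1],
                      [0.3, 1.0, 0.2],
                      [0.1, 0.2, 0.5]])       # a valid covariance matrix
    c = np.array([1.0, -2.0, 0.5])            # arbitrary weights

    # Simulate variables with this covariance and compare against c^T Sigma c.
    X = rng.multivariate_normal(np.zeros(3), Sigma, size=1_000_000)
    print(np.var(X @ c), c @ Sigma @ c)       # approximately equal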


Sum of variables


Sum of uncorrelated variables

One reason for the use of the variance in preference to other measures of dispersion is that the variance of the sum (or the difference) of uncorrelated random variables is the sum of their variances:
:\operatorname{Var}\left(\sum_{i=1}^n X_i\right) = \sum_{i=1}^n \operatorname{Var}(X_i).
This statement is called the Bienaymé formula and was discovered in 1853. It is often made with the stronger condition that the variables are independent, but being uncorrelated suffices. So if all the variables have the same variance \sigma^2, then, since division by n is a linear transformation, this formula immediately implies that the variance of their mean is
:\operatorname{Var}\left(\overline{X}\right) = \operatorname{Var}\left(\frac{1}{n} \sum_{i=1}^n X_i\right) = \frac{1}{n^2}\sum_{i=1}^n \operatorname{Var}\left(X_i\right) = \frac{1}{n^2}n\sigma^2 = \frac{\sigma^2}{n}.
That is, the variance of the mean decreases when n increases. This formula for the variance of the mean is used in the definition of the standard error of the sample mean, which is used in the central limit theorem.

To prove the initial statement, it suffices to show that
:\operatorname{Var}(X + Y) = \operatorname{Var}(X) + \operatorname{Var}(Y).
The general result then follows by induction. Starting with the definition,
:\begin{align}
\operatorname{Var}(X + Y) &= \operatorname{E}\left[(X + Y)^2\right] - (\operatorname{E}[X + Y])^2 \\[4pt]
&= \operatorname{E}\left[X^2 + 2XY + Y^2\right] - (\operatorname{E}[X] + \operatorname{E}[Y])^2.
\end{align}
Using the linearity of the expectation operator and the assumption of independence (or uncorrelatedness) of X and Y, this further simplifies as follows:
:\begin{align}
\operatorname{Var}(X + Y) &= \operatorname{E}\left[X^2\right] + 2\operatorname{E}[XY] + \operatorname{E}\left[Y^2\right] - \left(\operatorname{E}[X]^2 + 2\operatorname{E}[X]\operatorname{E}[Y] + \operatorname{E}[Y]^2\right) \\[4pt]
&= \operatorname{E}\left[X^2\right] + \operatorname{E}\left[Y^2\right] - \operatorname{E}[X]^2 - \operatorname{E}[Y]^2 \\[4pt]
&= \operatorname{Var}(X) + \operatorname{Var}(Y).
\end{align}
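
The \sigma^2/n behaviour of the mean is easy to see empirically. A small Python sketch (simulated i.i.d. normal data; all parameters are arbitrary):

    import numpy as np

    rng = np.random.default_rng(11)
    sigma2 = 4.0
    for n in (10, 100, 1000):
        # 20,000 independent sample means, each from n i.i.d. draws with variance sigma2
        means = rng.normal(0.0, np.sqrt(sigma2), size=(20_000, n)).mean(axis=1)
        print(n, means.var(), sigma2 / n)      # empirical variance of the mean vs sigma^2/n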


Sum of correlated variables


Sum of correlated variables with fixed sample size

In general, the variance of the sum of n variables is the sum of their covariances:
:\operatorname{Var}\left(\sum_{i=1}^n X_i\right) = \sum_{i=1}^n \sum_{j=1}^n \operatorname{Cov}\left(X_i, X_j\right) = \sum_{i=1}^n \operatorname{Var}\left(X_i\right) + 2\sum_{1\le i<j\le n}\operatorname{Cov}\left(X_i, X_j\right).
(Note: The second equality comes from the fact that \operatorname{Cov}(X_i, X_i) = \operatorname{Var}(X_i).) Here, \operatorname{Cov}(\cdot,\cdot) is the covariance, which is zero for independent random variables (if it exists). The formula states that the variance of a sum is equal to the sum of all elements in the covariance matrix of the components. The next expression states equivalently that the variance of the sum is the sum of the diagonal of the covariance matrix plus two times the sum of its upper triangular elements (or its lower triangular elements); this emphasizes that the covariance matrix is symmetric. This formula is used in the theory of Cronbach's alpha in classical test theory.

So if the variables have equal variance \sigma^2 and the average correlation of distinct variables is \rho, then the variance of their mean is
:\operatorname{Var}\left(\overline{X}\right) = \frac{\sigma^2}{n} + \frac{n - 1}{n}\rho\sigma^2.
This implies that the variance of the mean increases with the average of the correlations. In other words, additional correlated observations are not as effective as additional independent observations at reducing the uncertainty of the mean. Moreover, if the variables have unit variance, for example if they are standardized, then this simplifies to
:\operatorname{Var}\left(\overline{X}\right) = \frac{1}{n} + \frac{n - 1}{n}\rho.
This formula is used in the Spearman–Brown prediction formula of classical test theory. This converges to \rho if n goes to infinity, provided that the average correlation remains constant or converges too. So for the variance of the mean of standardized variables with equal correlations or converging average correlation we have
:\lim_{n \to \infty} \operatorname{Var}\left(\overline{X}\right) = \rho.
Therefore, the variance of the mean of a large number of standardized variables is approximately equal to their average correlation. This makes clear that the sample mean of correlated variables does not generally converge to the population mean, even though the law of large numbers states that the sample mean will converge for independent variables.
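
A short simulation of the equicorrelated case illustrates the limit (sketch; \rho and n are arbitrary):

    import numpy as np

    rng = np.random.default_rng(5)
    n, rho = 50, 0.3
    # Equicorrelated, unit-variance covariance matrix
    Sigma = np.full((n, n), rho) + (1.0 - rho) * np.eye(n)

    X = rng.multivariate_normal(np.zeros(n), Sigma, size=200_000)
    print(X.mean(axis=1).var())                # empirical variance of the mean
    print(1.0 / n + (n - 1) / n * rho)         # 1/n + ((n-1)/n) * rho, close to rho for large n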


Sum of uncorrelated variables with random sample size

There are cases when a sample is taken without knowing, in advance, how many observations will be acceptable according to some criterion. In such cases, the sample size N is a random variable whose variation adds to the variation of X, such that
:\operatorname{Var}\left(\sum_{i=1}^{N}X_i\right)=\operatorname{E}\left[N\right]\operatorname{Var}(X)+\operatorname{Var}(N)\operatorname{E}\left[X\right]^2,
which follows from the law of total variance. If N has a Poisson distribution, then \operatorname{E}[N] = \operatorname{Var}(N) with estimator n = N. So, the estimator of \operatorname{Var}\left(\sum_{i=1}^{N}X_i\right) becomes nS_x^2 + n\bar{X}^2, giving the standard error of the sample mean
:\operatorname{SE}(\bar{X})=\sqrt{\frac{S_x^2+\bar{X}^2}{n}}.


Weighted sum of variables

The scaling property and the Bienaymé formula, along with the property of the covariance \operatorname{Cov}(aX, bY) = ab\operatorname{Cov}(X, Y), jointly imply that
:\operatorname{Var}(aX \pm bY) =a^2 \operatorname{Var}(X) + b^2 \operatorname{Var}(Y) \pm 2ab\, \operatorname{Cov}(X, Y).
This implies that in a weighted sum of variables, the variable with the largest weight will have a disproportionally large weight in the variance of the total. For example, if X and Y are uncorrelated and the weight of X is two times the weight of Y, then the weight of the variance of X will be four times the weight of the variance of Y. The expression above can be extended to a weighted sum of multiple variables:
:\operatorname{Var}\left(\sum_{i=1}^n a_iX_i\right) = \sum_{i=1}^n a_i^2 \operatorname{Var}(X_i) + 2\sum_{1\le i<j\le n}a_ia_j\operatorname{Cov}(X_i,X_j)


Product of variables


Product of independent variables

If two variables X and Y are independent, the variance of their product is given by
:\operatorname{Var}(XY) = [\operatorname{E}(X)]^2 \operatorname{Var}(Y) + [\operatorname{E}(Y)]^2 \operatorname{Var}(X) + \operatorname{Var}(X)\operatorname{Var}(Y).
Equivalently, using the basic properties of expectation, it is given by
:\operatorname{Var}(XY) = \operatorname{E}\left(X^2\right) \operatorname{E}\left(Y^2\right) - [\operatorname{E}(X)]^2 [\operatorname{E}(Y)]^2.


Product of statistically dependent variables

In general, if two variables are statistically dependent, then the variance of their product is given by:
:\begin{align}
\operatorname{Var}(XY) = {}&\operatorname{E}\left[X^2 Y^2\right] - [\operatorname{E}(XY)]^2 \\[4pt]
= {}&\operatorname{Cov}\left(X^2, Y^2\right) + \operatorname{E}(X^2)\operatorname{E}\left(Y^2\right) - [\operatorname{E}(XY)]^2 \\[4pt]
= {}&\operatorname{Cov}\left(X^2, Y^2\right) + \left(\operatorname{Var}(X) + [\operatorname{E}(X)]^2\right)\left(\operatorname{Var}(Y) + [\operatorname{E}(Y)]^2\right) \\[4pt]
&- [\operatorname{Cov}(X, Y) + \operatorname{E}(X)\operatorname{E}(Y)]^2
\end{align}


Arbitrary functions

The delta method uses second-order Taylor expansions to approximate the variance of a function of one or more random variables: see Taylor expansions for the moments of functions of random variables. For example, the approximate variance of a function of one variable is given by
:\operatorname{Var}\left[f(X)\right] \approx \left(f'(\operatorname{E}\left[X\right])\right)^2\operatorname{Var}\left[X\right]
provided that f is twice differentiable and that the mean and variance of X are finite.
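
As an illustration of the one-variable approximation, take f(x) = e^x with a small input variance (a sketch; the mean and standard deviation are made up):

    import numpy as np

    rng = np.random.default_rng(9)
    mu, sigma = 1.0, 0.05                      # small variance, so the expansion is accurate
    x = rng.normal(mu, sigma, size=1_000_000)

    approx = np.exp(mu)**2 * sigma**2          # (f'(E[X]))^2 Var(X), since (e^x)' = e^x
    print(np.var(np.exp(x)), approx)           # close when sigma is small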


Population variance and sample variance

Real-world observations such as the measurements of yesterday's rain throughout the day typically cannot be complete sets of all possible observations that could be made. As such, the variance calculated from the finite set will in general not match the variance that would have been calculated from the full population of possible observations. This means that one estimates the mean and variance from a limited set of observations by using an estimator equation. The estimator is a function of the sample of n observations drawn without observational bias from the whole population of potential observations. In this example, that sample would be the set of actual measurements of yesterday's rainfall from available rain gauges within the geography of interest.

The simplest estimators for the population mean and population variance are simply the mean and variance of the sample, the sample mean and (uncorrected) sample variance – these are consistent estimators (they converge to the correct value as the number of samples increases), but can be improved. Estimating the population variance by taking the sample's variance is close to optimal in general, but can be improved in two ways. Most simply, the sample variance is computed as an average of squared deviations about the (sample) mean, by dividing by n. However, using values other than n improves the estimator in various ways. Four common values for the denominator are n, n − 1, n + 1, and n − 1.5: n is the simplest (population variance of the sample), n − 1 eliminates bias, n + 1 minimizes mean squared error for the normal distribution, and n − 1.5 mostly eliminates bias in unbiased estimation of standard deviation for the normal distribution.

Firstly, if the true population mean is unknown, then the sample variance (which uses the sample mean in place of the true mean) is a biased estimator: it underestimates the variance by a factor of (n − 1) / n; correcting by this factor (dividing by n − 1 instead of n) is called ''Bessel's correction''. The resulting estimator is unbiased, and is called the (corrected) sample variance or unbiased sample variance. For example, when n = 1 the variance of a single observation about the sample mean (itself) is obviously zero regardless of the population variance. If the mean is determined in some other way than from the same samples used to estimate the variance, then this bias does not arise and the variance can safely be estimated as that of the samples about the (independently known) mean.

Secondly, the sample variance does not generally minimize mean squared error between sample variance and population variance. Correcting for bias often makes this worse: one can always choose a scale factor that performs better than the corrected sample variance, though the optimal scale factor depends on the excess kurtosis of the population (see mean squared error: variance), and introduces bias. This always consists of scaling down the unbiased estimator (dividing by a number larger than n − 1), and is a simple example of a shrinkage estimator: one "shrinks" the unbiased estimator towards zero. For the normal distribution, dividing by n + 1 (instead of n − 1 or n) minimizes mean squared error. The resulting estimator is biased, however, and is known as the biased sample variance.
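
The trade-offs among the n, n − 1, and n + 1 denominators can be seen in a small simulation for normal data (a sketch; the sample size and variance are arbitrary):

    import numpy as np

    rng = np.random.default_rng(13)
    sigma2, n, reps = 1.0, 10, 200_000
    samples = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
    ss = np.sum((samples - samples.mean(axis=1, keepdims=True))**2, axis=1)

    for denom in (n, n - 1, n + 1):
        est = ss / denom
        bias = est.mean() - sigma2
        mse = np.mean((est - sigma2)**2)
        print(denom, round(bias, 4), round(mse, 4))
    # n - 1 gives (nearly) zero bias; n + 1 gives the smallest mean squared error.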


Population variance

In general, the ''population variance'' of a ''finite'' population of size N with values x_i is given by
:\begin{align}
\sigma^2 &= \frac{1}{N} \sum_{i=1}^N \left(x_i - \mu\right)^2 = \frac{1}{N} \sum_{i=1}^N \left(x_i^2 - 2\mu x_i + \mu^2 \right) \\[4pt]
&= \left(\frac{1}{N} \sum_{i=1}^N x_i^2\right) - 2\mu \left(\frac{1}{N} \sum_{i=1}^N x_i\right) + \mu^2 \\[4pt]
&= \left(\frac{1}{N} \sum_{i=1}^N x_i^2\right) - \mu^2
\end{align}
where the population mean is
:\mu = \frac{1}{N} \sum_{i=1}^N x_i.
The population variance can also be computed using
:\sigma^2 = \frac{1}{N^2} \sum_{i<j}\left( x_i-x_j \right)^2 = \frac{1}{2N^2} \sum_{i,j=1}^N\left( x_i-x_j \right)^2.
This is true because
:\begin{align}
&\frac{1}{2N^2} \sum_{i,j=1}^N\left( x_i - x_j \right)^2 \\[4pt]
= {}&\frac{1}{2N^2} \sum_{i,j=1}^N\left( x_i^2 - 2x_i x_j + x_j^2 \right) \\[4pt]
= {}&\frac{1}{2N} \sum_{j=1}^N\left(\frac{1}{N} \sum_{i=1}^N x_i^2\right) - \left(\frac{1}{N} \sum_{i=1}^N x_i\right)\left(\frac{1}{N} \sum_{j=1}^N x_j\right) + \frac{1}{2N} \sum_{i=1}^N\left(\frac{1}{N} \sum_{j=1}^N x_j^2\right) \\[4pt]
= {}&\frac{1}{2} \left( \sigma^2 + \mu^2 \right) - \mu^2 + \frac{1}{2} \left( \sigma^2 + \mu^2 \right) \\[4pt]
= {}&\sigma^2
\end{align}
The population variance matches the variance of the generating probability distribution. In this sense, the concept of population can be extended to continuous random variables with infinite populations.


Sample variance


Biased sample variance

In many practical situations, the true variance of a population is not known ''a priori'' and must be computed somehow. When dealing with extremely large populations, it is not possible to count every object in the population, so the computation must be performed on a sample of the population. Sample variance can also be applied to the estimation of the variance of a continuous distribution from a sample of that distribution.

We take a sample with replacement of n values Y_1, \ldots, Y_n from the population, where n < N, and estimate the variance on the basis of this sample. Directly taking the variance of the sample data gives the average of the squared deviations:
:\tilde{S}_Y^2 = \frac{1}{n} \sum_{i=1}^n \left(Y_i - \overline{Y}\right)^2 = \left(\frac{1}{n} \sum_{i=1}^n Y_i^2\right) - \overline{Y}^2 = \frac{1}{n^2} \sum_{i<j}\left(Y_i - Y_j\right)^2.
Here, \overline{Y} denotes the sample mean:
:\overline{Y} = \frac{1}{n} \sum_{i=1}^n Y_i .
Since the Y_i are selected randomly, both \overline{Y} and \tilde{S}_Y^2 are random variables. Their expected values can be evaluated by averaging over the ensemble of all possible samples of size n from the population. For \tilde{S}_Y^2 this gives:
:\begin{align}
\operatorname{E}[\tilde{S}_Y^2] &= \operatorname{E}\left[ \frac{1}{n} \sum_{i=1}^n \left(Y_i - \frac{1}{n} \sum_{j=1}^n Y_j \right)^2 \right] \\[4pt]
&= \frac{1}{n} \sum_{i=1}^n \operatorname{E}\left[ Y_i^2 - \frac{2}{n} Y_i \sum_{j=1}^n Y_j + \frac{1}{n^2} \sum_{j=1}^n Y_j \sum_{k=1}^n Y_k \right] \\[4pt]
&= \frac{1}{n} \sum_{i=1}^n \left( \frac{n - 2}{n} \operatorname{E}\left[Y_i^2\right] - \frac{2}{n} \sum_{j \ne i} \operatorname{E}\left[Y_i Y_j\right] + \frac{1}{n^2} \sum_{j=1}^n \sum_{k \ne j}^n \operatorname{E}\left[Y_j Y_k\right] + \frac{1}{n^2} \sum_{j=1}^n \operatorname{E}\left[Y_j^2\right] \right) \\[4pt]
&= \frac{1}{n} \sum_{i=1}^n \left[ \frac{n - 2}{n} \left(\sigma^2 + \mu^2\right) - \frac{2}{n} (n - 1)\mu^2 + \frac{1}{n^2} n(n - 1)\mu^2 + \frac{1}{n} \left(\sigma^2 + \mu^2\right) \right] \\[4pt]
&= \frac{n - 1}{n} \sigma^2.
\end{align}
Hence \tilde{S}_Y^2 gives an estimate of the population variance that is biased by a factor of \frac{n-1}{n}. For this reason, \tilde{S}_Y^2 is referred to as the ''biased sample variance''.


Unbiased sample variance

Correcting for this bias yields the ''unbiased sample variance'', denoted S^2:
:S^2 = \frac{n}{n - 1} \tilde{S}_Y^2 = \frac{n}{n - 1} \left[ \frac{1}{n} \sum_{i=1}^n \left(Y_i - \overline{Y}\right)^2 \right] = \frac{1}{n - 1} \sum_{i=1}^n \left(Y_i - \overline{Y} \right)^2
Either estimator may be simply referred to as the ''sample variance'' when the version can be determined by context. The same proof is also applicable for samples taken from a continuous probability distribution.

The use of the term n − 1 is called Bessel's correction, and it is also used in sample covariance and the sample standard deviation (the square root of variance). The square root is a concave function and thus introduces negative bias (by Jensen's inequality), which depends on the distribution, and thus the corrected sample standard deviation (using Bessel's correction) is biased. The unbiased estimation of standard deviation is a technically involved problem, though for the normal distribution using the term n − 1.5 yields an almost unbiased estimator.

The unbiased sample variance is a U-statistic for the function f(y_1, y_2) = (y_1 - y_2)^2/2, meaning that it is obtained by averaging a 2-sample statistic over 2-element subsets of the population.


Distribution of the sample variance

Being a function of random variables, the sample variance is itself a random variable, and it is natural to study its distribution. In the case that Y_i are independent observations from a normal distribution, Cochran's theorem shows that S^2 follows a scaled chi-squared distribution (see also: asymptotic properties):
:(n - 1)\frac{S^2}{\sigma^2}\sim\chi^2_{n-1}.
As a direct consequence, it follows that
:\operatorname{E}\left(S^2\right) = \operatorname{E}\left(\frac{\sigma^2}{n - 1} \chi^2_{n-1}\right) = \sigma^2 ,
and
:\operatorname{Var}\left[S^2\right] = \operatorname{Var}\left(\frac{\sigma^2}{n - 1} \chi^2_{n-1}\right) = \frac{\sigma^4}{(n - 1)^2}\operatorname{Var}\left(\chi^2_{n-1}\right) = \frac{2\sigma^4}{n - 1}.
If the Y_i are independent and identically distributed, but not necessarily normally distributed, then
:\operatorname{E}\left[S^2\right] = \sigma^2, \quad \operatorname{Var}\left[S^2\right] = \frac{\sigma^4}{n} \left(\kappa - 1 + \frac{2}{n - 1} \right) = \frac{1}{n} \left(\mu_4 - \frac{n - 3}{n - 1}\sigma^4\right),
where \kappa is the kurtosis of the distribution and \mu_4 is the fourth central moment.

If the conditions of the law of large numbers hold for the squared observations, S^2 is a consistent estimator of \sigma^2. One can see indeed that the variance of the estimator tends asymptotically to zero. An asymptotically equivalent formula was given in Kenney and Keeping (1951:164), Rose and Smith (2002:264), and Weisstein (n.d.).
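
A simulation check of the normal-theory moments of S^2 (a sketch; n and \sigma^2 are arbitrary):

    import numpy as np

    rng = np.random.default_rng(17)
    n, sigma2, reps = 20, 3.0, 200_000
    samples = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
    s2 = samples.var(axis=1, ddof=1)           # unbiased sample variance of each replicate

    print(s2.mean(), sigma2)                   # E[S^2] = sigma^2
    print(s2.var(), 2 * sigma2**2 / (n - 1))   # Var[S^2] = 2 sigma^4 / (n - 1)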


Samuelson's inequality

Samuelson's inequality is a result that states bounds on the values that individual observations in a sample can take, given that the sample mean and (biased) variance have been calculated. Values must lie within the limits \bar y \pm \sigma_Y (n-1)^{1/2}.


Relations with the harmonic and arithmetic means

It has been shown that for a sample of positive real numbers,
:\sigma_y^2 \le 2y_{\max} (A - H),
where y_{\max} is the maximum of the sample, A is the arithmetic mean, H is the harmonic mean of the sample and \sigma_y^2 is the (biased) variance of the sample. This bound has since been improved: tighter upper and lower bounds on \sigma_y^2 are known that involve the maximum y_{\max}, the minimum y_{\min}, the arithmetic mean A and the harmonic mean H of the sample.


Tests of equality of variances

The F-test of equality of variances and the chi square tests are adequate when the sample is normally distributed. Non-normality makes testing for the equality of two or more variances more difficult.

Several nonparametric tests have been proposed: these include the Barton–David–Ansari–Freund–Siegel–Tukey test, the Capon test, the Mood test, the Klotz test and the Sukhatme test. The Sukhatme test applies to two variances and requires that both medians be known and equal to zero. The Mood, Klotz, Capon and Barton–David–Ansari–Freund–Siegel–Tukey tests also apply to two variances. They allow the median to be unknown but do require that the two medians are equal.

The Lehmann test is a parametric test of two variances. Of this test there are several variants known. Other tests of the equality of variances include the Box test, the Box–Anderson test and the Moses test. Resampling methods, which include the bootstrap and the jackknife, may be used to test the equality of variances.


Moment of inertia

The variance of a probability distribution is analogous to the moment of inertia in classical mechanics of a corresponding mass distribution along a line, with respect to rotation about its center of mass. It is because of this analogy that such things as the variance are called ''moments'' of probability distributions. The covariance matrix is related to the moment of inertia tensor for multivariate distributions. The moment of inertia of a cloud of n points with a covariance matrix of \Sigma is given by
:I = n\left(\mathbf{1}_{3\times 3} \operatorname{tr}(\Sigma) - \Sigma\right).
This difference between moment of inertia in physics and in statistics is clear for points that are gathered along a line. Suppose many points are close to the x axis and distributed along it. The covariance matrix might look like
:\Sigma = \begin{bmatrix}10 & 0 & 0 \\ 0 & 0.1 & 0 \\ 0 & 0 & 0.1\end{bmatrix}.
That is, there is the most variance in the x direction. Physicists would consider this to have a low moment ''about'' the x axis so the moment-of-inertia tensor is
:I = n\begin{bmatrix}0.2 & 0 & 0 \\ 0 & 10.1 & 0 \\ 0 & 0 & 10.1\end{bmatrix}.
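
A tiny NumPy check of the relation, using the diagonal example above (a sketch; the point count n is arbitrary):

    import numpy as np

    n = 1000
    Sigma = np.diag([10.0, 0.1, 0.1])
    I = n * (np.eye(3) * np.trace(Sigma) - Sigma)   # I = n(1 tr(Sigma) - Sigma)
    print(I)   # n * diag(0.2, 10.1, 10.1), matching the tensor in the text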


Semivariance

The ''semivariance'' is calculated in the same manner as the variance but only those observations that fall below the mean are included in the calculation:
:\text{Semivariance} = \frac{1}{n}\sum_{i: x_i < \mu}(x_i - \mu)^2.
It is also described as a specific measure in different fields of application. For skewed distributions, the semivariance can provide additional information that a variance does not. For inequalities associated with the semivariance, see Chebyshev's inequality.
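
A minimal sketch of the computation (assuming the below-mean convention above, with division by the full sample size; the data values are made up):

    import numpy as np

    x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
    mu = x.mean()
    below = x[x < mu]                                  # only observations below the mean
    semivariance = np.sum((below - mu)**2) / len(x)
    print(semivariance)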


Generalizations


For complex variables

If x is a scalar complex-valued random variable, with values in \mathbb{C}, then its variance is \operatorname{E}\left[(x - \mu)(x - \mu)^*\right], where x^* is the complex conjugate of x. This variance is a real scalar.


For vector-valued random variables


As a matrix

If X is a vector-valued random variable, with values in \R^n, and thought of as a column vector, then a natural generalization of variance is \operatorname{E}\left[(X - \mu)(X - \mu)^{\mathsf{T}}\right], where \mu = \operatorname{E}(X) and X^{\mathsf{T}} is the transpose of X, and so is a row vector. The result is a positive semi-definite square matrix, commonly referred to as the variance-covariance matrix (or simply as the ''covariance matrix'').

If X is a vector- and complex-valued random variable, with values in \mathbb{C}^n, then the covariance matrix is \operatorname{E}\left[(X - \mu)(X - \mu)^\dagger\right], where X^\dagger is the conjugate transpose of X. This matrix is also positive semi-definite and square.


As a scalar

Another generalization of variance for vector-valued random variables X, which results in a scalar value rather than in a matrix, is the generalized variance \det(C), the determinant of the covariance matrix. The generalized variance can be shown to be related to the multidimensional scatter of points around their mean.

A different generalization is obtained by considering the Euclidean distance between the random variable and its mean. This results in \operatorname{E}\left[(X - \mu)^{\mathsf{T}}(X - \mu)\right] = \operatorname{tr}(C), which is the trace of the covariance matrix.


See also

* Bhatia–Davis inequality
* Coefficient of variation
* Homoscedasticity
* Least-squares spectral analysis for computing a frequency spectrum with spectral magnitudes in % of variance or in dB
* Popoviciu's inequality on variances
* Measures for statistical dispersion
* Variance-stabilizing transformation


Types of variance

* Correlation
* Distance variance
* Explained variance
* Pooled variance
* Pseudo-variance

