
In probability theory and statistics, covariance is a measure of the joint variability of two random variables. If the greater values of one variable mainly correspond with the greater values of the other variable, and the same holds for the lesser values (that is, the variables tend to show similar behavior), the covariance is positive. In the opposite case, when the greater values of one variable mainly correspond to the lesser values of the other (that is, the variables tend to show opposite behavior), the covariance is negative. The sign of the covariance therefore shows the tendency in the linear relationship between the variables. The magnitude of the covariance is not easy to interpret because it is not normalized and hence depends on the magnitudes of the variables. The normalized version of the covariance, the correlation coefficient, however, shows by its magnitude the strength of the linear relation.

A distinction must be made between (1) the covariance of two random variables, which is a population parameter that can be seen as a property of the joint probability distribution, and (2) the sample covariance, which in addition to serving as a descriptor of the sample, also serves as an estimated value of the population parameter.


Definition

For two jointly distributed real-valued random variables X and Y with finite second moments, the covariance is defined as the expected value (or mean) of the product of their deviations from their individual expected values:

:\operatorname{Cov}(X, Y) = \operatorname{E}\left[(X - \operatorname{E}[X])(Y - \operatorname{E}[Y])\right]

where \operatorname{E}[X] is the expected value of X, also known as the mean of X. The covariance is also sometimes denoted \sigma_{XY} or \sigma(X, Y), in analogy to variance. By using the linearity property of expectations, this can be simplified to the expected value of their product minus the product of their expected values:

:\begin{align}
\operatorname{Cov}(X, Y) &= \operatorname{E}\left[(X - \operatorname{E}[X])(Y - \operatorname{E}[Y])\right] \\
&= \operatorname{E}\left[XY - X\operatorname{E}[Y] - \operatorname{E}[X]Y + \operatorname{E}[X]\operatorname{E}[Y]\right] \\
&= \operatorname{E}[XY] - \operatorname{E}[X]\operatorname{E}[Y] - \operatorname{E}[X]\operatorname{E}[Y] + \operatorname{E}[X]\operatorname{E}[Y] \\
&= \operatorname{E}[XY] - \operatorname{E}[X]\operatorname{E}[Y],
\end{align}

but this equation is susceptible to catastrophic cancellation (see the section on numerical computation below).

The units of measurement of the covariance \operatorname{Cov}(X, Y) are those of X times those of Y. By contrast, correlation coefficients, which depend on the covariance, are a dimensionless measure of linear dependence. (In fact, correlation coefficients can simply be understood as a normalized version of covariance.)
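As a concrete illustration, here is a minimal Python sketch (the function names are illustrative, not from any library) computing the covariance of equally weighted paired samples both by the definitional formula and by the \operatorname{E}[XY] - \operatorname{E}[X]\operatorname{E}[Y] shortcut:

    # Covariance of two equally weighted samples, computed two ways.

    def mean(v):
        return sum(v) / len(v)

    def cov_definition(x, y):
        # E[(X - E[X]) (Y - E[Y])]
        mx, my = mean(x), mean(y)
        return mean([(xi - mx) * (yi - my) for xi, yi in zip(x, y)])

    def cov_shortcut(x, y):
        # E[XY] - E[X] E[Y]; algebraically equal, numerically riskier
        return mean([xi * yi for xi, yi in zip(x, y)]) - mean(x) * mean(y)

    x = [1.0, 2.0, 3.0, 4.0]
    y = [2.0, 4.0, 5.0, 9.0]
    print(cov_definition(x, y))  # 2.75
    print(cov_shortcut(x, y))    # 2.75 up to rounding

Both routines agree on well-scaled data; the shortcut form is the one that can lose precision, as discussed under numerical computation below.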


Definition for complex random variables

The covariance between two complex random variables Z, W is defined as

:\operatorname{Cov}(Z, W) = \operatorname{E}\left[(Z - \operatorname{E}[Z])\overline{(W - \operatorname{E}[W])}\right] = \operatorname{E}\left[Z\overline{W}\right] - \operatorname{E}[Z]\operatorname{E}\left[\overline{W}\right]

Notice the complex conjugation of the second factor in the definition. A related ''pseudo-covariance'' can also be defined.


Discrete random variables

If the (real) random variable pair (X, Y) can take on the values (x_i, y_i) for i = 1, \ldots, n, with equal probabilities p_i = 1/n, then the covariance can be equivalently written in terms of the means \operatorname{E}[X] and \operatorname{E}[Y] as

:\operatorname{Cov}(X, Y) = \frac{1}{n}\sum_{i=1}^n (x_i - \operatorname{E}[X])(y_i - \operatorname{E}[Y]).

It can also be equivalently expressed, without directly referring to the means, as

:\operatorname{Cov}(X, Y) = \frac{1}{n^2}\sum_{i=1}^n \sum_{j=1}^n \frac{1}{2}(x_i - x_j)(y_i - y_j) = \frac{1}{n^2}\sum_i \sum_{j > i} (x_i - x_j)(y_i - y_j).

More generally, if there are n possible realizations of (X, Y), namely (x_i, y_i) but with possibly unequal probabilities p_i for i = 1, \ldots, n, then the covariance is

:\operatorname{Cov}(X, Y) = \sum_{i=1}^n p_i (x_i - \operatorname{E}[X])(y_i - \operatorname{E}[Y]).
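As a check on the equivalence above, a small Python sketch (function names are illustrative) compares the mean-based formula with the pairwise-difference formula:

    # Equal-probability covariance: mean-based vs. pairwise-difference form.

    def cov_means(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        return sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / n

    def cov_pairwise(x, y):
        # (1/n^2) * sum_{i,j} (1/2)(x_i - x_j)(y_i - y_j); no means needed
        n = len(x)
        total = sum((x[i] - x[j]) * (y[i] - y[j])
                    for i in range(n) for j in range(n))
        return total / (2 * n * n)

    x = [1.0, 2.0, 4.0]
    y = [1.0, 3.0, 5.0]
    print(cov_means(x, y), cov_pairwise(x, y))  # both 2.0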


Example

Suppose that X and Y have the following joint probability mass function, in which the six central cells give the discrete joint probabilities f(x, y) of the six hypothetical realizations (x, y) \in S = \{(5, 8), (6, 8), (7, 8), (5, 9), (6, 9), (7, 9)\}:

            x = 5   x = 6   x = 7
    y = 8   0       0.4     0.1
    y = 9   0.3     0       0.2

X can take on three values (5, 6 and 7) while Y can take on two (8 and 9). Their means are \mu_X = 5(0.3) + 6(0.4) + 7(0.1 + 0.2) = 6 and \mu_Y = 8(0.4 + 0.1) + 9(0.3 + 0.2) = 8.5. Then,

:\begin{align}
\operatorname{Cov}(X, Y) = \sigma_{XY} = &\sum_{(x, y) \in S} f(x, y)\left(x - \mu_X\right)\left(y - \mu_Y\right) \\
= &(0)(5 - 6)(8 - 8.5) + (0.4)(6 - 6)(8 - 8.5) + (0.1)(7 - 6)(8 - 8.5) + {} \\
&(0.3)(5 - 6)(9 - 8.5) + (0)(6 - 6)(9 - 8.5) + (0.2)(7 - 6)(9 - 8.5) \\
= &\; -0.1.
\end{align}
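The arithmetic above can be verified with a short Python snippet iterating over the six cells of the joint mass function:

    # Verify the worked example: E[(X - mu_X)(Y - mu_Y)] over the joint pmf.
    pmf = {(5, 8): 0.0, (6, 8): 0.4, (7, 8): 0.1,
           (5, 9): 0.3, (6, 9): 0.0, (7, 9): 0.2}

    mu_x = sum(p * x for (x, y), p in pmf.items())  # 6.0
    mu_y = sum(p * y for (x, y), p in pmf.items())  # 8.5
    cov = sum(p * (x - mu_x) * (y - mu_y) for (x, y), p in pmf.items())
    print(cov)  # -0.1 (up to floating-point rounding)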


Properties


Covariance with itself

The variance is a special case of the covariance in which the two variables are identical (that is, in which one variable always takes the same value as the other):

:\operatorname{Cov}(X, X) = \operatorname{Var}(X) \equiv \sigma^2(X) \equiv \sigma_X^2.


Covariance of linear combinations

If X, Y, W, and V are real-valued random variables and a, b, c, d are real-valued constants, then the following facts are a consequence of the definition of covariance:

:\begin{align}
\operatorname{Cov}(X, a) &= 0 \\
\operatorname{Cov}(X, X) &= \operatorname{Var}(X) \\
\operatorname{Cov}(X, Y) &= \operatorname{Cov}(Y, X) \\
\operatorname{Cov}(aX, bY) &= ab\,\operatorname{Cov}(X, Y) \\
\operatorname{Cov}(X + a, Y + b) &= \operatorname{Cov}(X, Y) \\
\operatorname{Cov}(aX + bY, cW + dV) &= ac\,\operatorname{Cov}(X, W) + ad\,\operatorname{Cov}(X, V) + bc\,\operatorname{Cov}(Y, W) + bd\,\operatorname{Cov}(Y, V)
\end{align}

For a sequence X_1, \ldots, X_n of real-valued random variables and constants a_1, \ldots, a_n, we have

:\operatorname{Var}\left(\sum_{i=1}^n a_i X_i\right) = \sum_{i=1}^n a_i^2 \sigma^2(X_i) + 2\sum_{i < j} a_i a_j \operatorname{Cov}(X_i, X_j) = \sum_{i, j} a_i a_j \operatorname{Cov}(X_i, X_j).
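The last identity can be checked numerically; a sketch using NumPy (assumed available), comparing the sample variance of a linear combination against the quadratic form aᵀΣa, which holds exactly for the sample covariance as well:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(3, 10_000))   # three series, rows = variables
    a = np.array([2.0, -1.0, 0.5])

    Sigma = np.cov(X)                  # 3x3 sample covariance matrix
    lhs = np.var(a @ X, ddof=1)        # Var(sum_i a_i X_i), sample version
    rhs = a @ Sigma @ a                # sum_{i,j} a_i a_j Cov(X_i, X_j)
    print(np.isclose(lhs, rhs))        # True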


Hoeffding's covariance identity

A useful identity to compute the covariance between two random variables X, Y is Hoeffding's covariance identity:

:\operatorname{Cov}(X, Y) = \int_{\mathbb{R}} \int_{\mathbb{R}} \left(F_{(X,Y)}(x, y) - F_X(x) F_Y(y)\right) \,dx \,dy

where F_{(X,Y)}(x, y) is the joint cumulative distribution function of the random vector (X, Y) and F_X(x), F_Y(y) are the marginals.


Uncorrelatedness and independence

Random variables whose covariance is zero are called uncorrelated. Similarly, the components of random vectors whose covariance matrix is zero in every entry outside the main diagonal are also called uncorrelated. If X and Y are independent random variables, then their covariance is zero. This follows because under independence,

:\operatorname{E}[XY] = \operatorname{E}[X] \cdot \operatorname{E}[Y].

The converse, however, is not generally true. For example, let X be uniformly distributed in [-1, 1] and let Y = X^2. Clearly, X and Y are not independent, but

:\begin{align}
\operatorname{Cov}(X, Y) &= \operatorname{Cov}\left(X, X^2\right) \\
&= \operatorname{E}\left[X \cdot X^2\right] - \operatorname{E}[X] \cdot \operatorname{E}\left[X^2\right] \\
&= \operatorname{E}\left[X^3\right] - \operatorname{E}[X]\operatorname{E}\left[X^2\right] \\
&= 0 - 0 \cdot \operatorname{E}\left[X^2\right] \\
&= 0.
\end{align}

In this case, the relationship between Y and X is non-linear, while correlation and covariance are measures of linear dependence between two random variables. This example shows that if two random variables are uncorrelated, that does not in general imply that they are independent. However, if two variables are jointly normally distributed (but not if they are merely individually normally distributed), uncorrelatedness ''does'' imply independence.


Relationship to inner products

Many of the properties of covariance can be extracted elegantly by observing that it satisfies similar properties to those of an inner product:

# bilinear: for constants a and b and random variables X, Y, Z, \operatorname{Cov}(aX + bY, Z) = a\,\operatorname{Cov}(X, Z) + b\,\operatorname{Cov}(Y, Z)
# symmetric: \operatorname{Cov}(X, Y) = \operatorname{Cov}(Y, X)
# positive semi-definite: \sigma^2(X) = \operatorname{Cov}(X, X) \ge 0 for all random variables X, and \operatorname{Cov}(X, X) = 0 implies that X is constant almost surely.

In fact these properties imply that the covariance defines an inner product over the quotient vector space obtained by taking the subspace of random variables with finite second moment and identifying any two that differ by a constant. (This identification turns the positive semi-definiteness above into positive definiteness.) That quotient vector space is isomorphic to the subspace of random variables with finite second moment and mean zero; on that subspace, the covariance is exactly the L2 inner product of real-valued functions on the sample space.

As a result, for random variables with finite variance, the inequality

:\left|\operatorname{Cov}(X, Y)\right| \le \sqrt{\sigma^2(X)\sigma^2(Y)}

holds via the Cauchy–Schwarz inequality.

Proof: If \sigma^2(Y) = 0, then it holds trivially. Otherwise, let the random variable

:Z = X - \frac{\operatorname{Cov}(X, Y)}{\sigma^2(Y)} Y.

Then we have

:\begin{align}
0 \le \sigma^2(Z) &= \operatorname{Cov}\left(X - \frac{\operatorname{Cov}(X, Y)}{\sigma^2(Y)} Y,\; X - \frac{\operatorname{Cov}(X, Y)}{\sigma^2(Y)} Y\right) \\
&= \sigma^2(X) - \frac{(\operatorname{Cov}(X, Y))^2}{\sigma^2(Y)},
\end{align}

which rearranges to the inequality.


Calculating the sample covariance

The sample covariances among K variables based on N observations of each, drawn from an otherwise unobserved population, are given by the K \times K matrix \overline{\mathbf{q}} = \left[q_{jk}\right] with the entries

:q_{jk} = \frac{1}{N - 1}\sum_{i=1}^N \left(X_{ij} - \bar{X}_j\right)\left(X_{ik} - \bar{X}_k\right),

which is an estimate of the covariance between variable j and variable k.

The sample mean and the sample covariance matrix are unbiased estimates of the mean and the covariance matrix of the random vector \mathbf{X}, a vector whose jth element (j = 1, \ldots, K) is one of the random variables. The reason the sample covariance matrix has N - 1 in the denominator rather than N is essentially that the population mean \operatorname{E}(\mathbf{X}) is not known and is replaced by the sample mean \bar{\mathbf{X}}. If the population mean \operatorname{E}(\mathbf{X}) is known, the analogous unbiased estimate is given by

:q_{jk} = \frac{1}{N}\sum_{i=1}^N \left(X_{ij} - \operatorname{E}\left(X_j\right)\right)\left(X_{ik} - \operatorname{E}\left(X_k\right)\right).


Generalizations


Auto-covariance matrix of real random vectors

For a vector \mathbf{X} = \begin{bmatrix} X_1 & X_2 & \dots & X_m \end{bmatrix}^\mathrm{T} of m jointly distributed random variables with finite second moments, its auto-covariance matrix (also known as the variance–covariance matrix or simply the covariance matrix) \operatorname{K}_{\mathbf{X}\mathbf{X}} (also denoted by \Sigma(\mathbf{X}) or \operatorname{cov}(\mathbf{X}, \mathbf{X})) is defined as

:\begin{align}
\operatorname{K}_{\mathbf{X}\mathbf{X}} = \operatorname{cov}(\mathbf{X}, \mathbf{X}) &= \operatorname{E}\left[(\mathbf{X} - \operatorname{E}[\mathbf{X}])(\mathbf{X} - \operatorname{E}[\mathbf{X}])^\mathrm{T}\right] \\
&= \operatorname{E}\left[\mathbf{X}\mathbf{X}^\mathrm{T}\right] - \operatorname{E}[\mathbf{X}]\operatorname{E}[\mathbf{X}]^\mathrm{T}.
\end{align}

Let \mathbf{X} be a random vector with covariance matrix \Sigma, and let \mathbf{A} be a matrix that can act on \mathbf{X} on the left. The covariance matrix of the matrix-vector product \mathbf{A}\mathbf{X} is:

:\begin{align}
\operatorname{cov}(\mathbf{A}\mathbf{X}, \mathbf{A}\mathbf{X}) &= \operatorname{E}\left[\mathbf{A}\mathbf{X}\mathbf{X}^\mathrm{T}\mathbf{A}^\mathrm{T}\right] - \operatorname{E}[\mathbf{A}\mathbf{X}]\operatorname{E}\left[\mathbf{X}^\mathrm{T}\mathbf{A}^\mathrm{T}\right] \\
&= \mathbf{A}\operatorname{E}\left[\mathbf{X}\mathbf{X}^\mathrm{T}\right]\mathbf{A}^\mathrm{T} - \mathbf{A}\operatorname{E}[\mathbf{X}]\operatorname{E}\left[\mathbf{X}^\mathrm{T}\right]\mathbf{A}^\mathrm{T} \\
&= \mathbf{A}\left(\operatorname{E}\left[\mathbf{X}\mathbf{X}^\mathrm{T}\right] - \operatorname{E}[\mathbf{X}]\operatorname{E}\left[\mathbf{X}^\mathrm{T}\right]\right)\mathbf{A}^\mathrm{T} \\
&= \mathbf{A}\Sigma\mathbf{A}^\mathrm{T}.
\end{align}

This is a direct result of the linearity of expectation and is useful when applying a linear transformation, such as a whitening transformation, to a vector.


Cross-covariance matrix of real random vectors

For real random vectors \mathbf{X} \in \mathbb{R}^m and \mathbf{Y} \in \mathbb{R}^n, the m \times n cross-covariance matrix is equal to

:\begin{align}
\operatorname{K}_{\mathbf{X}\mathbf{Y}} = \operatorname{cov}(\mathbf{X}, \mathbf{Y}) &= \operatorname{E}\left[(\mathbf{X} - \operatorname{E}[\mathbf{X}])(\mathbf{Y} - \operatorname{E}[\mathbf{Y}])^\mathrm{T}\right] \\
&= \operatorname{E}\left[\mathbf{X}\mathbf{Y}^\mathrm{T}\right] - \operatorname{E}[\mathbf{X}]\operatorname{E}[\mathbf{Y}]^\mathrm{T}
\end{align}

where \mathbf{Y}^\mathrm{T} is the transpose of the vector (or matrix) \mathbf{Y}. The (i, j)-th element of this matrix is equal to the covariance \operatorname{cov}(X_i, Y_j) between the i-th scalar component of \mathbf{X} and the j-th scalar component of \mathbf{Y}. In particular, \operatorname{cov}(\mathbf{Y}, \mathbf{X}) is the transpose of \operatorname{cov}(\mathbf{X}, \mathbf{Y}).


Cross-covariance sesquilinear form of random vectors in a real or complex Hilbert space

More generally let H_1 = (H_1, \langle\,\cdot\,,\cdot\,\rangle_1) and H_2 = (H_2, \langle\,\cdot\,,\cdot\,\rangle_2) be Hilbert spaces over \mathbb{R} or \mathbb{C} with \langle\,\cdot\,,\cdot\,\rangle antilinear in the first variable, and let \mathbf{X}, \mathbf{Y} be H_1- resp. H_2-valued random variables. Then the covariance of \mathbf{X} and \mathbf{Y} is the sesquilinear form on H_1 \times H_2 (antilinear in the first variable) given by

:\begin{align}
\operatorname{K}_{\mathbf{X}\mathbf{Y}}(h_1, h_2) = \operatorname{cov}(\mathbf{X}, \mathbf{Y})(h_1, h_2) &= \operatorname{E}\left[\langle h_1, \mathbf{X} - \operatorname{E}[\mathbf{X}]\rangle_1 \langle \mathbf{Y} - \operatorname{E}[\mathbf{Y}], h_2\rangle_2\right] \\
&= \operatorname{E}[\langle h_1, \mathbf{X}\rangle_1 \langle \mathbf{Y}, h_2\rangle_2] - \operatorname{E}[\langle h_1, \mathbf{X}\rangle_1]\operatorname{E}[\langle \mathbf{Y}, h_2\rangle_2] \\
&= \langle h_1, \operatorname{E}\left[(\mathbf{X} - \operatorname{E}[\mathbf{X}])(\mathbf{Y} - \operatorname{E}[\mathbf{Y}])^\dagger\right] h_2\rangle_1 \\
&= \langle h_1, \left(\operatorname{E}[\mathbf{X}\mathbf{Y}^\dagger] - \operatorname{E}[\mathbf{X}]\operatorname{E}[\mathbf{Y}]^\dagger\right) h_2\rangle_1
\end{align}


Numerical computation

When \operatorname{E}[XY] \approx \operatorname{E}[X]\operatorname{E}[Y], the equation \operatorname{Cov}(X, Y) = \operatorname{E}[XY] - \operatorname{E}[X]\operatorname{E}[Y] is prone to catastrophic cancellation if \operatorname{E}[XY] and \operatorname{E}[X]\operatorname{E}[Y] are not computed exactly, and thus should be avoided in computer programs when the data has not been centered before. Numerically stable algorithms should be preferred in this case.
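A small demonstration of the cancellation problem: shifting the data by a large constant should not change the covariance, yet the naive formula loses precision (a sketch, NumPy assumed available; exact digits vary by platform):

    import numpy as np

    def cov_naive(x, y):
        # E[XY] - E[X]E[Y]: prone to cancellation when the means are large
        n = len(x)
        return (np.dot(x, y) / n) - (x.mean() * y.mean())

    def cov_stable(x, y):
        # Two-pass, centered formula: subtract the means first
        return np.mean((x - x.mean()) * (y - y.mean()))

    rng = np.random.default_rng(3)
    x = rng.normal(size=100_000)
    y = x + rng.normal(size=100_000)
    shift = 1e9                                # huge common offset

    print(cov_stable(x + shift, y + shift))    # close to the true value (~1.0)
    print(cov_naive(x + shift, y + shift))     # can be wildly off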


Comments

The covariance is sometimes called a measure of "linear dependence" between the two random variables. That does not mean the same thing as in the context of linear algebra (see linear dependence). When the covariance is normalized, one obtains the Pearson correlation coefficient, which gives the goodness of the fit for the best possible linear function describing the relation between the variables. In this sense covariance is a linear gauge of dependence.


Applications


In genetics and molecular biology

Covariance is an important measure in biology. Certain sequences of DNA are conserved more than others among species, and thus to study secondary and tertiary structures of proteins, or of RNA structures, sequences are compared in closely related species. If sequence changes are found or no changes at all are found in noncoding RNA (such as microRNA), sequences are found to be necessary for common structural motifs, such as an RNA loop. In genetics, covariance serves as a basis for computation of the genetic relationship matrix (GRM), also known as the kinship matrix, enabling inference on population structure from a sample with no known close relatives as well as estimation of the heritability of complex traits.

In the theory of evolution and natural selection, the Price equation describes how a genetic trait changes in frequency over time. The equation uses a covariance between a trait and fitness to give a mathematical description of evolution and natural selection. It provides a way to understand the effects that gene transmission and natural selection have on the proportion of genes within each new generation of a population. The Price equation was derived by George R. Price to re-derive W. D. Hamilton's work on kin selection. Examples of the Price equation have been constructed for various evolutionary cases.


In financial economics

Covariances play a key role in financial economics, especially in modern portfolio theory and in the capital asset pricing model. Covariances among various assets' returns are used to determine, under certain assumptions, the relative amounts of different assets that investors should (in a normative analysis) or are predicted to (in a positive analysis) choose to hold in a context of diversification.


In meteorological and oceanographic data assimilation

The covariance matrix is important in estimating the initial conditions required for running weather forecast models, a procedure known as data assimilation. The 'forecast error covariance matrix' is typically constructed between perturbations around a mean state (either a climatological or ensemble mean). The 'observation error covariance matrix' is constructed to represent the magnitude of combined observational errors (on the diagonal) and the correlated errors between measurements (off the diagonal). This is an example of its widespread application to Kalman filtering and more general state estimation for time-varying systems.


In micrometeorology

The eddy covariance technique is a key atmospheric measurement technique in which the covariance between the instantaneous deviation in vertical wind speed from the mean value and the instantaneous deviation in gas concentration is the basis for calculating the vertical turbulent fluxes.


In signal processing

The covariance matrix is used to capture the spectral variability of a signal.


In statistics and image processing

The covariance matrix is used in principal component analysis to reduce feature dimensionality in data preprocessing.
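A minimal sketch of this use, diagonalizing the sample covariance matrix and keeping the leading eigenvectors as principal components (NumPy assumed available):

    import numpy as np

    rng = np.random.default_rng(11)
    X = rng.normal(size=(200, 5))            # 200 observations, 5 features

    Xc = X - X.mean(axis=0)                  # center the data
    Sigma = np.cov(Xc, rowvar=False)         # 5x5 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(Sigma) # eigenvalues in ascending order

    k = 2
    components = eigvecs[:, -k:]             # top-k principal directions
    X_reduced = Xc @ components              # project to k dimensions
    print(X_reduced.shape)                   # (200, 2)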


See also

* Algorithms for calculating covariance
* Analysis of covariance
* Autocovariance
* Covariance function
* Covariance operator
* Distance covariance, or Brownian covariance
* Law of total covariance
* Propagation of uncertainty

