Covariance
In probability theory and statistics, covariance is a measure of the joint variability of two random variables. If the greater values of one variable mainly correspond with the greater values of the other variable, and the same holds for the lesser values (that is, the variables tend to show similar behavior), the covariance is positive. In the opposite case, when the greater values of one variable mainly correspond to the lesser values of the other (that is, the variables tend to show opposite behavior), the covariance is negative. The sign of the covariance therefore shows the tendency in the linear relationship between the variables. The magnitude of the covariance is not easy to interpret because it is not normalized and hence depends on the magnitudes of the variables. The normalized version of the covariance, the correlation coefficient, however, shows by its magnitude the strength of the linear relation.

A distinction must be made between (1) the covariance of two random variables, which is a population parameter that can be seen as a property of the joint probability distribution, and (2) the sample covariance, which in addition to serving as a descriptor of the sample, also serves as an estimated value of the population parameter.


Definition

For two jointly distributed real-valued random variables X and Y with finite second moments, the covariance is defined as the expected value (or mean) of the product of their deviations from their individual expected values:

:\operatorname{cov}(X, Y) = \operatorname{E}\left[(X - \operatorname{E}[X])(Y - \operatorname{E}[Y])\right]

where \operatorname{E}[X] is the expected value of X, also known as the mean of X. The covariance is also sometimes denoted \sigma_{XY} or \sigma(X, Y), in analogy to variance. By using the linearity property of expectations, this can be simplified to the expected value of their product minus the product of their expected values:

:\begin{align} \operatorname{cov}(X, Y) &= \operatorname{E}\left[(X - \operatorname{E}[X])(Y - \operatorname{E}[Y])\right] \\ &= \operatorname{E}\left[XY - X\operatorname{E}[Y] - \operatorname{E}[X]Y + \operatorname{E}[X]\operatorname{E}[Y]\right] \\ &= \operatorname{E}[XY] - \operatorname{E}[X]\operatorname{E}[Y] - \operatorname{E}[X]\operatorname{E}[Y] + \operatorname{E}[X]\operatorname{E}[Y] \\ &= \operatorname{E}[XY] - \operatorname{E}[X]\operatorname{E}[Y], \end{align}

but this equation is susceptible to catastrophic cancellation (see the section on numerical computation below).

The units of measurement of the covariance \operatorname{cov}(X, Y) are those of X times those of Y. By contrast, correlation coefficients, which depend on the covariance, are a dimensionless measure of linear dependence. (In fact, correlation coefficients can simply be understood as a normalized version of covariance.)


Definition for complex random variables

The covariance between two complex random variables Z, W is defined as

:\operatorname{cov}(Z, W) = \operatorname{E}\left[(Z - \operatorname{E}[Z])\overline{(W - \operatorname{E}[W])}\right] = \operatorname{E}\left[Z\overline{W}\right] - \operatorname{E}[Z]\operatorname{E}\left[\overline{W}\right].

Notice the complex conjugation of the second factor in the definition. A related ''pseudo-covariance'' can also be defined.


Discrete random variables

If the (real) random variable pair (X, Y) can take on the values (x_i, y_i) for i = 1, \ldots, n, with equal probabilities p_i = 1/n, then the covariance can be equivalently written in terms of the means \operatorname{E}[X] and \operatorname{E}[Y] as

:\operatorname{cov}(X, Y) = \frac{1}{n}\sum_{i=1}^n (x_i - E(X))(y_i - E(Y)).

It can also be equivalently expressed, without directly referring to the means, as

:\operatorname{cov}(X, Y) = \frac{1}{n^2} \sum_{i=1}^n \sum_{j=1}^n \frac{1}{2}(x_i - x_j)(y_i - y_j) = \frac{1}{n^2} \sum_i \sum_{j>i} (x_i - x_j)(y_i - y_j).

More generally, if there are n possible realizations of (X, Y), namely (x_i, y_i), but with possibly unequal probabilities p_i for i = 1, \ldots, n, then the covariance is

:\operatorname{cov}(X, Y) = \sum_{i=1}^n p_i (x_i - E(X))(y_i - E(Y)).
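To make the equivalence of these expressions concrete, the following sketch (an illustration of my own using NumPy; the array names are arbitrary) evaluates the mean-based and pairwise-difference forms on the same data and confirms they agree:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=12)  # n equally likely realizations of X
y = rng.normal(size=12)  # paired realizations of Y
n = len(x)

# Mean-based form: (1/n) * sum_i (x_i - E(X)) * (y_i - E(Y))
cov_means = np.mean((x - x.mean()) * (y - y.mean()))

# Pairwise-difference form: (1/n^2) * sum_{i<j} (x_i - x_j)(y_i - y_j);
# summing over ALL ordered pairs counts each unordered pair twice.
dx = x[:, None] - x[None, :]
dy = y[:, None] - y[None, :]
cov_pairs = (dx * dy).sum() / (2 * n**2)

assert np.isclose(cov_means, cov_pairs)
```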


Example

Suppose that X and Y have the following joint probability mass function, in which the six central cells give the discrete joint probabilities f(x, y) of the six hypothetical realizations (x, y) \in S = \{(5, 8), (6, 8), (7, 8), (5, 9), (6, 9), (7, 9)\}:

f(x, y)    x = 5    x = 6    x = 7
y = 8      0        0.4      0.1
y = 9      0.3      0        0.2

X can take on three values (5, 6 and 7) while Y can take on two (8 and 9). Their means are \mu_X = 5(0.3) + 6(0.4) + 7(0.1 + 0.2) = 6 and \mu_Y = 8(0.4 + 0.1) + 9(0.3 + 0.2) = 8.5. Then,

:\begin{align} \operatorname{cov}(X, Y) = \sigma_{XY} = &\sum_{(x, y) \in S} f(x, y)\left(x - \mu_X\right)\left(y - \mu_Y\right) \\ = &(0)(5 - 6)(8 - 8.5) + (0.4)(6 - 6)(8 - 8.5) + (0.1)(7 - 6)(8 - 8.5) + {} \\ &(0.3)(5 - 6)(9 - 8.5) + (0)(6 - 6)(9 - 8.5) + (0.2)(7 - 6)(9 - 8.5) \\ = &-0.1. \end{align}
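As a quick check, the covariance can be computed directly from the table above (a minimal sketch; the dictionary encoding of the pmf is just one convenient layout):

```python
# Joint pmf from the table above: keys are (x, y), values are f(x, y)
pmf = {(5, 8): 0.0, (6, 8): 0.4, (7, 8): 0.1,
       (5, 9): 0.3, (6, 9): 0.0, (7, 9): 0.2}

mu_x = sum(p * x for (x, y), p in pmf.items())  # 6.0
mu_y = sum(p * y for (x, y), p in pmf.items())  # 8.5
cov = sum(p * (x - mu_x) * (y - mu_y) for (x, y), p in pmf.items())
print(cov)  # ≈ -0.1
```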


Properties


Covariance with itself

The variance is a special case of the covariance in which the two variables are identical (that is, in which one variable always takes the same value as the other):

:\operatorname{cov}(X, X) = \operatorname{var}(X) \equiv \sigma^2(X) \equiv \sigma_X^2.


Covariance of linear combinations

If X, Y, W, and V are real-valued random variables and a, b, c, d are real-valued constants, then the following facts are a consequence of the definition of covariance:

:\begin{align} \operatorname{cov}(X, a) &= 0 \\ \operatorname{cov}(X, X) &= \operatorname{var}(X) \\ \operatorname{cov}(X, Y) &= \operatorname{cov}(Y, X) \\ \operatorname{cov}(aX, bY) &= ab\,\operatorname{cov}(X, Y) \\ \operatorname{cov}(X + a, Y + b) &= \operatorname{cov}(X, Y) \\ \operatorname{cov}(aX + bY, cW + dV) &= ac\,\operatorname{cov}(X, W) + ad\,\operatorname{cov}(X, V) + bc\,\operatorname{cov}(Y, W) + bd\,\operatorname{cov}(Y, V) \end{align}

For a sequence X_1, \ldots, X_n of real-valued random variables and constants a_1, \ldots, a_n, we have

:\operatorname{var}\left(\sum_{i=1}^n a_i X_i\right) = \sum_{i=1}^n a_i^2 \sigma^2(X_i) + 2\sum_{i<j} a_i a_j \operatorname{cov}(X_i, X_j) = \sum_{i, j} a_i a_j \operatorname{cov}(X_i, X_j).
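The variance-of-a-sum identity is easy to verify numerically; the sketch below (assuming NumPy, with made-up coefficients) compares the direct sample variance of a linear combination against the quadratic form over the sample covariance matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(3, 10_000))   # three variables, 10k draws each
X[1] += 0.5 * X[0]                 # induce some covariance between them
a = np.array([2.0, -1.0, 0.5])     # arbitrary constants a_1, ..., a_n

S = np.cov(X)                      # sample covariance matrix [cov(X_i, X_j)]
lhs = np.var(a @ X, ddof=1)        # var(sum_i a_i X_i) from the samples
rhs = a @ S @ a                    # sum_{i,j} a_i a_j cov(X_i, X_j)

assert np.isclose(lhs, rhs)        # identity holds exactly for sample quantities
```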


Hoeffding's covariance identity

A useful identity to compute the covariance between two random variables X, Y is Hoeffding's covariance identity:

:\operatorname{cov}(X, Y) = \int_{\mathbb{R}} \int_{\mathbb{R}} \left(F_{XY}(x, y) - F_X(x)F_Y(y)\right) \,dx \,dy

where F_{XY}(x, y) is the joint cumulative distribution function of the random vector (X, Y) and F_X(x), F_Y(y) are the marginals.
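The identity can be sanity-checked numerically. The sketch below (an illustration of my own, not a standard routine) integrates F_{XY} − F_X F_Y on a fine grid for the discrete joint distribution from the example above, whose covariance is −0.1; the integrand vanishes outside the rectangle spanned by the support:

```python
import numpy as np

# Joint pmf from the example section: support points and probabilities
pts = np.array([(5, 8), (6, 8), (7, 8), (5, 9), (6, 9), (7, 9)], dtype=float)
prob = np.array([0.0, 0.4, 0.1, 0.3, 0.0, 0.2])

def F_xy(x, y):  # joint CDF at (x, y)
    return prob[(pts[:, 0] <= x) & (pts[:, 1] <= y)].sum()

def F_x(x):      # marginal CDF of X
    return prob[pts[:, 0] <= x].sum()

def F_y(y):      # marginal CDF of Y
    return prob[pts[:, 1] <= y].sum()

# Integrand is nonzero only on [5, 7] x [8, 9]; evaluate at cell midpoints
h = 0.01
xs = np.arange(5, 7, h) + h / 2
ys = np.arange(8, 9, h) + h / 2
integral = sum(F_xy(x, y) - F_x(x) * F_y(y) for x in xs for y in ys) * h * h
print(integral)  # ≈ -0.1, matching the direct computation
```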


Uncorrelatedness and independence

Random variables whose covariance is zero are called uncorrelated. Similarly, the components of random vectors whose covariance matrix is zero in every entry outside the main diagonal are also called uncorrelated.

If X and Y are independent random variables, then their covariance is zero. This follows because under independence,

:\operatorname{E}[XY] = \operatorname{E}[X] \cdot \operatorname{E}[Y].

The converse, however, is not generally true. For example, let X be uniformly distributed in [-1, 1] and let Y = X^2. Clearly, X and Y are not independent, but

:\begin{align} \operatorname{cov}(X, Y) &= \operatorname{cov}\left(X, X^2\right) \\ &= \operatorname{E}\left[X \cdot X^2\right] - \operatorname{E}[X] \cdot \operatorname{E}\left[X^2\right] \\ &= \operatorname{E}\left[X^3\right] - \operatorname{E}[X]\operatorname{E}\left[X^2\right] \\ &= 0 - 0 \cdot \operatorname{E}\left[X^2\right] \\ &= 0. \end{align}

In this case, the relationship between Y and X is non-linear, while correlation and covariance are measures of linear dependence between two random variables. This example shows that if two random variables are uncorrelated, that does not in general imply that they are independent. However, if two variables are jointly normally distributed (but not if they are merely individually normally distributed), uncorrelatedness ''does'' imply independence.
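A quick simulation makes the example tangible (a sketch assuming NumPy; sampling error means the sample covariance is only approximately zero):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, size=1_000_000)  # X uniform on [-1, 1]
y = x**2                                # Y = X^2: fully determined by X ...

print(np.cov(x, y)[0, 1])  # ... yet the covariance is ≈ 0 (sampling noise aside)
```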


Relationship to inner products

Many of the properties of covariance can be extracted elegantly by observing that it satisfies similar properties to those of an inner product:

# bilinear: for constants a and b and random variables X, Y, Z, \operatorname{cov}(aX + bY, Z) = a\,\operatorname{cov}(X, Z) + b\,\operatorname{cov}(Y, Z)
# symmetric: \operatorname{cov}(X, Y) = \operatorname{cov}(Y, X)
# positive semi-definite: \sigma^2(X) = \operatorname{cov}(X, X) \ge 0 for all random variables X, and \operatorname{cov}(X, X) = 0 implies that X is constant almost surely.

In fact these properties imply that the covariance defines an inner product over the quotient vector space obtained by taking the subspace of random variables with finite second moment and identifying any two that differ by a constant. (This identification turns the positive semi-definiteness above into positive definiteness.) That quotient vector space is isomorphic to the subspace of random variables with finite second moment and mean zero; on that subspace, the covariance is exactly the L2 inner product of real-valued functions on the sample space.

As a result, for random variables with finite variance, the inequality

:\left|\operatorname{cov}(X, Y)\right| \le \sqrt{\operatorname{var}(X)\operatorname{var}(Y)}

holds via the Cauchy–Schwarz inequality.

Proof: If \sigma^2(Y) = 0, then it holds trivially. Otherwise, let the random variable

:Z = X - \frac{\operatorname{cov}(X, Y)}{\sigma^2(Y)} Y.

Then we have

:\begin{align} 0 \le \sigma^2(Z) &= \operatorname{cov}\left(X - \frac{\operatorname{cov}(X, Y)}{\sigma^2(Y)} Y,\; X - \frac{\operatorname{cov}(X, Y)}{\sigma^2(Y)} Y\right) \\ &= \sigma^2(X) - \frac{(\operatorname{cov}(X, Y))^2}{\sigma^2(Y)}. \end{align}
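The inequality itself is easy to spot-check on data (a minimal sketch with arbitrary random draws; it holds exactly for sample quantities as well, since the sample covariance is an inner product on centered data vectors):

```python
import numpy as np

rng = np.random.default_rng(3)
x, y = rng.normal(size=(2, 1000))
x += 0.7 * y  # correlate the two series somewhat

cov_xy = np.cov(x, y)[0, 1]
assert abs(cov_xy) <= np.sqrt(np.var(x, ddof=1) * np.var(y, ddof=1))
```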


Calculating the sample covariance

The sample covariances among K variables based on N observations of each, drawn from an otherwise unobserved population, are given by the K \times K matrix \overline{\mathbf{q}} = \left[q_{jk}\right] with the entries

:q_{jk} = \frac{1}{N - 1}\sum_{i=1}^N \left(X_{ij} - \bar{X}_j\right)\left(X_{ik} - \bar{X}_k\right),

which is an estimate of the covariance between variable j and variable k.

The sample mean and the sample covariance matrix are unbiased estimates of the mean and the covariance matrix of the random vector \mathbf{X}, a vector whose jth element (j = 1, \ldots, K) is one of the random variables. The reason the sample covariance matrix has N - 1 in the denominator rather than N is essentially that the population mean \operatorname{E}(\mathbf{X}) is not known and is replaced by the sample mean \bar{\mathbf{X}}. If the population mean \operatorname{E}(\mathbf{X}) is known, the analogous unbiased estimate is given by

:q_{jk} = \frac{1}{N} \sum_{i=1}^N \left(X_{ij} - \operatorname{E}\left(X_j\right)\right)\left(X_{ik} - \operatorname{E}\left(X_k\right)\right).
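In code, the N − 1 denominator is what most library routines use by default; a from-scratch sketch (assuming NumPy; `data` here is a simulated N × K observation matrix) can be checked against `np.cov`:

```python
import numpy as np

rng = np.random.default_rng(4)
data = rng.normal(size=(500, 3))             # N = 500 observations, K = 3 variables

centered = data - data.mean(axis=0)          # subtract each column's sample mean
Q = centered.T @ centered / (len(data) - 1)  # K x K matrix with entries q_jk

assert np.allclose(Q, np.cov(data, rowvar=False))  # np.cov also uses ddof=1
```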


Generalizations


Auto-covariance matrix of real random vectors

For a vector \mathbf{X} = \begin{bmatrix} X_1 & X_2 & \dots & X_m \end{bmatrix}^\mathrm{T} of m jointly distributed random variables with finite second moments, its auto-covariance matrix (also known as the variance–covariance matrix or simply the covariance matrix) \operatorname{K}_{\mathbf{XX}} (also denoted by \Sigma(\mathbf{X}) or \operatorname{cov}(\mathbf{X}, \mathbf{X})) is defined as

:\begin{align} \operatorname{K}_{\mathbf{XX}} = \operatorname{cov}(\mathbf{X}, \mathbf{X}) &= \operatorname{E}\left[(\mathbf{X} - \operatorname{E}[\mathbf{X}])(\mathbf{X} - \operatorname{E}[\mathbf{X}])^\mathrm{T}\right] \\ &= \operatorname{E}\left[\mathbf{X}\mathbf{X}^\mathrm{T}\right] - \operatorname{E}[\mathbf{X}]\operatorname{E}[\mathbf{X}]^\mathrm{T}. \end{align}

Let \mathbf{X} be a random vector with covariance matrix \Sigma, and let \mathbf{A} be a matrix that can act on \mathbf{X} on the left. The covariance matrix of the matrix-vector product \mathbf{AX} is:

:\begin{align} \operatorname{cov}(\mathbf{AX}, \mathbf{AX}) &= \operatorname{E}\left[\mathbf{AX}(\mathbf{AX})^\mathrm{T}\right] - \operatorname{E}[\mathbf{AX}]\operatorname{E}\left[(\mathbf{AX})^\mathrm{T}\right] \\ &= \operatorname{E}\left[\mathbf{A}\mathbf{X}\mathbf{X}^\mathrm{T}\mathbf{A}^\mathrm{T}\right] - \operatorname{E}[\mathbf{A}\mathbf{X}]\operatorname{E}\left[\mathbf{X}^\mathrm{T}\mathbf{A}^\mathrm{T}\right] \\ &= \mathbf{A}\operatorname{E}\left[\mathbf{X}\mathbf{X}^\mathrm{T}\right]\mathbf{A}^\mathrm{T} - \mathbf{A}\operatorname{E}[\mathbf{X}]\operatorname{E}\left[\mathbf{X}^\mathrm{T}\right]\mathbf{A}^\mathrm{T} \\ &= \mathbf{A}\left(\operatorname{E}\left[\mathbf{X}\mathbf{X}^\mathrm{T}\right] - \operatorname{E}[\mathbf{X}]\operatorname{E}\left[\mathbf{X}^\mathrm{T}\right]\right)\mathbf{A}^\mathrm{T} \\ &= \mathbf{A}\Sigma\mathbf{A}^\mathrm{T}. \end{align}

This is a direct result of the linearity of expectation and is useful when applying a linear transformation, such as a whitening transformation, to a vector.
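The identity cov(AX) = AΣAᵀ can be confirmed empirically; the sketch below (with an arbitrary matrix A of my choosing) compares the sample covariance of transformed draws against the transformed sample covariance:

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.multivariate_normal(mean=[0, 0, 0],
                            cov=[[2, 1, 0], [1, 3, 1], [0, 1, 1]],
                            size=200_000).T  # shape (3, N)
A = np.array([[1.0, 2.0, 0.0],
              [0.0, -1.0, 1.0]])             # maps R^3 -> R^2

Sigma_hat = np.cov(X)                        # sample estimate of Sigma
lhs = np.cov(A @ X)                          # covariance of the transformed vector
rhs = A @ Sigma_hat @ A.T

assert np.allclose(lhs, rhs)                 # exact for sample covariances too
```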


Cross-covariance matrix of real random vectors

For real random vectors \mathbf{X} \in \mathbb{R}^m and \mathbf{Y} \in \mathbb{R}^n, the m \times n cross-covariance matrix is equal to

:\operatorname{cov}(\mathbf{X}, \mathbf{Y}) = \operatorname{E}\left[(\mathbf{X} - \operatorname{E}[\mathbf{X}])(\mathbf{Y} - \operatorname{E}[\mathbf{Y}])^\mathrm{T}\right] = \operatorname{E}\left[\mathbf{X}\mathbf{Y}^\mathrm{T}\right] - \operatorname{E}[\mathbf{X}]\operatorname{E}[\mathbf{Y}]^\mathrm{T},

where \mathbf{Y}^\mathrm{T} is the transpose of the vector (or matrix) \mathbf{Y}. The (i, j)-th element of this matrix is equal to the covariance \operatorname{cov}(X_i, Y_j) between the i-th scalar component of \mathbf{X} and the j-th scalar component of \mathbf{Y}. In particular, \operatorname{cov}(\mathbf{Y}, \mathbf{X}) is the transpose of \operatorname{cov}(\mathbf{X}, \mathbf{Y}).


Cross-covariance sesquilinear form of random vectors in a real or complex Hilbert space

More generally, let H_1 = (H_1, \langle\,\cdot,\cdot\,\rangle_1) and H_2 = (H_2, \langle\,\cdot,\cdot\,\rangle_2) be Hilbert spaces over \mathbb{R} or \mathbb{C} with \langle\,\cdot,\cdot\,\rangle antilinear in the first variable, and let \mathbf{X}, \mathbf{Y} be H_1- resp. H_2-valued random variables. Then the covariance of \mathbf{X} and \mathbf{Y} is the sesquilinear form on H_1 \times H_2 (antilinear in the first variable) given by

:\begin{align} \operatorname{K}_{\mathbf{XY}}(h_1, h_2) = \operatorname{cov}(\mathbf{X}, \mathbf{Y})(h_1, h_2) &= \operatorname{E}\left[\langle h_1, \mathbf{X} - \operatorname{E}[\mathbf{X}]\rangle_1 \langle \mathbf{Y} - \operatorname{E}[\mathbf{Y}], h_2\rangle_2\right] \\ &= \operatorname{E}\left[\langle h_1, \mathbf{X}\rangle_1 \langle \mathbf{Y}, h_2\rangle_2\right] - \operatorname{E}\left[\langle h_1, \mathbf{X}\rangle_1\right]\operatorname{E}\left[\langle \mathbf{Y}, h_2\rangle_2\right] \\ &= \langle h_1, \operatorname{E}\left[(\mathbf{X} - \operatorname{E}[\mathbf{X}])(\mathbf{Y} - \operatorname{E}[\mathbf{Y}])^\dagger\right] h_2\rangle_1 \\ &= \langle h_1, \left(\operatorname{E}\left[\mathbf{X}\mathbf{Y}^\dagger\right] - \operatorname{E}[\mathbf{X}]\operatorname{E}[\mathbf{Y}]^\dagger\right) h_2\rangle_1. \end{align}


Numerical computation

When \operatorname{E}[XY] \approx \operatorname{E}[X]\operatorname{E}[Y], the equation \operatorname{cov}(X, Y) = \operatorname{E}\left[XY\right] - \operatorname{E}\left[X\right]\operatorname{E}\left[Y\right] is prone to catastrophic cancellation if \operatorname{E}\left[XY\right] and \operatorname{E}\left[X\right]\operatorname{E}\left[Y\right] are not computed exactly, and thus it should be avoided in computer programs when the data has not been centered beforehand. Numerically stable algorithms should be preferred in this case.
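The effect is easy to reproduce: shifting the data far from the origin leaves the true covariance unchanged but ruins the naive E[XY] − E[X]E[Y] formula in floating point. A sketch (assuming NumPy; the shift of 10⁹ is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(6)
x = rng.normal(size=10_000) + 1e9  # large offset; covariance unaffected in theory
y = rng.normal(size=10_000) + 1e9

naive = np.mean(x * y) - np.mean(x) * np.mean(y)   # cancellation-prone form
stable = np.mean((x - x.mean()) * (y - y.mean()))  # two-pass, centered form

print(naive, stable)  # naive is dominated by rounding error; stable is ≈ 0
```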


Comments

The covariance is sometimes called a measure of "linear dependence" between the two random variables. That does not mean the same thing as in the context of linear algebra (see linear dependence). When the covariance is normalized, one obtains the Pearson correlation coefficient, which gives the goodness of the fit for the best possible linear function describing the relation between the variables. In this sense covariance is a linear gauge of dependence.


Applications


In genetics and molecular biology

Covariance is an important measure in biology. Certain sequences of DNA are conserved more than others among species, and thus to study secondary and tertiary structures of proteins, or of RNA structures, sequences are compared in closely related species. If sequence changes are found, or no changes at all are found, in noncoding RNA (such as microRNA), those sequences are inferred to be necessary for common structural motifs, such as an RNA loop.

In genetics, covariance serves as a basis for the computation of the Genetic Relationship Matrix (GRM) (also known as the kinship matrix), enabling inference on population structure from samples with no known close relatives, as well as inference on the estimation of heritability of complex traits (see the sketch at the end of this subsection).

In the theory of evolution and natural selection, the Price equation describes how a genetic trait changes in frequency over time. The equation uses a covariance between a trait and fitness to give a mathematical description of evolution and natural selection. It provides a way to understand the effects that gene transmission and natural selection have on the proportion of genes within each new generation of a population. The Price equation was derived by George R. Price to re-derive W.D. Hamilton's work on kin selection. Examples of the Price equation have been constructed for various evolutionary cases.
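As an illustration of the GRM idea (a simplified sketch following the common standardized-genotype construction; the genotype matrix and allele frequencies here are simulated, not real data):

```python
import numpy as np

rng = np.random.default_rng(7)
n_ind, n_snp = 100, 5_000
freqs = rng.uniform(0.05, 0.95, size=n_snp)      # simulated allele frequencies
G = rng.binomial(2, freqs, size=(n_ind, n_snp))  # genotypes coded 0/1/2

# Standardize each SNP, then average covariances across SNPs
Z = (G - 2 * freqs) / np.sqrt(2 * freqs * (1 - freqs))
GRM = Z @ Z.T / n_snp  # n_ind x n_ind genetic relationship matrix

print(np.diag(GRM).mean())  # ≈ 1 for unrelated individuals under this model
```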


In financial economics

Covariances play a key role in financial economics, especially in modern portfolio theory and in the capital asset pricing model. Covariances among various assets' returns are used to determine, under certain assumptions, the relative amounts of different assets that investors should (in a normative analysis) or are predicted to (in a positive analysis) choose to hold in a context of diversification.


In meteorological and oceanographic data assimilation

The covariance matrix is important in estimating the initial conditions required for running weather forecast models, a procedure known as data assimilation. The 'forecast error covariance matrix' is typically constructed between perturbations around a mean state (either a climatological or ensemble mean). The 'observation error covariance matrix' is constructed to represent the magnitude of combined observational errors (on the diagonal) and the correlated errors between measurements (off the diagonal). This is an example of its widespread application to Kalman filtering and more general state estimation for time-varying systems.


In micrometeorology

The eddy covariance technique is a key atmospheric measurement technique in which the covariance between the instantaneous deviation in vertical wind speed from its mean value and the instantaneous deviation in gas concentration is the basis for calculating vertical turbulent fluxes.
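In its simplest form, the flux estimate is just a sample covariance of two synchronized time series; a toy sketch (all signals simulated, constants arbitrary):

```python
import numpy as np

rng = np.random.default_rng(8)
n = 36_000                                  # e.g. 1 hour of 10 Hz samples
w = rng.normal(0.0, 0.3, size=n)            # vertical wind speed (m/s), mean ~0
c = 400 + 5 * w + rng.normal(0, 1, size=n)  # gas concentration, coupled to w

w_prime = w - w.mean()                      # instantaneous deviations from the mean
c_prime = c - c.mean()
flux = np.mean(w_prime * c_prime)           # vertical turbulent flux ~ cov(w, c)

print(flux)  # ≈ 5 * var(w) ≈ 0.45 with these made-up numbers
```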


In signal processing

The covariance matrix is used to capture the spectral variability of a signal.


In statistics and image processing

The covariance matrix is used in principal component analysis to reduce feature dimensionality in data preprocessing.
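A minimal PCA-via-covariance sketch (assuming NumPy; the 2-component truncation is an arbitrary choice for illustration):

```python
import numpy as np

rng = np.random.default_rng(9)
data = rng.normal(size=(300, 5))      # 300 samples, 5 features
data[:, 1] += 2 * data[:, 0]          # add some correlated structure to find

X = data - data.mean(axis=0)          # center the features
C = np.cov(X, rowvar=False)           # 5 x 5 covariance matrix
eigvals, eigvecs = np.linalg.eigh(C)  # eigh: C is symmetric, eigvals ascending

# Project onto the top-2 principal components (largest eigenvalues come last)
components = eigvecs[:, ::-1][:, :2]
reduced = X @ components              # 300 x 2 reduced representation
```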


See also

* Algorithms for calculating covariance
* Analysis of covariance
* Autocovariance
* Covariance function
* Covariance operator
* Distance covariance, or Brownian covariance
* Law of total covariance
* Propagation of uncertainty

