In probability theory and statistics, the mathematical concepts of covariance and correlation are very similar. Both describe the degree to which two random variables or sets of random variables tend to deviate from their expected values in similar ways.

If ''X'' and ''Y'' are two random variables, with means (expected values) ''μX'' and ''μY'' and standard deviations ''σX'' and ''σY'', respectively, then their covariance and correlation are as follows:

; covariance
:\operatorname{cov}_{XY} = \sigma_{XY} = E[(X-\mu_X)\,(Y-\mu_Y)]
; correlation
:\operatorname{corr}_{XY} = \rho_{XY} = E[(X-\mu_X)\,(Y-\mu_Y)] / (\sigma_X \sigma_Y)\,,

so that \rho_{XY} = \sigma_{XY} / (\sigma_X \sigma_Y), where ''E'' is the expected value operator. Notably, correlation is dimensionless while covariance is in units obtained by multiplying the units of the two variables.

If ''Y'' always takes on the same values as ''X'', we have the covariance of a variable with itself (i.e. \sigma_{XX}), which is called the variance and is more commonly denoted as \sigma_X^2, the square of the standard deviation. The ''correlation'' of a variable with itself is always 1 (except in the degenerate case where the two variances are zero because ''X'' always takes on the same single value, in which case the correlation does not exist, since its computation would involve division by 0). More generally, the correlation between two variables is 1 (or −1) if one of them always takes on a value that is given exactly by a linear function of the other with, respectively, a positive (or negative) slope.

Although the values of the theoretical covariances and correlations are linked in the above way, the probability distributions of sample estimates of these quantities are not linked in any simple way, and they generally need to be treated separately.


Multiple random variables

With more than one random variable, the variables can be stacked into a random vector whose ''i''th element is the ''i''th random variable. The variances and covariances can then be placed in a covariance matrix, in which the (''i'', ''j'') element is the covariance between the ''i''th random variable and the ''j''th one. Likewise, the correlations can be placed in a correlation matrix.
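As a sketch of this construction (NumPy's `np.cov` and `np.corrcoef` expect each variable as a row; the sample data here is illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stack three random variables into a random vector: each row of `data`
# is one variable, each column one joint observation.
data = rng.normal(size=(3, 500))
data[1] += 0.8 * data[0]  # make two of the variables dependent

# Covariance matrix: element (i, j) is the covariance of variables i and j;
# the diagonal holds the variances.
cov = np.cov(data)

# Correlation matrix: element (i, j) is the correlation of variables i and j;
# the diagonal is all 1s.
corr = np.corrcoef(data)

# The two matrices are linked elementwise by
# rho_ij = sigma_ij / (sigma_i * sigma_j).
stds = np.sqrt(np.diag(cov))
assert np.allclose(corr, cov / np.outer(stds, stds))
```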


Time series analysis

In the case of a time series which is stationary in the wide sense, both the means and variances are constant over time (E(''Xn+m'') = E(''Xn'') = ''μX'' and var(''Xn+m'') = var(''Xn''), and likewise for the variable ''Y''). In this case the cross-covariance and cross-correlation are functions of the time difference:

; cross-covariance
:\sigma_{XY}(m) = E[(X_n-\mu_X)\,(Y_{n+m}-\mu_Y)]
; cross-correlation
:\rho_{XY}(m) = E[(X_n-\mu_X)\,(Y_{n+m}-\mu_Y)] / (\sigma_X \sigma_Y).

If ''Y'' is the same variable as ''X'', the above expressions are called the ''autocovariance'' and ''autocorrelation'':

; autocovariance
:\sigma_{XX}(m) = E[(X_n-\mu_X)\,(X_{n+m}-\mu_X)]
; autocorrelation
:\rho_{XX}(m) = E[(X_n-\mu_X)\,(X_{n+m}-\mu_X)] / \sigma_X^2.
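These lag-dependent quantities can be estimated directly from a sample path. The sketch below (the AR(1) example process and the helper functions are illustrative assumptions, not part of the definitions) estimates the autocovariance and autocorrelation at lag ''m'' by averaging products of deviations ''m'' steps apart:

```python
import numpy as np

rng = np.random.default_rng(2)

# An AR(1) process x[t] = 0.6 * x[t-1] + noise is wide-sense stationary,
# so its autocovariance depends only on the lag m, not on absolute time n.
n = 5000
x = np.empty(n)
x[0] = rng.normal()
for t in range(1, n):
    x[t] = 0.6 * x[t - 1] + rng.normal()

def autocovariance(x, m):
    """Sample estimate of sigma_XX(m) = E[(X_n - mu_X)(X_{n+m} - mu_X)]."""
    mu = x.mean()
    return np.mean((x[: len(x) - m] - mu) * (x[m:] - mu))

def autocorrelation(x, m):
    """Sample estimate of rho_XX(m) = sigma_XX(m) / sigma_X^2."""
    return autocovariance(x, m) / autocovariance(x, 0)
```

At lag 0 the autocorrelation is exactly 1, matching the fact that the correlation of a variable with itself is 1; for this AR(1) process the lag-''m'' autocorrelation is approximately 0.6 to the power ''m''.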

