Canonical Correlation Analysis

In statistics, canonical-correlation analysis (CCA), also called canonical variates analysis, is a way of inferring information from cross-covariance matrices. If we have two vectors X = (X_1, \dots, X_n) and Y = (Y_1, \dots, Y_m) of random variables, and there are correlations among the variables, then canonical-correlation analysis will find linear combinations of X and Y which have maximum correlation with each other. T. R. Knapp notes that "virtually all of the commonly encountered parametric tests of significance can be treated as special cases of canonical-correlation analysis, which is the general procedure for investigating the relationships between two sets of variables." The method was first introduced by Harold Hotelling in 1936, although in the context of angles between flats the mathematical concept was published by Jordan in 1875.


Definition

Given two column vectors X = (x_1, \dots, x_n)^T and Y = (y_1, \dots, y_m)^T of random variables with finite second moments, one may define the cross-covariance \Sigma_{XY} = \operatorname{cov}(X, Y) to be the n \times m matrix whose (i, j) entry is the covariance \operatorname{cov}(x_i, y_j). In practice, we would estimate the covariance matrix based on sampled data from X and Y (i.e. from a pair of data matrices). Canonical-correlation analysis seeks vectors a (a \in \mathbb{R}^n) and b (b \in \mathbb{R}^m) such that the random variables a^T X and b^T Y maximize the correlation \rho = \operatorname{corr}(a^T X, b^T Y):
: (a', b') = \underset{(a, b)}{\operatorname{argmax}} \operatorname{corr}(a^T X, b^T Y).
The (scalar) random variables U = a^T X and V = b^T Y are the ''first pair of canonical variables''. Then one seeks vectors maximizing the same correlation subject to the constraint that they are to be uncorrelated with the first pair of canonical variables; this gives the ''second pair of canonical variables''. This procedure may be continued up to \min\{m, n\} times.
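To make the definition concrete, the covariance blocks can be estimated from a pair of data matrices with NumPy. This is a minimal sketch, not a reference implementation; the simulated data and the helper corr are our own:

    import numpy as np

    rng = np.random.default_rng(0)
    N = 500                      # number of paired observations
    X = rng.normal(size=(N, 3))  # rows are samples of X (n = 3)
    Y = X @ rng.normal(size=(3, 2)) + 0.5 * rng.normal(size=(N, 2))  # m = 2

    # Joint sample covariance of (X, Y); its blocks are the Sigma matrices.
    C = np.cov(np.hstack([X, Y]), rowvar=False)
    n, m = 3, 2
    Sxx, Sxy = C[:n, :n], C[:n, n:]
    Syx, Syy = C[n:, :n], C[n:, n:]

    def corr(a, b):
        # Correlation of a^T X with b^T Y under the sample covariance.
        return (a @ Sxy @ b) / np.sqrt((a @ Sxx @ a) * (b @ Syy @ b))

Maximizing corr(a, b) over a and b yields the first canonical pair; a closed-form solution is derived in the next section.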


Computation


Derivation

Let \Sigma_{XY} be the cross-covariance matrix for any pair of (vector-shaped) random variables X and Y. The target function to maximize is
: \rho = \frac{a^T \Sigma_{XY} b}{\sqrt{a^T \Sigma_{XX} a} \, \sqrt{b^T \Sigma_{YY} b}}.
The first step is to define a change of basis and define
: c = \Sigma_{XX}^{1/2} a,
: d = \Sigma_{YY}^{1/2} b.
And thus we have
: \rho = \frac{c^T \Sigma_{XX}^{-1/2} \Sigma_{XY} \Sigma_{YY}^{-1/2} d}{\sqrt{c^T c} \, \sqrt{d^T d}}.
By the Cauchy–Schwarz inequality, we have
: \left(c^T \Sigma_{XX}^{-1/2} \Sigma_{XY} \Sigma_{YY}^{-1/2}\right) (d) \leq \left(c^T \Sigma_{XX}^{-1/2} \Sigma_{XY} \Sigma_{YY}^{-1/2} \Sigma_{YY}^{-1/2} \Sigma_{YX} \Sigma_{XX}^{-1/2} c \right)^{1/2} \left(d^T d \right)^{1/2},
: \rho \leq \frac{\left(c^T \Sigma_{XX}^{-1/2} \Sigma_{XY} \Sigma_{YY}^{-1} \Sigma_{YX} \Sigma_{XX}^{-1/2} c \right)^{1/2}}{\left(c^T c\right)^{1/2}}.
There is equality if the vectors d and \Sigma_{YY}^{-1/2} \Sigma_{YX} \Sigma_{XX}^{-1/2} c are collinear. In addition, the maximum of correlation is attained if c is the eigenvector with the maximum eigenvalue for the matrix \Sigma_{XX}^{-1/2} \Sigma_{XY} \Sigma_{YY}^{-1} \Sigma_{YX} \Sigma_{XX}^{-1/2} (see Rayleigh quotient). The subsequent pairs are found by using eigenvalues of decreasing magnitudes. Orthogonality is guaranteed by the symmetry of the correlation matrices. Another way of viewing this computation is that c and d are the left and right singular vectors of the correlation matrix of X and Y corresponding to the highest singular value.


Solution

The solution is therefore:
* c is an eigenvector of \Sigma_{XX}^{-1/2} \Sigma_{XY} \Sigma_{YY}^{-1} \Sigma_{YX} \Sigma_{XX}^{-1/2},
* d is proportional to \Sigma_{YY}^{-1/2} \Sigma_{YX} \Sigma_{XX}^{-1/2} c.
Reciprocally, there is also:
* d is an eigenvector of \Sigma_{YY}^{-1/2} \Sigma_{YX} \Sigma_{XX}^{-1} \Sigma_{XY} \Sigma_{YY}^{-1/2},
* c is proportional to \Sigma_{XX}^{-1/2} \Sigma_{XY} \Sigma_{YY}^{-1/2} d.
Reversing the change of coordinates, we have that
* a is an eigenvector of \Sigma_{XX}^{-1} \Sigma_{XY} \Sigma_{YY}^{-1} \Sigma_{YX},
* b is proportional to \Sigma_{YY}^{-1} \Sigma_{YX} a;
* b is an eigenvector of \Sigma_{YY}^{-1} \Sigma_{YX} \Sigma_{XX}^{-1} \Sigma_{XY},
* a is proportional to \Sigma_{XX}^{-1} \Sigma_{XY} b.
The canonical variables are defined by:
: U = c^T \Sigma_{XX}^{-1/2} X = a^T X,
: V = d^T \Sigma_{YY}^{-1/2} Y = b^T Y.
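This solution translates directly into code. The following NumPy sketch (the function name and the inverse-square-root helper are our own; it assumes \Sigma_{XX} and \Sigma_{YY} are nonsingular) computes c and d as singular vectors of the whitened cross-covariance and then reverses the change of basis:

    import numpy as np

    def cca_from_cov(Sxx, Sxy, Syy):
        # Canonical correlations and weight vectors from covariance blocks.
        def inv_sqrt(S):
            # Inverse symmetric square root S^{-1/2} via eigendecomposition.
            w, V = np.linalg.eigh(S)
            return V @ np.diag(w ** -0.5) @ V.T

        Kx, Ky = inv_sqrt(Sxx), inv_sqrt(Syy)
        # c and d are the singular vectors of the whitened cross-covariance;
        # the singular values are the canonical correlations.
        U, rho, Vt = np.linalg.svd(Kx @ Sxy @ Ky)
        A = Kx @ U       # columns a_i = Sigma_XX^{-1/2} c_i
        B = Ky @ Vt.T    # columns b_i = Sigma_YY^{-1/2} d_i
        return rho, A, B

With the sample blocks estimated in the Definition section, rho, A, B = cca_from_cov(Sxx, Sxy, Syy) returns the canonical correlations in decreasing order, and one can verify numerically that A[:, 0] is an eigenvector of \Sigma_{XX}^{-1} \Sigma_{XY} \Sigma_{YY}^{-1} \Sigma_{YX} with eigenvalue rho[0]^2.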


Implementation

CCA can be computed using singular value decomposition on a correlation matrix. It is available as a function in
* MATLAB as the function canoncorr (also in Octave),
* R as the standard function cancor and several other packages, including CCA, and CCP for statistical hypothesis testing in canonical correlation analysis,
* SAS as proc cancorr,
* Python in the library scikit-learn, as part of its cross decomposition module, and in statsmodels, as CanCorr,
* SPSS as macro CanCorr shipped with the main software,
* Julia in the MultivariateStats.jl package.

CCA computation using singular value decomposition on a correlation matrix is related to the cosine of the angles between flats. The cosine function is ill-conditioned for small angles, leading to very inaccurate computation of highly correlated principal vectors in finite-precision computer arithmetic. To fix this trouble, alternative algorithms are available in
* SciPy as the linear-algebra function subspace_angles,
* MATLAB as the FileExchange function subspacea.
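For instance, the canonical correlations obtained from scikit-learn can be checked against the cosines of the principal angles returned by SciPy. A sketch with our own simulated data; scikit-learn's CCA estimator is iterative, so its score correlations should match the principal-angle cosines only up to numerical accuracy:

    import numpy as np
    from scipy.linalg import subspace_angles
    from sklearn.cross_decomposition import CCA

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))
    Y = X @ rng.normal(size=(3, 2)) + rng.normal(size=(500, 2))
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)   # center both blocks

    # Sample canonical correlations are the cosines of the principal
    # angles between the column spaces of the centered data matrices.
    print(np.sort(np.cos(subspace_angles(Xc, Yc)))[::-1])

    # scikit-learn's CCA: correlate the paired score columns.
    U, V = CCA(n_components=2).fit(Xc, Yc).transform(Xc, Yc)
    print([float(np.corrcoef(U[:, k], V[:, k])[0, 1]) for k in range(2)])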


Hypothesis testing

Each row can be tested for significance with the following method. Since the correlations are sorted, saying that row i is zero implies all further correlations are also zero. Suppose we have p independent observations in a sample and let \widehat{\rho}_i be the estimated correlation for i = 1, \dots, \min\{m, n\}. For the i-th row, the test statistic is:
: \chi^2 = - \left( p - 1 - \frac{1}{2}(m + n + 1) \right) \ln \prod_{j=i}^{\min\{m, n\}} (1 - \widehat{\rho}_j^2),
which is asymptotically distributed as a chi-squared with (m - i + 1)(n - i + 1) degrees of freedom for large p. Since all the correlations from \min\{m, n\} to p are logically zero (and estimated that way also), the product for the terms after this point is irrelevant. Note that in the small-sample-size limit with p < n + m, we are guaranteed that the top m + n - p correlations will be identically 1, and hence the test is meaningless.
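A sketch of this test in Python (the function name is our own; rho is assumed to hold the estimated canonical correlations in decreasing order, and scipy.stats.chi2 supplies the reference distribution):

    import numpy as np
    from scipy.stats import chi2

    def cca_significance(rho, p, n, m):
        # P-value for H0: the i-th and all later canonical correlations
        # are zero, using the chi-squared statistic given above.
        rho = np.asarray(rho)
        for i in range(1, len(rho) + 1):
            # ln of the product equals the sum of the logs.
            stat = -(p - 1 - (m + n + 1) / 2) * np.sum(np.log(1 - rho[i - 1:] ** 2))
            df = (m - i + 1) * (n - i + 1)
            print(f"row {i}: chi2 = {stat:.2f}, df = {df}, "
                  f"p-value = {chi2.sf(stat, df):.4g}")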


Practical uses

A typical use for canonical correlation in the experimental context is to take two sets of variables and see what is common among the two sets. For example, in psychological testing, one could take two well-established multidimensional personality tests such as the Minnesota Multiphasic Personality Inventory (MMPI-2) and the NEO. By seeing how the MMPI-2 factors relate to the NEO factors, one could gain insight into what dimensions were common between the tests and how much variance was shared. For example, one might find that an extraversion or neuroticism dimension accounted for a substantial amount of shared variance between the two tests. One can also use canonical-correlation analysis to produce a model equation which relates two sets of variables, for example a set of performance measures and a set of explanatory variables, or a set of outputs and a set of inputs. Constraint restrictions can be imposed on such a model to ensure it reflects theoretical requirements or intuitively obvious conditions. This type of model is known as a maximum correlation model. Visualization of the results of canonical correlation is usually through bar plots of the coefficients of the two sets of variables for the pairs of canonical variates showing significant correlation. Some authors suggest that they are best visualized by plotting them as heliographs, a circular format with ray-like bars, with each half representing the two sets of variables.


Examples

Let X = x_1 with zero expected value, i.e., \operatorname{E}(X) = 0.
# If Y = X, i.e., X and Y are perfectly correlated, then, e.g., a = 1 and b = 1, so that the first (and only in this example) pair of canonical variables is U = X and V = Y = X.
# If Y = -X, i.e., X and Y are perfectly anticorrelated, then, e.g., a = 1 and b = -1, so that the first (and only in this example) pair of canonical variables is U = X and V = -Y = X.
We notice that in both cases U = V, which illustrates that the canonical-correlation analysis treats correlated and anticorrelated variables similarly.
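The anticorrelated case is easy to check numerically; a minimal sketch with our own simulated data:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=1000)
    X, Y = x, -x                    # perfectly anticorrelated (case 2)

    U, V = 1.0 * X, -1.0 * Y        # a = 1, b = -1, so V = -Y = X
    print(np.corrcoef(U, V)[0, 1])  # prints 1.0, since U = V exactly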


Connection to principal angles

Assuming that X = (x_1, \dots, x_n)^T and Y = (y_1, \dots, y_m)^T have zero expected values, i.e., \operatorname{E}(X) = \operatorname{E}(Y) = 0, their covariance matrices \Sigma_{XX} = \operatorname{cov}(X, X) = \operatorname{E}[X X^T] and \Sigma_{YY} = \operatorname{cov}(Y, Y) = \operatorname{E}[Y Y^T] can be viewed as Gram matrices in an inner product for the entries of X and Y, correspondingly. In this interpretation, the random variables, entries x_i of X and y_j of Y, are treated as elements of a vector space with an inner product given by the covariance \operatorname{cov}(x_i, y_j); see Covariance#Relationship to inner products. The definition of the canonical variables U and V is then equivalent to the definition of principal vectors for the pair of subspaces spanned by the entries of X and Y with respect to this inner product. The canonical correlations \operatorname{corr}(U, V) are equal to the cosines of the principal angles.
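In the sample setting this correspondence can be checked directly: the sample canonical correlations of centered data equal the singular values of Q_X^T Q_Y, where Q_X and Q_Y are orthonormal bases for the column spaces of the centered data matrices. A sketch with our own simulated data:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(400, 3))
    Y = X @ rng.normal(size=(3, 2)) + rng.normal(size=(400, 2))
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)   # zero-mean, as assumed above

    # Orthonormal bases for the column spaces of the centered data.
    Qx, _ = np.linalg.qr(Xc)
    Qy, _ = np.linalg.qr(Yc)

    # Singular values of Qx^T Qy are the cosines of the principal angles,
    # i.e. the sample canonical correlations.
    print(np.linalg.svd(Qx.T @ Qy, compute_uv=False))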


Whitening and probabilistic canonical correlation analysis

CCA can also be viewed as a special whitening transformation where the random vectors X and Y are simultaneously transformed in such a way that the cross-correlation between the whitened vectors X^{CCA} and Y^{CCA} is diagonal. The canonical correlations are then interpreted as regression coefficients linking X^{CCA} and Y^{CCA} and may also be negative. The regression view of CCA also provides a way to construct a latent variable probabilistic generative model for CCA, with uncorrelated hidden variables representing shared and non-shared variability.
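A sketch of this view (our own construction, assuming nonsingular covariance blocks): whiten each block with its inverse square root, rotate by the singular vectors of the whitened cross-covariance, and check that the within-block covariances are the identity while the cross-covariance is diagonal:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 3))
    Y = X @ rng.normal(size=(3, 2)) + rng.normal(size=(2000, 2))

    C = np.cov(np.hstack([X, Y]), rowvar=False)
    Sxx, Sxy, Syy = C[:3, :3], C[:3, 3:], C[3:, 3:]

    def inv_sqrt(S):
        w, V = np.linalg.eigh(S)
        return V @ np.diag(w ** -0.5) @ V.T

    # CCA whitening: Mahalanobis-whiten each block, then rotate by the
    # singular vectors of the whitened cross-covariance.
    U, rho, Vt = np.linalg.svd(inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy))
    Xw = (X - X.mean(0)) @ inv_sqrt(Sxx) @ U     # X^CCA
    Yw = (Y - Y.mean(0)) @ inv_sqrt(Syy) @ Vt.T  # Y^CCA

    print(np.round(np.cov(Xw, rowvar=False), 2))   # ~ identity matrix
    print(np.round(Xw.T @ Yw / (len(Xw) - 1), 2))  # ~ diagonal, entries rho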


See also

* Generalized canonical correlation
* RV coefficient
* Angles between flats
* Principal component analysis
* Linear discriminant analysis
* Regularized canonical correlation analysis
* Singular-value decomposition
* Partial least squares regression


External links


* Discriminant Correlation Analysis (DCA) (MATLAB)
* A note on the ordinal canonical-correlation analysis of two sets of ranking scores (also provides a FORTRAN program), in Journal of Quantitative Economics 7(2), 2009, pp. 173–199
* Representation-Constrained Canonical Correlation Analysis: A Hybridization of Canonical Correlation and Principal Component Analyses (also provides a FORTRAN program), in Journal of Applied Economic Sciences 4(1), 2009, pp. 115–124