FastICA is an efficient and popular algorithm for independent component analysis invented by Aapo Hyvärinen at Helsinki University of Technology. Like most ICA algorithms, FastICA seeks an orthogonal rotation of prewhitened data, through a fixed-point iteration scheme, that maximizes a measure of non-Gaussianity of the rotated components. Non-Gaussianity serves as a proxy for statistical independence, which is a very strong condition and requires infinite data to verify. FastICA can also be alternatively derived as an approximative Newton iteration.


Algorithm


''Prewhitening'' the data

Let \mathbf{X} := (x_{ij}) \in \mathbb{R}^{N \times M} denote the input data matrix, M the number of columns corresponding with the number of samples of mixed signals and N the number of rows corresponding with the number of independent source signals. The input data matrix \mathbf{X} must be ''prewhitened'', or centered and whitened, before applying the FastICA algorithm to it (a short code sketch of both steps follows the list below).
*''Centering'' the data entails demeaning each component of the input data \mathbf{X}, that is,
:x_{ij} \leftarrow x_{ij} - \frac{1}{M} \sum_{j'} x_{ij'}
:for each i = 1,\ldots,N and j = 1, \ldots, M. After centering, each row of \mathbf{X} has an expected value of 0.
*''Whitening'' the data requires a linear transformation \mathbf{L} : \mathbb{R}^{N \times M} \to \mathbb{R}^{N \times M} of the centered data so that the components of \mathbf{L}(\mathbf{X}) are uncorrelated and have variance one. More precisely, if \mathbf{X} is a centered data matrix, the covariance of \mathbf{X}_{\mathrm{w}} := \mathbf{L}(\mathbf{X}) is the (N \times N)-dimensional identity matrix, that is,
:\mathrm{E}\left\{ \mathbf{X}_{\mathrm{w}} \mathbf{X}_{\mathrm{w}}^T \right\} = \mathbf{I}_N
:A common method for whitening is performing an eigenvalue decomposition on the covariance matrix of the centered data \mathbf{X}, E\left\{ \mathbf{X} \mathbf{X}^T \right\} = \mathbf{E}\mathbf{D}\mathbf{E}^T, where \mathbf{E} is the matrix of eigenvectors and \mathbf{D} is the diagonal matrix of eigenvalues. The whitened data matrix is thus defined by
:\mathbf{X} \leftarrow \mathbf{D}^{-1/2}\mathbf{E}^T\mathbf{X}.
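As a rough illustration only, the centering and whitening steps above might be implemented with NumPy as follows; the function name prewhiten and the use of the sample covariance \mathbf{X}\mathbf{X}^T / M are assumptions made for this sketch, not part of the algorithm's specification.

import numpy as np

def prewhiten(X):
    # X is an (N x M) matrix: N mixed signals, M samples per signal.
    # Centering: subtract the mean of each row so it has expected value 0.
    X = X - X.mean(axis=1, keepdims=True)

    # Eigenvalue decomposition of the sample covariance matrix E{X X^T}.
    cov = X @ X.T / X.shape[1]
    d, E = np.linalg.eigh(cov)      # d: eigenvalues, E: eigenvectors (columns)

    # Whitening transform X <- D^{-1/2} E^T X; the result has
    # (approximately) identity covariance.
    return np.diag(d ** -0.5) @ E.T @ X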


Single component extraction

The iterative algorithm finds the direction for the weight vector \mathbf{w} \in \mathbb{R}^N that maximizes a measure of non-Gaussianity of the projection \mathbf{w}^T \mathbf{X}, with \mathbf{X} \in \mathbb{R}^{N \times M} denoting a prewhitened data matrix as described above. Note that \mathbf{w} is a column vector. To measure non-Gaussianity, FastICA relies on a nonquadratic nonlinear function f(u), its first derivative g(u), and its second derivative g'(u). Hyvärinen states that the functions
:f(u) = \log \cosh (u), \quad g(u) = \tanh (u), \quad \text{and} \quad g'(u) = 1-\tanh^2(u),
are useful for general purposes, while
:f(u) = -e^{-u^2/2}, \quad g(u) = u e^{-u^2/2}, \quad \text{and} \quad g'(u) = (1-u^2) e^{-u^2/2}
may be highly robust. The steps for extracting the weight vector \mathbf{w} for a single component in FastICA are the following (a code sketch follows the list):
# Randomize the initial weight vector \mathbf{w}
# Let \mathbf{w}^+ \leftarrow E\left\{\mathbf{X} g(\mathbf{w}^T \mathbf{X})^T\right\} - E\left\{g'(\mathbf{w}^T \mathbf{X})\right\}\mathbf{w}, where E\left\{\cdot\right\} means averaging over all column-vectors of matrix \mathbf{X}
# Let \mathbf{w} \leftarrow \mathbf{w}^+ / \|\mathbf{w}^+\|
# If not converged, go back to 2
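A minimal NumPy sketch of this single-unit fixed-point iteration, assuming the log cosh nonlinearity, a convergence test on |\mathbf{w}_{\text{new}}^T\mathbf{w}|, and illustrative max_iter and tol parameters (none of these choices are prescribed by the original description), could look like this:

import numpy as np

def fastica_one_unit(X, max_iter=200, tol=1e-6):
    # X is a prewhitened (N x M) data matrix; returns one weight vector w.
    N, M = X.shape
    w = np.random.randn(N)
    w /= np.linalg.norm(w)
    for _ in range(max_iter):
        u = w @ X                              # projections w^T X, shape (M,)
        g = np.tanh(u)                         # g(u) = tanh(u)
        g_prime = 1.0 - np.tanh(u) ** 2        # g'(u) = 1 - tanh^2(u)
        # Fixed-point update: w+ <- E{X g(w^T X)} - E{g'(w^T X)} w
        w_new = (X @ g) / M - g_prime.mean() * w
        w_new /= np.linalg.norm(w_new)
        # Converged when w_new is (anti)parallel to the previous w
        if abs(w_new @ w) > 1.0 - tol:
            return w_new
        w = w_new
    return w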


Multiple component extraction

The single unit iterative algorithm estimates only one weight vector, which extracts a single component. Estimating additional components that are mutually "independent" requires repeating the algorithm to obtain linearly independent projection vectors; note that the notion of independence here refers to maximizing non-Gaussianity in the estimated components. Hyvärinen provides several ways of extracting multiple components, with the simplest being the following (a code sketch follows the pseudocode). Here, \mathbf{1} is a column vector of 1's of dimension M.

Algorithm FastICA
:Input: C Number of desired components
:Input: \mathbf{X} \in \mathbb{R}^{N \times M} Prewhitened matrix, where each column represents an N-dimensional sample, where C \leq N
:Output: \mathbf{W} \in \mathbb{R}^{N \times C} Un-mixing matrix where each column projects \mathbf{X} onto an independent component
:Output: \mathbf{S} \in \mathbb{R}^{C \times M} Independent components matrix, with M columns each representing a sample with C dimensions

 for p in 1 to C:
     \mathbf{w_p} \leftarrow Random vector of length N
     while \mathbf{w_p} changes:
         \mathbf{w_p} \leftarrow \frac{1}{M}\mathbf{X} g(\mathbf{w_p}^T \mathbf{X})^T - \frac{1}{M}g'(\mathbf{w_p}^T\mathbf{X})\mathbf{1} \mathbf{w_p}
         \mathbf{w_p} \leftarrow \mathbf{w_p} - \sum_{j=1}^{p-1} (\mathbf{w_p}^T\mathbf{w_j})\mathbf{w_j}
         \mathbf{w_p} \leftarrow \frac{\mathbf{w_p}}{\|\mathbf{w_p}\|}

output \mathbf{W} \leftarrow \begin{bmatrix} \mathbf{w_1}, \dots, \mathbf{w_C} \end{bmatrix}
output \mathbf{S} \leftarrow \mathbf{W}^T\mathbf{X}
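The deflation loop above translates fairly directly into NumPy. The sketch below is one possible realization, assuming prewhitened input, the tanh nonlinearity, and the same illustrative convergence test and parameters as in the single-unit sketch; it is not a reference implementation.

import numpy as np

def fastica(X, C, max_iter=200, tol=1e-6):
    # X is a prewhitened (N x M) matrix; C is the number of components, C <= N.
    N, M = X.shape
    W = np.zeros((N, C))
    for p in range(C):
        w = np.random.randn(N)
        w /= np.linalg.norm(w)
        for _ in range(max_iter):
            u = w @ X
            # Fixed-point update with g(u) = tanh(u), g'(u) = 1 - tanh^2(u)
            w_new = (X @ np.tanh(u)) / M - (1.0 - np.tanh(u) ** 2).mean() * w
            # Deflation: subtract projections onto previously found vectors
            w_new -= W[:, :p] @ (W[:, :p].T @ w_new)
            w_new /= np.linalg.norm(w_new)
            if abs(w_new @ w) > 1.0 - tol:
                w = w_new
                break
            w = w_new
        W[:, p] = w
    S = W.T @ X    # independent components, one per row
    return W, S

Combined with the hypothetical prewhiten sketch above, a call might look like W, S = fastica(prewhiten(X_raw), C=3).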


See also

* Unsupervised learning
* Machine learning
* The IT++ library features a FastICA implementation in C++
* Infomax




External links


* FastICA in Python
* FastICA package for Matlab or Octave
* FastICA in the R programming language
* FastICA in Java on SourceForge
* FastICA in Java in RapidMiner
* FastICA in Matlab

