An eigenface is the name given to a set of eigenvectors when used in the computer vision problem of human face recognition. The approach of using eigenfaces for recognition was developed by Sirovich and Kirby and used by Matthew Turk and Alex Pentland in face classification. The eigenvectors are derived from the covariance matrix of the probability distribution over the high-dimensional vector space of face images. The eigenfaces themselves form a basis set of all images used to construct the covariance matrix. This produces dimension reduction by allowing the smaller set of basis images to represent the original training images. Classification can be achieved by comparing how faces are represented by the basis set.
History
The eigenface approach began with a search for a low-dimensional representation of face images. Sirovich and Kirby showed that principal component analysis could be used on a collection of face images to form a set of basis features.
These basis images, known as eigenpictures, could be linearly combined to reconstruct images in the original training set. If the training set consists of ''M'' images, principal component analysis could form a basis set of ''N'' images, where ''N'' < ''M''. The reconstruction error is reduced by increasing the number of eigenpictures; however, the number needed is always chosen less than ''M''. For example, if ''N'' eigenfaces are generated for a training set of ''M'' face images, each face image can be expressed in terms of "proportions" of all ''N'' eigenfaces: Face image 1 = (23% of E_1) + (2% of E_2) + (51% of E_3) + ... + (1% of E_N).
In 1991 M. Turk and A. Pentland expanded these results and presented the eigenface method of face recognition.
In addition to designing a system for automated face recognition using eigenfaces, they showed a way of calculating the eigenvectors of a covariance matrix such that computers of the time could perform eigen-decomposition on a large number of face images. Face images usually occupy a high-dimensional space, and conventional principal component analysis was intractable on such data sets. Turk and Pentland's paper demonstrated ways to extract the eigenvectors based on matrices sized by the number of images rather than the number of pixels.
Once established, the eigenface method was expanded to include methods of preprocessing to improve accuracy. Multiple manifold approaches were also used to build sets of eigenfaces for different subjects and different features, such as the eyes.
Generation
A set of eigenfaces can be generated by performing a mathematical process called principal component analysis (PCA) on a large set of images depicting different human faces. Informally, eigenfaces can be considered a set of "standardized face ingredients", derived from statistical analysis of many pictures of faces. Any human face can be considered to be a combination of these standard faces. For example, one's face might be composed of the average face plus 10% from eigenface 1, 55% from eigenface 2, and even −3% from eigenface 3. Remarkably, it does not take many eigenfaces combined together to achieve a fair approximation of most faces. Also, because a person's face is not stored as a digital photograph but as a list of values (one value for each eigenface in the database used), much less space is taken for each person's face.
The eigenfaces that are created will appear as light and dark areas arranged in a specific pattern. This pattern is how different features of a face are singled out to be evaluated and scored. There will be a pattern to evaluate symmetry, whether there is any style of facial hair, where the hairline is, or an evaluation of the size of the nose or mouth. Other eigenfaces have patterns that are less simple to identify, and the image of the eigenface may look very little like a face.
The technique used in creating eigenfaces and using them for recognition is also used outside of face recognition: handwriting recognition, lip reading, voice recognition, sign language/hand gesture interpretation, and medical imaging analysis. Therefore, some do not use the term eigenface, but prefer to use 'eigenimage'.
Practical implementation
To create a set of eigenfaces, one must:
# Prepare a training set of face images. The pictures constituting the training set should have been taken under the same lighting conditions, and must be normalized to have the eyes and mouths aligned across all images. They must also be all resampled to a common pixel resolution (''r'' × ''c''). Each image is treated as one vector, simply by concatenating the rows of pixels in the original image, resulting in a single column with ''r'' × ''c'' elements. For this implementation, it is assumed that all images of the training set are stored in a single matrix T, where each column of the matrix is an image.
# Subtract the mean. The average image a has to be calculated and then subtracted from each original image in T.
# Calculate the eigenvectors and eigenvalues of the covariance matrix S. Each eigenvector has the same dimensionality (number of components) as the original images, and thus can itself be seen as an image. The eigenvectors of this covariance matrix are therefore called eigenfaces. They are the directions in which the images differ from the mean image. Usually this will be a computationally expensive step (if at all possible), but the practical applicability of eigenfaces stems from the possibility to compute the eigenvectors of S efficiently, without ever computing S explicitly, as detailed below.
# Choose the principal components. Sort the eigenvalues in descending order and arrange eigenvectors accordingly. The number of principal components ''k'' is determined arbitrarily by setting a threshold ε on the total variance. Total variance $v = \lambda_1 + \lambda_2 + \dots + \lambda_n$, where ''n'' is the number of components and $\lambda_i$ represents the eigenvalue of component ''i''.
# ''k'' is the smallest number that satisfies
: $\frac{\lambda_1 + \lambda_2 + \dots + \lambda_k}{v} > \epsilon$
These eigenfaces can now be used to represent both existing and new faces: we can project a new (mean-subtracted) image on the eigenfaces and thereby record how that new face differs from the mean face. The eigenvalues associated with each eigenface represent how much the images in the training set vary from the mean image in that direction. Information is lost by projecting the image on a subset of the eigenvectors, but losses are minimized by keeping those eigenfaces with the largest eigenvalues. For instance, working with a 100 × 100 image will produce 10,000 eigenvectors. In practical applications, most faces can typically be identified using a projection on between 100 and 150 eigenfaces, so that most of the 10,000 eigenvectors can be discarded.
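Concretely (restating the projection step above, with mean image $a$ and the top ''k'' eigenfaces $v_1, \dots, v_k$), a face image $x$ is encoded by the weights $w_j = v_j^T (x - a)$ and approximately reconstructed as
: $x \approx a + \sum_{j=1}^{k} w_j v_j$
Keeping only the eigenfaces with the largest eigenvalues makes this approximation as accurate as possible for a given ''k''.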
Matlab example code
Here is an example of calculating eigenfaces with the Extended Yale Face Database B. To avoid the computational and storage bottleneck, the face images are downsampled by a factor of 4 × 4 = 16.
clear all;
close all;
load yalefaces
[h, w, n] = size(yalefaces);
d = h * w;
% vectorize images
x = reshape(yalefaces, [d n]);
x = double(x);
% subtract mean
mean_matrix = mean(x, 2);
x = bsxfun(@minus, x, mean_matrix);
% calculate covariance
s = cov(x');
% obtain eigenvalue & eigenvector
[V, D] = eig(s);
eigval = diag(D);
% sort eigenvalues in descending order
eigval = eigval(end:-1:1);
V = fliplr(V);
% show mean and 1st through 15th principal eigenvectors
figure, subplot(4, 4, 1)
imagesc(reshape(mean_matrix, [h, w]))
colormap gray
for i = 1:15
subplot(4, 4, i + 1)
imagesc(reshape(V(:, i), h, w))
end
Note that although the covariance matrix S generates many eigenfaces, only a fraction of those are needed to represent the majority of the faces. For example, to represent 95% of the total variation of all face images, only the first 43 eigenfaces are needed. To calculate this result, implement the following code:
% evaluate the number of principal components needed to represent 95% of the total variance
eigsum = sum(eigval);
csum = 0;
for i = 1:d
csum = csum + eigval(i);
tv = csum / eigsum;
if tv > 0.95
k95 = i;
break
end;
end;
Computing the eigenvectors
Performing PCA directly on the covariance matrix of the images is often computationally infeasible. If small images are used, say 100 × 100 pixels, each image is a point in a 10,000-dimensional space and the covariance matrix S is a matrix of 10,000 × 10,000 = 10^8 elements. However, the rank of the covariance matrix is limited by the number of training examples: if there are ''N'' training examples, there will be at most ''N'' − 1 eigenvectors with non-zero eigenvalues. If the number of training examples is smaller than the dimensionality of the images, the principal components can be computed more easily as follows.
Let T be the matrix of preprocessed training examples, where each column contains one mean-subtracted image. The covariance matrix can then be computed as $S = TT^T$, and the eigenvector decomposition of $S$ is given by
: $S v_i = TT^T v_i = \lambda_i v_i$
However, $TT^T$ is a large matrix, and if instead we take the eigenvalue decomposition of
: $T^T T u_i = \lambda_i u_i$
then we notice that by pre-multiplying both sides of the equation with $T$, we obtain
: $TT^T T u_i = \lambda_i T u_i$
Meaning that, if $u_i$ is an eigenvector of $T^T T$, then $v_i = T u_i$ is an eigenvector of $S$. If we have a training set of 300 images of 100 × 100 pixels, the matrix $T^T T$ is a 300 × 300 matrix, which is much more manageable than the 10,000 × 10,000 covariance matrix. Notice however that the resulting vectors $v_i$ are not normalised; if normalisation is required it should be applied as an extra step.
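A minimal MATLAB sketch of this trick, reusing the mean-subtracted data matrix x from the example above (here x plays the role of T; the variable names are illustrative, not from the original code):
% small-matrix trick: eigendecompose the n-by-n Gram matrix T'T
% instead of the d-by-d covariance matrix (n = number of images)
L = x' * x;                       % n-by-n matrix T'T
[U, D] = eig(L);                  % eigenvectors u_i of T'T
[~, idx] = sort(diag(D), 'descend');
U = U(:, idx);                    % sort by descending eigenvalue
V = x * U;                        % v_i = T*u_i are eigenvectors of S = TT'
% the columns v_i are not unit length, so normalise them
V = bsxfun(@rdivide, V, sqrt(sum(V .^ 2, 1)));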
Connection with SVD
Let $X$ denote the data matrix with column $x_i$ as the image vector with mean subtracted. Then,
: $\text{covariance}(X) = \frac{XX^T}{n}$
Let the singular value decomposition (SVD) of $X$ be:
: $X = U \Sigma V^T$
Then the eigenvalue decomposition for $XX^T$ is:
: $XX^T = U \Sigma \Sigma^T U^T = U \Lambda U^T$, where $\Lambda = \mathrm{diag}$(eigenvalues of $XX^T$)
Thus we can see easily that:
: The eigenfaces are the first $k$ ($k \le n$) columns of $U$ associated with the nonzero singular values.
: The $i$th eigenvalue of $XX^T$ equals the square of the $i$th singular value of $X$.
Using SVD on data matrix $X$, it is unnecessary to calculate the actual covariance matrix to get eigenfaces.
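As a minimal sketch, again reusing the mean-subtracted matrix x from the MATLAB example above (note that cov(x') scales by 1/(n − 1), so the eigenvalues below differ from those of cov by that factor):
% SVD route: the covariance matrix is never formed
[U, S, ~] = svd(x, 'econ');  % columns of U are the eigenfaces
eigval = diag(S) .^ 2;       % eigenvalues of x*x' are squared singular values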
Use in facial recognition
Facial recognition was the motivation for the creation of eigenfaces. For this use, eigenfaces have advantages over other techniques available, such as the system's speed and efficiency. Because the eigenface technique is primarily a dimension-reduction method, a system can represent many subjects with a relatively small set of data. As a face-recognition system it is also fairly invariant to large reductions in image size; however, it begins to fail considerably when the variation between the seen images and the probe image is large.
To recognise faces, gallery images – those seen by the system – are saved as collections of weights describing the contribution each eigenface has to that image. When a new face is presented to the system for classification, its own weights are found by projecting the image onto the collection of eigenfaces. This provides a set of weights describing the probe face. These weights are then classified against all weights in the gallery set to find the closest match. A nearest-neighbour method is a simple approach for finding the Euclidean distance between two vectors, where the minimum can be classified as the closest subject.
Intuitively, the recognition process with the eigenface method is to project query images into the face-space spanned by the calculated eigenfaces, and to find the closest match to a face class in that face-space.
;Pseudo code
:* Given an input image vector $U$ and the mean image vector $M$ from the database, calculate the weight of the ''k''-th eigenface $V_k$ as:
:*: $w_k = V_k^T (U - M)$
:* Then form a weight vector $W = [w_1, w_2, \dots, w_k, \dots, w_n]$
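A minimal MATLAB sketch of this recognition step, reusing V, mean_matrix and x from the examples above (probe and k95 are illustrative assumptions: a test image of the same size as the training images, and the component count found earlier):
% project gallery and probe faces onto the first k eigenfaces
k = k95;                            % e.g. components covering 95% of variance
W_gallery = V(:, 1:k)' * x;         % k-by-n weight vectors of gallery images
u = double(probe(:)) - mean_matrix; % mean-subtract the probe image vector
w = V(:, 1:k)' * u;                 % weight vector of the probe face
% nearest neighbour in weight space (Euclidean distance)
dists = sqrt(sum(bsxfun(@minus, W_gallery, w) .^ 2, 1));
[~, match] = min(dists);            % index of the closest gallery face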