Kernel methods

In machine learning, kernel machines are a class of algorithms for pattern analysis, whose best known member is the support-vector machine (SVM). These methods involve using linear classifiers to solve nonlinear problems. The general task of pattern analysis is to find and study general types of relations (for example clusters, rankings, principal components, correlations, classifications) in datasets. For many algorithms that solve these tasks, the data in raw representation have to be explicitly transformed into feature vector representations via a user-specified ''feature map''; in contrast, kernel methods require only a user-specified ''kernel'', i.e., a similarity function over all pairs of data points computed using inner products. The feature map in kernel machines may be infinite dimensional, but by the representer theorem only a finite dimensional matrix of kernel evaluations over the training data is required. Without parallel processing, kernel machines are slow to train on datasets larger than a couple of thousand examples.

Kernel methods owe their name to the use of kernel functions, which enable them to operate in a high-dimensional, ''implicit'' feature space without ever computing the coordinates of the data in that space, but rather by simply computing the inner products between the images of all pairs of data in the feature space. This operation is often computationally cheaper than the explicit computation of the coordinates. This approach is called the "kernel trick". Kernel functions have been introduced for sequence data, graphs, text, images, as well as vectors.

Algorithms capable of operating with kernels include the kernel perceptron, support-vector machines (SVM), Gaussian processes, principal components analysis (PCA), canonical correlation analysis, ridge regression, spectral clustering, linear adaptive filters and many others. Most kernel algorithms are based on convex optimization or eigenproblems and are statistically well founded. Typically, their statistical properties are analyzed using statistical learning theory (for example, using Rademacher complexity).


Motivation and informal explanation

Kernel methods can be thought of as instance-based learners: rather than learning some fixed set of parameters corresponding to the features of their inputs, they instead "remember" the i-th training example (\mathbf{x}_i, y_i) and learn for it a corresponding weight w_i. Prediction for unlabeled inputs, i.e., those not in the training set, is treated by the application of a similarity function k, called a kernel, between the unlabeled input \mathbf{x}' and each of the training inputs \mathbf{x}_i. For instance, a kernelized binary classifier typically computes a weighted sum of similarities

\hat{y} = \sgn \sum_{i=1}^n w_i y_i k(\mathbf{x}_i, \mathbf{x}'),

where
* \hat{y} \in \{-1, +1\} is the kernelized binary classifier's predicted label for the unlabeled input \mathbf{x}' whose hidden true label y is of interest;
* k \colon \mathcal{X} \times \mathcal{X} \to \mathbb{R} is the kernel function that measures similarity between any pair of inputs \mathbf{x}, \mathbf{x}' \in \mathcal{X};
* the sum ranges over the n labeled examples \{(\mathbf{x}_i, y_i)\}_{i=1}^n in the classifier's training set, with y_i \in \{-1, +1\};
* the w_i \in \mathbb{R} are the weights for the training examples, as determined by the learning algorithm;
* the sign function \sgn determines whether the predicted classification \hat{y} comes out positive or negative.

Kernel classifiers were described as early as the 1960s, with the invention of the kernel perceptron. They rose to great prominence with the popularity of the support-vector machine (SVM) in the 1990s, when the SVM was found to be competitive with neural networks on tasks such as handwriting recognition.
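This prediction rule can be made concrete in a few lines of Python. The sketch below is illustrative only: the Gaussian (RBF) kernel is one common choice of k, and the weights are hard-coded stand-ins for values that a learning algorithm such as the kernel perceptron would normally produce.

    import numpy as np

    def rbf_kernel(x, x_prime, gamma=1.0):
        # Gaussian (RBF) kernel: an illustrative choice of similarity function k
        return np.exp(-gamma * np.sum((x - x_prime) ** 2))

    def predict(x_new, X_train, y_train, weights, kernel=rbf_kernel):
        # y_hat = sgn( sum_i w_i * y_i * k(x_i, x_new) )
        s = sum(w * y * kernel(x, x_new)
                for x, y, w in zip(X_train, y_train, weights))
        return 1 if s >= 0 else -1

    # Toy data: three labeled points in R^2; weights are assumed, not learned
    X_train = [np.array([0.0, 0.0]), np.array([1.0, 1.0]), np.array([2.0, 2.0])]
    y_train = [-1, 1, 1]
    weights = [0.5, 1.0, 0.3]
    print(predict(np.array([1.5, 1.5]), X_train, y_train, weights))  # prints 1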


Mathematics: the kernel trick

The kernel trick avoids the explicit mapping that is needed to get linear learning algorithms to learn a nonlinear function or decision boundary. For all \mathbf{x} and \mathbf{x}' in the input space \mathcal{X}, certain functions k(\mathbf{x}, \mathbf{x}') can be expressed as an inner product in another space \mathcal{V}. The function k \colon \mathcal{X} \times \mathcal{X} \to \mathbb{R} is often referred to as a ''kernel'' or a ''kernel function''. The word "kernel" is used in mathematics to denote a weighting function for a weighted sum or integral.

Certain problems in machine learning have more structure than an arbitrary weighting function k. The computation is made much simpler if the kernel can be written in the form of a "feature map" \varphi \colon \mathcal{X} \to \mathcal{V} which satisfies

k(\mathbf{x}, \mathbf{x}') = \langle \varphi(\mathbf{x}), \varphi(\mathbf{x}') \rangle_\mathcal{V}.

The key restriction is that \langle \cdot, \cdot \rangle_\mathcal{V} must be a proper inner product. On the other hand, an explicit representation for \varphi is not necessary, as long as \mathcal{V} is an inner product space. The alternative follows from Mercer's theorem: an implicitly defined function \varphi exists whenever the space \mathcal{X} can be equipped with a suitable measure ensuring the function k satisfies Mercer's condition.

Mercer's theorem is similar to a generalization of the result from linear algebra that associates an inner product to any positive-definite matrix. In fact, Mercer's condition can be reduced to this simpler case. If we choose as our measure the counting measure \mu(T) = |T| for all T \subset \mathcal{X}, which counts the number of points inside the set T, then the integral in Mercer's theorem reduces to a summation

\sum_{i=1}^n \sum_{j=1}^n k(\mathbf{x}_i, \mathbf{x}_j) c_i c_j \geq 0.

If this summation holds for all finite sequences of points (\mathbf{x}_1, \dotsc, \mathbf{x}_n) in \mathcal{X} and all choices of n real-valued coefficients (c_1, \dotsc, c_n) (cf. positive definite kernel), then the function k satisfies Mercer's condition.
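As a minimal worked instance of the feature-map identity above, consider the degree-2 homogeneous polynomial kernel k(\mathbf{x}, \mathbf{y}) = (\mathbf{x}^\mathsf{T} \mathbf{y})^2 on \mathbb{R}^2, whose feature map \varphi(\mathbf{x}) = (x_1^2, \sqrt{2}\, x_1 x_2, x_2^2) can be written out explicitly. The Python sketch below (the test points are arbitrary) checks that evaluating k in the input space agrees with the inner product after mapping:

    import numpy as np

    def phi(x):
        # Explicit feature map for the degree-2 homogeneous polynomial kernel on R^2
        return np.array([x[0] ** 2, np.sqrt(2) * x[0] * x[1], x[1] ** 2])

    def k(x, y):
        # The same kernel evaluated directly in the input space: (x . y)^2
        return np.dot(x, y) ** 2

    x, y = np.array([1.0, 2.0]), np.array([3.0, 4.0])
    assert np.isclose(k(x, y), np.dot(phi(x), phi(y)))  # both equal 121.0

The kernel trick is precisely that the left-hand side costs one dot product in \mathbb{R}^2, while the right-hand side requires constructing coordinates in the higher-dimensional feature space.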
Some algorithms that depend on arbitrary relationships in the native space \mathcal{X} would, in fact, have a linear interpretation in a different setting: the range space of \varphi. The linear interpretation gives us insight about the algorithm. Furthermore, there is often no need to compute \varphi directly during computation, as is the case with support-vector machines. Some cite this running time shortcut as the primary benefit. Researchers also use it to justify the meanings and properties of existing algorithms.

Theoretically, a Gram matrix \mathbf{K} \in \mathbb{R}^{n \times n} with respect to \{\mathbf{x}_1, \dotsc, \mathbf{x}_n\} (sometimes also called a "kernel matrix"), where K_{ij} = k(\mathbf{x}_i, \mathbf{x}_j), must be positive semi-definite (PSD). Empirically, for machine learning heuristics, choices of a function k that do not satisfy Mercer's condition may still perform reasonably if k at least approximates the intuitive idea of similarity. Regardless of whether k is a Mercer kernel, k may still be referred to as a "kernel". If the kernel function k is also a covariance function as used in Gaussian processes, then the Gram matrix \mathbf{K} can also be called a covariance matrix.
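The PSD requirement on the Gram matrix can be checked numerically. In the Python sketch below (the RBF kernel and the random sample are illustrative assumptions), all eigenvalues of \mathbf{K} come out non-negative up to floating-point tolerance:

    import numpy as np

    def gram_matrix(X, kernel):
        # K[i, j] = k(x_i, x_j) over all pairs of data points
        n = len(X)
        return np.array([[kernel(X[i], X[j]) for j in range(n)] for i in range(n)])

    rng = np.random.default_rng(0)
    X = rng.normal(size=(5, 3))                       # five illustrative points in R^3
    rbf = lambda a, b: np.exp(-np.sum((a - b) ** 2))  # a Mercer kernel (gamma = 1)
    K = gram_matrix(X, rbf)
    eigenvalues = np.linalg.eigvalsh(K)               # eigvalsh: K is symmetric
    assert np.all(eigenvalues >= -1e-10)              # PSD up to numerical tolerance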


Applications

Application areas of kernel methods are diverse and include geostatistics, kriging, inverse distance weighting, 3D reconstruction, bioinformatics, cheminformatics, information extraction and handwriting recognition.


Popular kernels

* Fisher kernel
* Graph kernels
* Kernel smoother
* Polynomial kernel
* Radial basis function kernel (RBF)
* String kernels
* Neural tangent kernel
* Neural network Gaussian process (NNGP) kernel
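Two entries from this list admit very short definitions. The Python sketch below gives one common form of each; the parameter defaults are illustrative, and the p-spectrum kernel shown is only one simple member of the string kernel family:

    import numpy as np
    from collections import Counter

    def polynomial_kernel(x, y, degree=3, c=1.0):
        # k(x, y) = (x . y + c)^degree; c >= 0 weights lower-order terms
        return (np.dot(x, y) + c) ** degree

    def p_spectrum_kernel(s, t, p=2):
        # String kernel counting matching length-p substrings of s and t
        cs = Counter(s[i:i + p] for i in range(len(s) - p + 1))
        ct = Counter(t[i:i + p] for i in range(len(t) - p + 1))
        return sum(cs[u] * ct[u] for u in cs)

    print(polynomial_kernel(np.array([1.0, 2.0]), np.array([3.0, 4.0])))  # 1728.0
    print(p_spectrum_kernel("abab", "ab"))                                # 2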


See also

* Kernel methods for vector output
* Kernel density estimation
* Representer theorem
* Similarity learning
* Cover's theorem




External links


Kernel-Machines Org, a community website
onlineprediction.net Kernel Methods Article