In
machine learning, support vector machines (SVMs, also support vector networks
) are
supervised learning models with associated learning
algorithms that analyze data for
classification
and
regression analysis. Developed at
AT&T Bell Laboratories
by
Vladimir Vapnik with colleagues (Boser et al., 1992,
Guyon
et al., 1993,
Cortes and
Vapnik, 1995,
Vapnik et al., 1997), SVMs are one of the most robust prediction methods, being based on statistical learning frameworks or
VC theory proposed by Vapnik (1982, 1995) and Chervonenkis (1974). Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that assigns new examples to one category or the other, making it a non-
probabilistic
binary linear classifier (although methods such as
Platt scaling exist to use SVM in a probabilistic classification setting). SVM maps training examples to points in space so as to maximise the width of the gap between the two categories. New examples are then mapped into that same space and predicted to belong to a category based on which side of the gap they fall.
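As a quick, hedged illustration of this probabilistic extension: scikit-learn's SVC exposes Platt scaling through its probability option. The dataset below is synthetic and the parameter choices are arbitrary:

```python
# Sketch: probabilistic outputs from an SVM via Platt scaling.
# probability=True fits a logistic calibration on top of the SVM scores.
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
clf = SVC(kernel="linear", probability=True, random_state=0).fit(X, y)

print(clf.predict(X[:3]))        # hard class labels (which side of the gap)
print(clf.predict_proba(X[:3]))  # Platt-scaled class probabilities
```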
In addition to performing
linear classification
, SVMs can efficiently perform a non-linear classification using what is called the
kernel trick, implicitly mapping their inputs into high-dimensional feature spaces.
When data are unlabelled, supervised learning is not possible, and an
unsupervised learning approach is required, which attempts to find natural
clustering of the data into groups, and then map new data to these formed groups. The support vector clustering
algorithm, created by
Hava Siegelmann and
Vladimir Vapnik, applies the statistics of support vectors, developed in the support vector machines algorithm, to categorize unlabeled data.
Motivation
Classifying data is a common task in
machine learning.
Suppose some given data points each belong to one of two classes, and the goal is to decide which class a ''new''
data point will be in. In the case of support vector machines, a data point is viewed as a $p$-dimensional vector (a list of $p$ numbers), and we want to know whether we can separate such points with a $(p-1)$-dimensional hyperplane. This is called a
linear classifier. There are many hyperplanes that might classify the data. One reasonable choice as the best hyperplane is the one that represents the largest separation, or
margin, between the two classes. So we choose the hyperplane so that the distance from it to the nearest data point on each side is maximized. If such a hyperplane exists, it is known as the ''
maximum-margin hyperplane'' and the linear classifier it defines is known as a ''maximum-
margin classifier''; or equivalently, the ''
perceptron of optimal stability''.
More formally, a support vector machine constructs a
hyperplane
or set of hyperplanes in a high- or infinite-dimensional space, which can be used for
classification
,
regression
, or other tasks like outlier detection. Intuitively, a good separation is achieved by the hyperplane that has the largest distance to the nearest training-data point of any class (so-called functional margin), since in general the larger the margin, the lower the
generalization error
of the classifier.
Whereas the original problem may be stated in a finite-dimensional space, it often happens that the sets to discriminate are not
linearly separable in that space. For this reason, it was proposed
that the original finite-dimensional space be mapped into a much higher-dimensional space, presumably making the separation easier in that space. To keep the computational load reasonable, the mappings used by SVM schemes are designed to ensure that
dot products of pairs of input data vectors may be computed easily in terms of the variables in the original space, by defining them in terms of a
kernel function $k(x, y)$ selected to suit the problem. The hyperplanes in the higher-dimensional space are defined as the set of points whose dot product with a vector in that space is constant, where such a set of vectors is an orthogonal (and thus minimal) set of vectors that defines a hyperplane. The vectors defining the hyperplanes can be chosen to be linear combinations with parameters $\alpha_i$ of images of
feature vectors $x_i$
that occur in the data base. With this choice of a hyperplane, the points $x$ in the
feature space that are mapped into the hyperplane are defined by the relation
: $\textstyle\sum_i \alpha_i k(x_i, x) = \text{constant}.$
Note that if $k(x, y)$ becomes small as $y$ grows further away from $x$, each term in the sum measures the degree of closeness of the test point $x$ to the corresponding data base point $x_i$. In this way, the sum of kernels above can be used to measure the relative nearness of each test point to the data points originating in one or the other of the sets to be discriminated. Note that the set of points $x$
mapped into any hyperplane can be quite convoluted as a result, allowing much more complex discrimination between sets that are not convex at all in the original space.
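A minimal sketch of this closeness idea, assuming a Gaussian kernel and made-up 2-D points (everything here is illustrative, not from the original text):

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    # k(x, y) shrinks as y moves away from x, as described above.
    return np.exp(-gamma * np.sum((x - y) ** 2))

set_a = np.array([[0.0, 0.0], [0.2, 0.1]])   # points from one class (made up)
set_b = np.array([[3.0, 3.0], [2.8, 3.2]])   # points from the other class

test = np.array([0.1, 0.0])
score_a = sum(rbf_kernel(p, test) for p in set_a)  # kernel sum against set A
score_b = sum(rbf_kernel(p, test) for p in set_b)  # kernel sum against set B
print("closer to set A" if score_a > score_b else "closer to set B")
```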
Applications
SVMs can be used to solve various real-world problems:
* SVMs are helpful in
text and hypertext categorization, as their application can significantly reduce the need for labeled training instances in both the standard inductive and
transductive settings. Some methods for
shallow semantic parsing
are based on support vector machines.
*
Classification of images can also be performed using SVMs. Experimental results show that SVMs achieve significantly higher search accuracy than traditional query refinement schemes after just three to four rounds of relevance feedback. This is also true for
image segmentation systems, including those using a modified version of SVM that uses the privileged approach as suggested by Vapnik.
* Classification of satellite data like
SAR data using supervised SVM.
* Hand-written characters can be
recognized using SVM.
* SVMs have been widely applied in the biological and other sciences. They have been used to classify proteins, with up to 90% of the compounds classified correctly.
Permutation tests based on SVM weights have been suggested as a mechanism for interpreting SVM models, and SVM weights have also been used for interpretation in the past. Post hoc interpretation of support vector machine models, in order to identify the features a model uses to make predictions, is a relatively new area of research with special significance in the biological sciences.
History
The original SVM algorithm was invented by
Vladimir N. Vapnik
and
Alexey Ya. Chervonenkis in 1964. In 1992, Bernhard Boser,
Isabelle Guyon
and
Vladimir Vapnik suggested a way to create nonlinear classifiers by applying the
kernel trick to maximum-margin hyperplanes.
The "soft margin" incarnation, as is commonly used in software packages, was proposed by
Corinna Cortes and Vapnik in 1993 and published in 1995.
Linear SVM
We are given a training dataset of $n$ points of the form
: $(\mathbf{x}_1, y_1), \ldots, (\mathbf{x}_n, y_n),$
where the $y_i$ are either 1 or −1, each indicating the class to which the point $\mathbf{x}_i$ belongs. Each $\mathbf{x}_i$ is a $p$-dimensional
real vector. We want to find the "maximum-margin hyperplane" that divides the group of points $\mathbf{x}_i$ for which $y_i = 1$ from the group of points for which $y_i = -1$, which is defined so that the distance between the hyperplane and the nearest point $\mathbf{x}_i$ from either group is maximized.
Any hyperplane can be written as the set of points $\mathbf{x}$ satisfying
: $\mathbf{w}^\mathsf{T}\mathbf{x} - b = 0,$
where $\mathbf{w}$ is the (not necessarily normalized) normal vector to the hyperplane. This is much like
Hesse normal form, except that $\mathbf{w}$ is not necessarily a unit vector. The parameter $\tfrac{b}{\|\mathbf{w}\|}$ determines the offset of the hyperplane from the origin along the normal vector $\mathbf{w}$.
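A small numeric sketch of these definitions, with an arbitrary, made-up $\mathbf{w}$ and $b$:

```python
import numpy as np

w = np.array([3.0, 4.0])   # normal vector (not necessarily unit length)
b = 5.0                    # so the hyperplane is w.x - b = 0

x = np.array([1.0, 0.5])   # an arbitrary point
# Signed distance from x to the hyperplane, and the hyperplane's
# offset from the origin along w, matching b / ||w|| above.
print((w @ x - b) / np.linalg.norm(w))  # signed distance of x
print(b / np.linalg.norm(w))            # offset of hyperplane from origin
```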
Hard-margin
If the training data is
linearly separable, we can select two parallel hyperplanes that separate the two classes of data, so that the distance between them is as large as possible. The region bounded by these two hyperplanes is called the "margin", and the maximum-margin hyperplane is the hyperplane that lies halfway between them. With a normalized or standardized dataset, these hyperplanes can be described by the equations
: $\mathbf{w}^\mathsf{T}\mathbf{x} - b = 1$ (anything on or above this boundary is of one class, with label 1)
and
: $\mathbf{w}^\mathsf{T}\mathbf{x} - b = -1$ (anything on or below this boundary is of the other class, with label −1).
Geometrically, the distance between these two hyperplanes is $\tfrac{2}{\|\mathbf{w}\|}$, so to maximize the distance between the planes we want to minimize $\|\mathbf{w}\|$. The distance is computed using the
distance from a point to a plane equation. To prevent data points from falling into the margin, we add the following constraint: for each $i$ either
: $\mathbf{w}^\mathsf{T}\mathbf{x}_i - b \ge 1$, if $y_i = 1$,
or
: $\mathbf{w}^\mathsf{T}\mathbf{x}_i - b \le -1$, if $y_i = -1$.
These constraints state that each data point must lie on the correct side of the margin.
This can be rewritten as
: $y_i(\mathbf{w}^\mathsf{T}\mathbf{x}_i - b) \ge 1, \quad \text{for all } 1 \le i \le n. \quad (1)$
We can put this together to get the optimization problem:
: minimize $\|\mathbf{w}\|_2^2$ subject to $y_i(\mathbf{w}^\mathsf{T}\mathbf{x}_i - b) \ge 1$ for $i = 1, \ldots, n$.
The $\mathbf{w}$ and $b$ that solve this problem determine our classifier, $\mathbf{x} \mapsto \operatorname{sgn}(\mathbf{w}^\mathsf{T}\mathbf{x} - b)$, where $\operatorname{sgn}(\cdot)$ is the
sign function.
An important consequence of this geometric description is that the max-margin hyperplane is completely determined by those $\mathbf{x}_i$ that lie nearest to it. These $\mathbf{x}_i$ are called ''support vectors''.
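In practice a hard margin is often approximated by a soft-margin solver with a very large penalty. A sketch using scikit-learn's SVC, where the toy data and the choice C=1e10 are assumptions for illustration:

```python
import numpy as np
from sklearn.svm import SVC

# Linearly separable toy data (made up).
X = np.array([[0.0, 0.0], [0.0, 1.0], [3.0, 3.0], [3.0, 4.0]])
y = np.array([-1, -1, 1, 1])

# A very large C approximates the hard-margin SVM.
clf = SVC(kernel="linear", C=1e10).fit(X, y)

w = clf.coef_[0]             # normal vector of the separating hyperplane
b = -clf.intercept_[0]       # offset, using the w^T x - b = 0 convention
print(clf.support_vectors_)  # the points that determine the hyperplane
print(np.sign(X @ w - b))    # recovers y for separable data
```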
Soft-margin
To extend SVM to cases in which the data are not linearly separable, the ''
hinge loss'' function is helpful:
: $\max\left(0, 1 - y_i(\mathbf{w}^\mathsf{T}\mathbf{x}_i - b)\right).$
Note that $y_i$ is the ''i''-th target (i.e., in this case, 1 or −1), and $\mathbf{w}^\mathsf{T}\mathbf{x}_i - b$ is the ''i''-th output.
This function is zero if the constraint in (1) is satisfied, in other words, if $\mathbf{x}_i$ lies on the correct side of the margin. For data on the wrong side of the margin, the function's value is proportional to the distance from the margin.
The goal of the optimization then is to minimize
: $\lVert \mathbf{w} \rVert^2 + C\left[\frac{1}{n}\sum_{i=1}^n \max\left(0, 1 - y_i(\mathbf{w}^\mathsf{T}\mathbf{x}_i - b)\right)\right],$
where the parameter $C > 0$ determines the trade-off between increasing the margin size and ensuring that the $\mathbf{x}_i$ lie on the correct side of the margin. By deconstructing the hinge loss, this optimization problem can be massaged into the following constrained form:
: minimize $\lVert \mathbf{w} \rVert^2 + C\left[\frac{1}{n}\sum_{i=1}^n \zeta_i\right]$ subject to $y_i(\mathbf{w}^\mathsf{T}\mathbf{x}_i - b) \ge 1 - \zeta_i$ and $\zeta_i \ge 0$, for all $i$.
Thus, for large values of $C$, it will behave similarly to the hard-margin SVM, if the input data are linearly classifiable, but it will still learn whether or not a classification rule is viable. ($C$ is inversely related to the $\lambda$ used below, e.g. in ''
LIBSVM''.)
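A minimal sketch that just evaluates this objective for made-up parameters, to make the two competing terms concrete:

```python
import numpy as np

def soft_margin_objective(w, b, X, y, C=1.0):
    # ||w||^2 + C * mean hinge loss, as in the displayed objective.
    hinge = np.maximum(0.0, 1.0 - y * (X @ w - b))
    return w @ w + C * hinge.mean()

# Made-up data and parameters, purely to evaluate the objective once.
X = np.array([[0.0, 1.0], [2.0, 2.0], [3.0, 0.5]])
y = np.array([-1, 1, 1])
print(soft_margin_objective(np.array([0.5, 0.5]), 1.0, X, y))
```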
Nonlinear kernels
The original maximum-margin hyperplane algorithm proposed by Vapnik in 1963 constructed a
linear classifier. However, in 1992,
Bernhard Boser
,
Isabelle Guyon
and
Vladimir Vapnik suggested a way to create nonlinear classifiers by applying the
kernel trick (originally proposed by Aizerman et al.) to maximum-margin hyperplanes.
The resulting algorithm is formally similar, except that every
dot product is replaced by a nonlinear
kernel function. This allows the algorithm to fit the maximum-margin hyperplane in a transformed
feature space. The transformation may be nonlinear and the transformed space high-dimensional; although the classifier is a hyperplane in the transformed feature space, it may be nonlinear in the original input space.
It is noteworthy that working in a higher-dimensional feature space increases the
generalization error
of support vector machines, although given enough samples the algorithm still performs well.
Some common kernels include:
* Polynomial (homogeneous): $k(\mathbf{x}_i, \mathbf{x}_j) = (\mathbf{x}_i \cdot \mathbf{x}_j)^d$. Particularly, when $d = 1$, this becomes the linear kernel.
* Polynomial (inhomogeneous): $k(\mathbf{x}_i, \mathbf{x}_j) = (\mathbf{x}_i \cdot \mathbf{x}_j + r)^d$.
* Gaussian radial basis function: $k(\mathbf{x}_i, \mathbf{x}_j) = \exp\left(-\gamma \lVert \mathbf{x}_i - \mathbf{x}_j \rVert^2\right)$ for $\gamma > 0$. Sometimes parametrized using $\gamma = 1/(2\sigma^2)$.
* Sigmoid function (hyperbolic tangent): $k(\mathbf{x}_i, \mathbf{x}_j) = \tanh(\kappa\, \mathbf{x}_i \cdot \mathbf{x}_j + c)$ for some (not every) $\kappa > 0$ and $c < 0$.
The kernel is related to the transform $\varphi(\mathbf{x}_i)$ by the equation $k(\mathbf{x}_i, \mathbf{x}_j) = \varphi(\mathbf{x}_i) \cdot \varphi(\mathbf{x}_j)$. The value $\mathbf{w}$ is also in the transformed space, with $\mathbf{w} = \sum_i \alpha_i y_i \varphi(\mathbf{x}_i)$. Dot products with $\mathbf{w}$ for classification can again be computed by the kernel trick, i.e. $\mathbf{w} \cdot \varphi(\mathbf{x}) = \sum_i \alpha_i y_i\, k(\mathbf{x}_i, \mathbf{x})$.
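The listed kernels are straightforward to write down; a hedged numpy sketch with arbitrary default parameters:

```python
import numpy as np

# The four kernels listed above, written over numpy vectors.
def poly_homogeneous(xi, xj, d=2):
    return (xi @ xj) ** d               # d = 1 gives the linear kernel

def poly_inhomogeneous(xi, xj, d=2, r=1.0):
    return (xi @ xj + r) ** d

def gaussian_rbf(xi, xj, gamma=0.5):    # gamma = 1 / (2 sigma^2)
    return np.exp(-gamma * np.sum((xi - xj) ** 2))

def sigmoid(xi, xj, kappa=1.0, c=-1.0): # valid only for some kappa > 0, c < 0
    return np.tanh(kappa * (xi @ xj) + c)

xi, xj = np.array([1.0, 2.0]), np.array([0.5, -1.0])
print(poly_homogeneous(xi, xj), gaussian_rbf(xi, xj))
```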
Computing the SVM classifier
Computing the (soft-margin) SVM classifier amounts to minimizing an expression of the form
: $\left[\frac{1}{n}\sum_{i=1}^n \max\left(0, 1 - y_i(\mathbf{w}^\mathsf{T}\mathbf{x}_i - b)\right)\right] + \lambda \lVert \mathbf{w} \rVert^2. \quad (2)$
We focus on the soft-margin classifier since, as noted above, choosing a sufficiently small value for $\lambda$
yields the hard-margin classifier for linearly classifiable input data. The classical approach, which involves reducing to a
quadratic programming problem, is detailed below. Then, more recent approaches such as sub-gradient descent and coordinate descent will be discussed.
Primal
Minimizing (2) can be rewritten as a constrained optimization problem with a differentiable objective function in the following way.
For each $i \in \{1, \ldots, n\}$ we introduce a variable $\zeta_i = \max\left(0, 1 - y_i(\mathbf{w}^\mathsf{T}\mathbf{x}_i - b)\right)$. Note that $\zeta_i$ is the smallest nonnegative number satisfying $y_i(\mathbf{w}^\mathsf{T}\mathbf{x}_i - b) \ge 1 - \zeta_i$.
Thus we can rewrite the optimization problem as follows:
: minimize $\frac{1}{n}\sum_{i=1}^n \zeta_i + \lambda \lVert \mathbf{w} \rVert^2$
: subject to $y_i(\mathbf{w}^\mathsf{T}\mathbf{x}_i - b) \ge 1 - \zeta_i$ and $\zeta_i \ge 0$, for all $i$.
This is called the ''primal'' problem.
Dual
By solving for the Lagrangian dual of the above problem, one obtains the simplified problem
: maximize $f(c_1, \ldots, c_n) = \sum_{i=1}^n c_i - \frac{1}{2}\sum_{i=1}^n\sum_{j=1}^n y_i c_i (\mathbf{x}_i^\mathsf{T}\mathbf{x}_j) y_j c_j$
: subject to $\sum_{i=1}^n c_i y_i = 0$, and $0 \le c_i \le \frac{1}{2n\lambda}$ for all $i$.
This is called the ''dual'' problem. Since the dual maximization problem is a quadratic function of the $c_i$ subject to linear constraints, it is efficiently solvable by
quadratic programming algorithms.
Here, the variables $c_i$ are defined such that
: $\mathbf{w} = \sum_{i=1}^n c_i y_i \mathbf{x}_i.$
Moreover, $c_i = 0$ exactly when $\mathbf{x}_i$ lies on the correct side of the margin, and $0 < c_i < (2n\lambda)^{-1}$ when $\mathbf{x}_i$ lies on the margin's boundary. It follows that $\mathbf{w}$ can be written as a linear combination of the support vectors.
The offset, $b$, can be recovered by finding an $\mathbf{x}_i$ on the margin's boundary and solving
: $y_i(\mathbf{w}^\mathsf{T}\mathbf{x}_i - b) = 1 \iff b = \mathbf{w}^\mathsf{T}\mathbf{x}_i - y_i.$
(Note that $y_i^{-1} = y_i$ since $y_i = \pm 1$.)
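As a hedged illustration, a fitted scikit-learn SVC exposes exactly these quantities: its dual_coef_ attribute stores the products $c_i y_i$ for the support vectors, so $\mathbf{w}$ is their linear combination (linear kernel assumed; the blob data is synthetic):

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

X, y = make_blobs(n_samples=60, centers=2, random_state=0)
y = 2 * y - 1                           # relabel to -1 / +1

clf = SVC(kernel="linear", C=1.0).fit(X, y)

# dual_coef_[0] holds c_i * y_i for the support vectors only;
# w is their linear combination, as stated above.
w = clf.dual_coef_[0] @ clf.support_vectors_
assert np.allclose(w, clf.coef_[0])
print(len(clf.support_vectors_), "support vectors define w")
```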
Kernel trick
Suppose now that we would like to learn a nonlinear classification rule which corresponds to a linear classification rule for the transformed data points $\varphi(\mathbf{x}_i)$.
Moreover, we are given a kernel function $k$ which satisfies $k(\mathbf{x}_i, \mathbf{x}_j) = \varphi(\mathbf{x}_i) \cdot \varphi(\mathbf{x}_j)$.
We know the classification vector $\mathbf{w}$ in the transformed space satisfies
: $\mathbf{w} = \sum_{i=1}^n c_i y_i \varphi(\mathbf{x}_i),$
where the $c_i$ are obtained by solving the optimization problem
: maximize $f(c_1, \ldots, c_n) = \sum_{i=1}^n c_i - \frac{1}{2}\sum_{i=1}^n\sum_{j=1}^n y_i c_i\, k(\mathbf{x}_i, \mathbf{x}_j)\, y_j c_j$
: subject to $\sum_{i=1}^n c_i y_i = 0$, and $0 \le c_i \le \frac{1}{2n\lambda}$ for all $i$.
The coefficients $c_i$ can be solved for using quadratic programming, as before. Again, we can find some index $i$ such that $0 < c_i < (2n\lambda)^{-1}$, so that $\varphi(\mathbf{x}_i)$ lies on the boundary of the margin in the transformed space, and then solve
: $b = \mathbf{w}^\mathsf{T}\varphi(\mathbf{x}_i) - y_i = \left[\sum_{j=1}^n c_j y_j\, k(\mathbf{x}_j, \mathbf{x}_i)\right] - y_i.$
Finally,
: $\mathbf{z} \mapsto \operatorname{sgn}(\mathbf{w}^\mathsf{T}\varphi(\mathbf{z}) - b) = \operatorname{sgn}\left(\left[\sum_{i=1}^n c_i y_i\, k(\mathbf{x}_i, \mathbf{z})\right] - b\right).$
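A sketch of this final decision rule, with made-up coefficients $c_i$ standing in for a solved QP (in practice they come from the optimization above):

```python
import numpy as np

def kernel_svm_predict(z, X, y, c, b, k):
    # sgn( sum_i c_i y_i k(x_i, z) - b ), the final classifier above.
    return np.sign(sum(ci * yi * k(xi, z) for xi, yi, ci in zip(X, y, c)) - b)

# Gaussian kernel and made-up coefficients, purely illustrative.
k = lambda u, v: np.exp(-0.5 * np.sum((u - v) ** 2))
X = np.array([[0.0, 0.0], [2.0, 2.0]])
y = np.array([-1, 1])
c = np.array([0.3, 0.3])
b = 0.0
print(kernel_svm_predict(np.array([1.9, 2.1]), X, y, c, b, k))
```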
Modern methods
Recent algorithms for finding the SVM classifier include sub-gradient descent and coordinate descent. Both techniques have proven to offer significant advantages over the traditional approach when dealing with large, sparse datasets—sub-gradient methods are especially efficient when there are many training examples, and coordinate descent when the dimension of the feature space is high.
Sub-gradient descent
Sub-gradient descent algorithms for the SVM work directly with the expression
: $f(\mathbf{w}, b) = \left[\frac{1}{n}\sum_{i=1}^n \max\left(0, 1 - y_i(\mathbf{w}^\mathsf{T}\mathbf{x}_i - b)\right)\right] + \lambda \lVert \mathbf{w} \rVert^2.$
Note that $f$ is a
convex function of $\mathbf{w}$ and $b$. As such, traditional
gradient descent (or
SGD) methods can be adapted, where instead of taking a step in the direction of the function's gradient, a step is taken in the direction of a vector selected from the function's
sub-gradient. This approach has the advantage that, for certain implementations, the number of iterations does not scale with $n$, the number of data points.
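A minimal full-batch sketch of such a sub-gradient step on $f(\mathbf{w}, b)$, with arbitrary choices of $\lambda$, step size, and synthetic data (a stochastic, Pegasos-style variant would sample one example per step):

```python
import numpy as np

def svm_subgradient_step(w, b, X, y, lam, lr):
    # One sub-gradient step on f(w, b) = (1/n) sum hinge_i + lam * ||w||^2.
    n = len(y)
    margins = y * (X @ w - b)
    active = margins < 1                       # hinge term is nonzero here
    g_w = 2 * lam * w - (y[active, None] * X[active]).sum(axis=0) / n
    g_b = y[active].sum() / n                  # d/db of the active hinge terms
    return w - lr * g_w, b - lr * g_b

# Tiny made-up run; lam, lr and the step count are arbitrary choices.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (20, 2)), rng.normal(2, 1, (20, 2))])
y = np.array([-1] * 20 + [1] * 20)
w, b = np.zeros(2), 0.0
for t in range(200):
    w, b = svm_subgradient_step(w, b, X, y, lam=0.01, lr=0.1)
print(np.mean(np.sign(X @ w - b) == y))        # training accuracy
```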
Coordinate descent
Coordinate descent algorithms for the SVM work from the dual problem
: maximize $f(c_1, \ldots, c_n) = \sum_{i=1}^n c_i - \frac{1}{2}\sum_{i=1}^n\sum_{j=1}^n y_i c_i (\mathbf{x}_i^\mathsf{T}\mathbf{x}_j) y_j c_j$
: subject to $\sum_{i=1}^n c_i y_i = 0$, and $0 \le c_i \le \frac{1}{2n\lambda}$ for all $i$.
For each $i \in \{1, \ldots, n\}$, iteratively, the coefficient $c_i$ is adjusted in the direction of $\partial f / \partial c_i$. Then, the resulting vector of coefficients $(c_1', \ldots, c_n')$ is projected onto the nearest vector of coefficients that satisfies the given constraints. (Typically Euclidean distances are used.) The process is then repeated until a near-optimal vector of coefficients is obtained. The resulting algorithm is extremely fast in practice, although few performance guarantees have been proven.
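A hedged sketch of one well-known variant (liblinear-style dual coordinate descent for a linear SVM in the $C$-parametrization); note it omits the bias term, and with it the equality constraint, which is how this formulation differs from the dual above:

```python
import numpy as np

def dual_coordinate_descent(X, y, C=1.0, epochs=20):
    # Sketch of dual coordinate descent for a linear SVM without bias:
    # each alpha_i is moved to its constrained coordinate-wise optimum
    # while w = sum_i alpha_i y_i x_i is kept in sync.
    # Assumes no all-zero rows in X (so the diagonal entries are positive).
    n, p = X.shape
    alpha, w = np.zeros(n), np.zeros(p)
    q = (X ** 2).sum(axis=1)                   # diagonal of the Gram matrix
    for _ in range(epochs):
        for i in range(n):
            g = y[i] * (w @ X[i]) - 1          # partial derivative for alpha_i
            new = min(max(alpha[i] - g / q[i], 0.0), C)  # clip into [0, C]
            w += (new - alpha[i]) * y[i] * X[i]
            alpha[i] = new
    return w, alpha
```

Each coordinate update here is the exact minimizer of the dual objective in $c_i$ alone, clipped back into $[0, C]$, so no separate projection step is needed in this variant.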
Empirical risk minimization
The soft-margin support vector machine described above is an example of an
empirical risk minimization (ERM) algorithm for the ''
hinge loss''. Seen this way, support vector machines belong to a natural class of algorithms for statistical inference, and many of their unique features are due to the behavior of the hinge loss. This perspective can provide further insight into how and why SVMs work, and allows us to better analyze their statistical properties.
Risk minimization
In supervised learning, one is given a set of training examples $X_1, \ldots, X_n$ with labels $y_1, \ldots, y_n$, and wishes to predict $y_{n+1}$ given $X_{n+1}$. To do so one forms a
hypothesis, $f$, such that $f(X_{n+1})$ is a "good" approximation of $y_{n+1}$. A "good" approximation is usually defined with the help of a ''
loss function'', $\ell(y, z)$, which characterizes how bad $z$ is as a prediction of $y$. We would then like to choose a hypothesis that minimizes the ''
expected risk'':
: $\varepsilon(f) = \mathbb{E}\left[\ell(y_{n+1}, f(X_{n+1}))\right].$
In most cases, we don't know the joint distribution of $X_{n+1}, y_{n+1}$ outright. In these cases, a common strategy is to choose the hypothesis that minimizes the ''empirical risk'':
: $\hat\varepsilon(f) = \frac{1}{n}\sum_{k=1}^n \ell(y_k, f(X_k)).$
Under certain assumptions about the sequence of random variables $X_k, y_k$ (for example, that they are generated by a finite Markov process), if the set of hypotheses being considered is small enough, the minimizer of the empirical risk will closely approximate the minimizer of the expected risk as $n$ grows large. This approach is called ''empirical risk minimization'', or ERM.
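A small sketch of the empirical risk computation itself, with a made-up linear hypothesis and the hinge loss (both are illustrative choices, not part of the original text):

```python
import numpy as np

def empirical_risk(f, X, y, loss):
    # (1/n) * sum of loss(y_k, f(X_k)), the quantity minimized by ERM.
    return np.mean([loss(yk, f(xk)) for xk, yk in zip(X, y)])

# Hinge loss and a made-up linear hypothesis, just to evaluate the risk.
hinge = lambda y, z: max(0.0, 1.0 - y * z)
f = lambda x: x @ np.array([0.5, -0.2]) - 0.1
X = np.array([[1.0, 2.0], [-1.0, 0.5], [2.0, -1.0]])
y = np.array([1, -1, 1])
print(empirical_risk(f, X, y, hinge))
```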
Regularization and stability
In order for the minimization problem to have a well-defined solution, we have to place constraints on the set $\mathcal{H}$ of hypotheses being considered. If $\mathcal{H}$ is a
normed space (as is the case for SVM), a particularly effective technique is to consider only those hypotheses $f$ for which $\lVert f \rVert_{\mathcal{H}} < k$. This is equivalent to imposing a ''regularization penalty'' $\mathcal{R}(f) = \lambda_k \lVert f \rVert_{\mathcal{H}}$, and solving the new optimization problem
: $\hat f = \mathrm{arg}\min_{f \in \mathcal{H}} \hat\varepsilon(f) + \mathcal{R}(f).$
This approach is called ''
Tikhonov regularization.''
More generally, $\mathcal{R}(f)$ can be some measure of the complexity of the hypothesis $f$, so that simpler hypotheses are preferred.
SVM and the hinge loss
Recall that the (soft-margin) SVM classifier $\hat{\mathbf{w}}, b: \mathbf{x} \mapsto \operatorname{sgn}(\hat{\mathbf{w}}^\mathsf{T}\mathbf{x} - b)$ is chosen to minimize the following expression:
: $\left[\frac{1}{n}\sum_{i=1}^n \max\left(0, 1 - y_i(\mathbf{w}^\mathsf{T}\mathbf{x}_i - b)\right)\right] + \lambda \lVert \mathbf{w} \rVert^2.$
In light of the above discussion, we see that the SVM technique is equivalent to empirical risk minimization with Tikhonov regularization, where in this case the loss function is the
hinge loss
: $\ell(y, z) = \max\left(0, 1 - yz\right).$
From this perspective, SVM is closely related to other fundamental
classification algorithms such as
regularized least-squares and
logistic regression. The difference between the three lies in the choice of loss function: regularized least-squares amounts to empirical risk minimization with the
square-loss, $\ell_{sq}(y, z) = (y - z)^2$; logistic regression employs the
log-loss,
: $\ell_{\log}(y, z) = \ln(1 + e^{-yz}).$
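The three losses side by side, as a short sketch (the vectorized forms below are the standard definitions, evaluated at a few margin values):

```python
import numpy as np

# The three losses discussed above; y is the true label (+1/-1),
# z the real-valued output, so y*z is the (signed) margin.
hinge = lambda y, z: np.maximum(0.0, 1.0 - y * z)      # SVM
square = lambda y, z: (y - z) ** 2                     # regularized least-squares
logloss = lambda y, z: np.log(1.0 + np.exp(-y * z))    # logistic regression

z = np.linspace(-2, 2, 5)
for name, loss in [("hinge", hinge), ("square", square), ("log", logloss)]:
    print(name, np.round(loss(1, z), 3))               # loss for a positive example
```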
Target functions
The difference between the hinge loss and these other loss functions is best stated in terms of ''target functions'': the function that minimizes expected risk for a given pair of random variables $X, y$.
In particular, let $y_x$ denote $y$ conditional on the event that $X = x$. In the classification setting, we have:
: $y_x = \begin{cases} 1 & \text{with probability } p_x, \\ -1 & \text{with probability } 1 - p_x. \end{cases}$
The optimal classifier is therefore:
: $f^*(x) = \begin{cases} 1 & \text{if } p_x \ge 1/2, \\ -1 & \text{otherwise}. \end{cases}$
For the square-loss, the target function is the conditional expectation function, $f_{sq}(x) = \mathbb{E}\left[y_x\right]$.