The structured support-vector machine is a machine learning algorithm that generalizes the support-vector machine (SVM) classifier. Whereas the SVM classifier supports binary classification, multiclass classification and regression, the structured SVM allows training of a classifier for general structured output labels.
As an example, a sample instance might be a natural language sentence, and the output label is an annotated parse tree. Training a classifier consists of showing it pairs of correct samples and their output labels. After training, the structured SVM model allows one to predict, for a new sample instance, the corresponding output label; that is, given a natural language sentence, the classifier can produce the most likely parse tree.
Training
For a set of \ell training instances (\boldsymbol{x}_n, y_n) \in \mathcal{X} \times \mathcal{Y}, n = 1, \dots, \ell, from a sample space \mathcal{X} and label space \mathcal{Y}, the structured SVM minimizes the following regularized risk function.

: \min_{\boldsymbol{w}} \; \|\boldsymbol{w}\|^2 + C \sum_{n=1}^{\ell} \max_{y \in \mathcal{Y}} \left( \Delta(y_n, y) + \langle \boldsymbol{w}, \Psi(\boldsymbol{x}_n, y) \rangle - \langle \boldsymbol{w}, \Psi(\boldsymbol{x}_n, y_n) \rangle \right)
The function is convex in \boldsymbol{w} because the maximum of a set of affine functions is convex. The function \Delta : \mathcal{Y} \times \mathcal{Y} \to \mathbb{R}_{+} measures a distance in label space and is an arbitrary function (not necessarily a metric) satisfying \Delta(y, z) \geq 0 and \Delta(y, y) = 0 for all y, z \in \mathcal{Y}. The function \Psi : \mathcal{X} \times \mathcal{Y} \to \mathbb{R}^{d} is a feature function, extracting some feature vector from a given sample and label. The design of this function depends very much on the application.
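To make these definitions concrete, the following sketch instantiates \Delta as the Hamming distance and \Psi as a simple per-position feature map for a toy sequence-labeling task, and evaluates the regularized risk by brute-force enumeration. The names and the feature design (N_LABELS, psi, delta, structured_hinge) are illustrative assumptions, not part of the formulation.

    import numpy as np
    from itertools import product

    N_LABELS = 2  # hypothetical tag-set size for this toy example

    def delta(y_true, y):
        """Hamming distance: satisfies Delta(y, z) >= 0 and Delta(y, y) = 0."""
        return sum(a != b for a, b in zip(y_true, y))

    def psi(x, y):
        """Joint feature map: add each position's feature vector x[t] into the
        block belonging to that position's label (one illustrative design)."""
        d = x.shape[1]
        feats = np.zeros(N_LABELS * d)
        for t, label in enumerate(y):
            feats[label * d:(label + 1) * d] += x[t]
        return feats

    def structured_hinge(w, x, y_true):
        """Inner maximum of the risk function, by brute-force enumeration:
        max_y Delta(y_true, y) + <w, psi(x, y)> - <w, psi(x, y_true)>."""
        score_true = w @ psi(x, y_true)
        best = 0.0  # y = y_true attains 0, so the maximum is never negative
        for y in product(range(N_LABELS), repeat=len(y_true)):
            best = max(best, delta(y_true, y) + w @ psi(x, y) - score_true)
        return best

    def regularized_risk(w, samples, labels, C=1.0):
        """The displayed objective: ||w||^2 + C * sum of per-sample maxima."""
        return w @ w + C * sum(structured_hinge(w, x, y)
                               for x, y in zip(samples, labels))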
Because the regularized risk function above is non-differentiable, it is often reformulated in terms of a quadratic program by introducing one slack variable \xi_n for each sample, each representing the value of the maximum. The standard structured SVM primal formulation is given as follows.

: \min_{\boldsymbol{w}, \boldsymbol{\xi}} \; \|\boldsymbol{w}\|^2 + C \sum_{n=1}^{\ell} \xi_n
: \text{s.t.} \quad \langle \boldsymbol{w}, \Psi(\boldsymbol{x}_n, y_n) \rangle - \langle \boldsymbol{w}, \Psi(\boldsymbol{x}_n, y) \rangle + \xi_n \geq \Delta(y_n, y), \qquad n = 1, \dots, \ell, \quad \forall y \in \mathcal{Y}
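When the label space is small enough to enumerate, this QP can be handed to a general-purpose solver directly. The sketch below does so with cvxpy (an assumed dependency), reusing the hypothetical psi and delta helpers above; constraint_sets[n] lists the labels whose margin constraint is imposed for sample n, so passing the full label space for every sample recovers the exact primal.

    import cvxpy as cp

    def solve_primal_qp(samples, labels, constraint_sets, d, C=1.0):
        """Solve the primal QP over an explicit set of margin constraints.
        Here d is the joint feature dimension, i.e. len(psi(x, y))."""
        w = cp.Variable(d)
        xi = cp.Variable(len(samples), nonneg=True)
        cons = []
        for n, (x, y_n) in enumerate(zip(samples, labels)):
            for y in constraint_sets[n]:
                cons.append(w @ psi(x, y_n) - w @ psi(x, y) + xi[n]
                            >= delta(y_n, y))
        cp.Problem(cp.Minimize(cp.sum_squares(w) + C * cp.sum(xi)), cons).solve()
        return w.value, xi.value

Declaring the slacks non-negative matches the formulation, since the constraint for y = y_n already forces \xi_n \geq \Delta(y_n, y_n) = 0 whenever y_n is included in the constraint set.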
Inference
At test time, only a sample \boldsymbol{x} \in \mathcal{X} is known, and a prediction function f : \mathcal{X} \to \mathcal{Y} maps it to a predicted label from the label space \mathcal{Y}. For structured SVMs, given the vector \boldsymbol{w} obtained from training, the prediction function is the following.

: f(\boldsymbol{x}) = \underset{y \in \mathcal{Y}}{\operatorname{argmax}} \; \langle \boldsymbol{w}, \Psi(\boldsymbol{x}, y) \rangle
Therefore, the maximizer over the label space is the predicted label. Solving for this maximizer is the so-called inference problem, and is similar to making a maximum a-posteriori (MAP) prediction in probabilistic models. Depending on the structure of the function \Psi, solving for the maximizer can be a hard problem.
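Continuing the toy example, prediction can again be done by brute-force enumeration, reusing the helpers and imports above. Because the hypothetical psi contains no interactions between positions, this particular argmax in fact decomposes position-by-position; a feature map with transition features would instead call for dynamic programming such as the Viterbi algorithm.

    def predict(w, x):
        """Inference f(x) = argmax_y <w, psi(x, y)>, by brute-force
        enumeration over all label sequences of length len(x)."""
        return max(product(range(N_LABELS), repeat=len(x)),
                   key=lambda y: w @ psi(x, y))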
Separation
The above quadratic program involves a very large, possibly infinite number of linear inequality constraints. In general, the number of inequalities is too large to be optimized over explicitly. Instead, the problem is solved by using delayed constraint generation, where only a finite and small subset of the constraints is used. Optimizing over a subset of the constraints enlarges the feasible set and will yield a solution that provides a lower bound on the objective. To test whether the solution (\boldsymbol{w}, \boldsymbol{\xi}) violates constraints of the complete set of inequalities, a separation problem needs to be solved. As the inequalities decompose over the samples, for each sample \boldsymbol{x}_n the following problem needs to be solved.

: y_n^{*} = \underset{y \in \mathcal{Y}}{\operatorname{argmax}} \left( \Delta(y_n, y) + \langle \boldsymbol{w}, \Psi(\boldsymbol{x}_n, y) \rangle - \langle \boldsymbol{w}, \Psi(\boldsymbol{x}_n, y_n) \rangle - \xi_n \right)
The right hand side objective to be maximized is composed of the constant -\langle \boldsymbol{w}, \Psi(\boldsymbol{x}_n, y_n) \rangle - \xi_n and a term dependent on the variable optimized over, namely \Delta(y_n, y) + \langle \boldsymbol{w}, \Psi(\boldsymbol{x}_n, y) \rangle. If the achieved right hand side objective is smaller than or equal to zero, no violated constraints for this sample exist. If it is strictly larger than zero, the most violated constraint with respect to this sample has been identified. The problem is enlarged by this constraint and re-solved. The process continues until no violated inequalities can be identified.
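A minimal sketch of this loop under the same toy assumptions, with the separation problem solved by enumerating the label space and the restricted QP re-solved by the hypothetical solve_primal_qp above:

    def cutting_plane_train(samples, labels, label_space, d,
                            C=1.0, tol=1e-4, max_rounds=100):
        """Delayed constraint generation: grow per-sample working sets with
        the most violated constraints and re-solve the restricted QP until
        no violated inequalities remain."""
        w, xi = np.zeros(d), np.zeros(len(samples))
        working = [[] for _ in samples]  # constraints collected per sample
        for _ in range(max_rounds):
            added = False
            for n, (x, y_n) in enumerate(zip(samples, labels)):
                # Separation problem for sample n (brute-force enumeration).
                y_star = max(label_space,
                             key=lambda y: delta(y_n, y) + w @ psi(x, y))
                violation = (delta(y_n, y_star) + w @ psi(x, y_star)
                             - w @ psi(x, y_n) - xi[n])
                if violation > tol:  # strictly positive: constraint violated
                    working[n].append(y_star)
                    added = True
            if not added:
                break  # no violated inequalities remain
            w, xi = solve_primal_qp(samples, labels, working, d, C)
        return w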
If the constants are dropped from the above problem, we obtain the following problem to be solved.
: y_n^{*} = \underset{y \in \mathcal{Y}}{\operatorname{argmax}} \left( \Delta(y_n, y) + \langle \boldsymbol{w}, \Psi(\boldsymbol{x}_n, y) \rangle \right)
This problem looks very similar to the inference problem. The only difference is the addition of the term \Delta(y_n, y). Most often, \Delta is chosen such that it has a natural decomposition in label space. In that case, the influence of \Delta can be encoded into the inference problem, and solving for the most violating constraint is equivalent to solving the inference problem.
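The toy example illustrates this: the Hamming distance decomposes over sequence positions, so the loss term folds into the per-position scores and the most violated constraint is found at the same cost as plain inference (an illustrative decomposition, assuming the psi and delta above).

    def most_violated_label(w, x, y_true):
        """Loss-augmented inference argmax_y Delta(y_true, y) + <w, psi(x, y)>.
        The Hamming loss and the per-position psi both decompose over
        positions, so each position is maximized independently."""
        d = x.shape[1]
        y_star = []
        for t, y_t in enumerate(y_true):
            scores = [x[t] @ w[k * d:(k + 1) * d] + (k != y_t)  # score + loss
                      for k in range(N_LABELS)]
            y_star.append(int(np.argmax(scores)))
        return tuple(y_star)

For this feature map, substituting most_violated_label for the enumeration inside cutting_plane_train removes the exponential separation step without changing the result.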
References
* Ioannis Tsochantaridis, Thorsten Joachims, Thomas Hofmann and Yasemin Altun (2005): Large Margin Methods for Structured and Interdependent Output Variables, JMLR, Vol. 6, pages 1453-1484.
* Thomas Finley and Thorsten Joachims (2008): Training Structural SVMs when Exact Inference is Intractable, ICML 2008.
* Sunita Sarawagi and Rahul Gupta (2008): Accurate Max-margin Training for Structured Output Spaces, ICML 2008.
* Gökhan BakIr, Ben Taskar, Thomas Hofmann, Bernhard Schölkopf, Alex Smola and S. V. N. Vishwanathan (2007): Predicting Structured Data, MIT Press.
* Vojtěch Franc and Bogdan Savchynskyy (2008): Discriminative Learning of Max-Sum Classifiers, Journal of Machine Learning Research, 9(Jan):67-104.
* Kevin Murphy (2012): Machine Learning: A Probabilistic Perspective, MIT Press.