Structured KNN
Structured k-nearest neighbours (SkNN) is a machine learning algorithm that generalizes the k-nearest neighbors (kNN) classifier. Whereas the kNN classifier supports binary classification, multiclass classification and regression, SkNN allows training a classifier for general structured output labels. As an example, a sample instance might be a natural language sentence, and the output label its annotated parse tree. Training the classifier consists of showing it pairs of correct samples and output labels. After training, the SkNN model can predict the corresponding output label for new sample instances; that is, given a natural language sentence, the classifier can produce the most likely parse tree.


Training

As a training set, SkNN accepts sequences of elements with defined class labels. The type of the elements does not matter; the only requirement is the existence of a metric function that defines a distance between every pair of elements. SkNN is based on the idea of building a graph in which each node represents a class label. There is an edge between a pair of nodes if and only if the training set contains two consecutive elements with the corresponding classes. The first step of SkNN training is therefore the construction of this graph from the training sequences. The graph contains two special nodes corresponding to the beginning and the end of a sentence: if a sequence starts with class `C`, an edge is created between the node `START` and the node `C`. As with regular kNN, the second part of SkNN training consists only of storing the elements of the training sequences, in a particular way: each element is stored in the node related to the class of the previous element in its sequence, and every first element is stored in the node `START`.


Inference

Labelling an input sequence in SkNN consists in finding the sequence of transitions in the graph, starting from the node `START`, that minimises the overall cost of the path. Each transition corresponds to a single element of the input sequence, and vice versa; the label of an element is determined as the label of the target node of its transition. The cost of a path is defined as the sum of the costs of all its transitions, and the cost of a transition from node `A` to node `B` is the distance from the current input element to the nearest element of class `B` stored in node `A`. The search for the optimal path can be performed using a modified Viterbi algorithm: unlike the original, the modified algorithm minimises the sum of distances instead of maximising the product of probabilities.
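The decoding step can be sketched as a Viterbi-style dynamic program that, at each input element, keeps the cheapest path ending in each class. This is an illustrative sketch under the same assumed model representation as above (a `nodes` dict of stored `(element, label)` pairs and an `edges` set); the function and parameter names are hypothetical.

```python
START, END = "START", "END"  # assumed names for the two special nodes

def label_sequence(elements, nodes, edges, dist):
    """Viterbi-style decoding that minimises the summed transition costs.

    The cost of moving from node A to node B on input element x is the
    distance from x to the nearest element of class B stored at node A.
    Returns the label sequence of the cheapest path.
    """
    labels = {b for (_, b) in edges if b != END}
    # best[label] = (cost, path) of the cheapest path ending in `label`
    best = {START: (0.0, [])}
    for x in elements:
        new_best = {}
        for b in labels:
            candidates = []
            for a, (cost, path) in best.items():
                stored = [e for (e, lab) in nodes.get(a, []) if lab == b]
                if not stored:
                    continue  # no transition a -> b observed in training
                step = min(dist(x, e) for e in stored)
                candidates.append((cost + step, path + [b]))
            if candidates:
                new_best[b] = min(candidates)
        best = new_best
    return min(best.values())[1]

# Toy model over 1-D points with two classes, built by hand for brevity:
nodes = {
    "START": [(1.0, "A"), (2.0, "A")],  # sequences start with class A
    "A": [(10.0, "B"), (11.0, "B")],    # class B follows class A
}
edges = {("START", "A"), ("A", "B"), ("B", "END")}
result = label_sequence([1.5, 10.5], nodes, edges, lambda x, y: abs(x - y))
```

With absolute difference as the metric, the input `[1.5, 10.5]` is labelled `["A", "B"]`: the first element is near the points stored in `START` (class `A`), and the second is near the points stored in node `A` (class `B`).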

