
In statistics, single-linkage clustering is one of several methods of hierarchical clustering. It is based on grouping clusters in bottom-up fashion (agglomerative clustering), at each step combining the two clusters that contain the closest pair of elements not yet belonging to the same cluster. A drawback of this method is that it tends to produce long thin clusters in which nearby elements of the same cluster have small distances, but elements at opposite ends of a cluster may be much farther from each other than two elements of other clusters. This may lead to difficulties in defining classes that could usefully subdivide the data.


Overview of agglomerative clustering methods

In the beginning of the agglomerative clustering process, each element is in a cluster of its own. The clusters are then sequentially combined into larger clusters, until all elements end up being in the same cluster. At each step, the two clusters separated by the shortest distance are combined. The function used to determine the distance between two clusters, known as the ''linkage function'', is what differentiates the agglomerative clustering methods. In single-linkage clustering, the distance between two clusters is determined by a single pair of elements: those two elements (one in each cluster) that are closest to each other. The shortest of these pairwise distances remaining at any step causes the two clusters whose elements are involved to be merged. The method is also known as ''nearest neighbour clustering''. The result of the clustering can be visualized as a dendrogram, which shows the sequence in which clusters were merged and the distance at which each merge took place.

Mathematically, the linkage function – the distance ''D''(''X'',''Y'') between clusters ''X'' and ''Y'' – is described by the expression
:D(X,Y)=\min_{x\in X,\, y\in Y} d(x,y),
where ''X'' and ''Y'' are any two sets of elements considered as clusters, and ''d''(''x'',''y'') denotes the distance between the two elements ''x'' and ''y''.
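As a minimal sketch of this definition in Python (the function name and the one-dimensional example data are illustrative, not part of the original text), the linkage function is simply a minimum over all cross-cluster pairs:

 def single_linkage_distance(X, Y, d):
     """Distance D(X, Y) between clusters X and Y under single linkage:
     the smallest element-level distance d(x, y) over all cross pairs."""
     return min(d(x, y) for x in X for y in Y)

 # Example: one-dimensional points with absolute difference as d.
 print(single_linkage_distance({1.0, 2.0}, {5.0, 7.5},
                               lambda x, y: abs(x - y)))  # prints 3.0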


Naive algorithm

The following algorithm is an agglomerative scheme that erases rows and columns in a proximity matrix as old clusters are merged into new ones. The N \times N proximity matrix D contains all distances d(i,j). The clusterings are assigned sequence numbers 0, 1, \ldots, n-1 and L(k) is the level of the k-th clustering. A cluster with sequence number ''m'' is denoted (''m''), and the proximity between clusters (r) and (s) is denoted d[(r),(s)]. The single-linkage algorithm is composed of the following steps:
# Begin with the disjoint clustering having level L(0) = 0 and sequence number m = 0.
# Find the most similar pair of clusters in the current clustering, say pair (r), (s), according to d[(r),(s)] = \min d[(i),(j)], where the minimum is over all pairs of clusters in the current clustering.
# Increment the sequence number: m = m + 1. Merge clusters (r) and (s) into a single cluster to form the next clustering m. Set the level of this clustering to L(m) = d[(r),(s)].
# Update the proximity matrix, D, by deleting the rows and columns corresponding to clusters (r) and (s) and adding a row and column corresponding to the newly formed cluster. The proximity between the new cluster, denoted (r,s), and an old cluster (k) is defined as '''d[(r,s),(k)] = \min\{d[(r),(k)], d[(s),(k)]\}'''.
# If all objects are in one cluster, stop. Else, go to step 2.
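The scheme above translates almost line for line into code. Below is a minimal Python sketch under some illustrative assumptions: the function name is invented, the proximity matrix is represented as a dictionary keyed by unordered pairs of clusters rather than an actual matrix, and the merge history is returned as a list of triples.

 from itertools import combinations

 def naive_single_linkage(items, dist):
     """Naive O(n^3) single-linkage clustering.  `items` is a sequence of
     item names and `dist[(i, j)]` is the distance between items i and j
     (one entry per unordered pair).  Returns (cluster, cluster, level)
     triples describing each merge."""
     # Step 1: begin with the disjoint clustering, one singleton per item.
     clusters = {frozenset({x}) for x in items}
     # Re-key the proximity matrix by unordered pairs of clusters.
     d = {frozenset({frozenset({i}), frozenset({j})}): v
          for (i, j), v in dist.items()}
     merges = []
     while len(clusters) > 1:
         # Step 2: find the most similar pair of clusters.
         r, s = min(combinations(clusters, 2),
                    key=lambda pair: d[frozenset(pair)])
         level = d[frozenset({r, s})]
         # Step 3: merge (r) and (s) and record the level of the merge.
         merges.append((set(r), set(s), level))
         clusters -= {r, s}
         rs = r | s
         # Step 4: the single-linkage update rule (the bold formula above):
         # d[(r,s),(k)] = min(d[(r),(k)], d[(s),(k)]).
         for k in clusters:
             d[frozenset({rs, k})] = min(d[frozenset({r, k})],
                                         d[frozenset({s, k})])
         clusters.add(rs)
     return merges

Finding the closest pair rescans all pairs of clusters on each of the n-1 iterations, which is where the O(n^3) running time of the naive method comes from.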


Working example

This working example is based on a JC69 genetic distance matrix computed from the 5S ribosomal RNA sequence alignment of five bacteria: ''Bacillus subtilis'' (a), ''Bacillus stearothermophilus'' (b), ''Lactobacillus viridescens'' (c), ''Acholeplasma modicum'' (d), and ''Micrococcus luteus'' (e).


First step

* '''First clustering''': Let us assume that we have five elements (a,b,c,d,e) and the following matrix D_1 of pairwise distances between them:

       a    b    c    d    e
  a    0   17   21   31   23
  b   17    0   30   34   21
  c   21   30    0   28   39
  d   31   34   28    0   43
  e   23   21   39   43    0

In this example, D_1(a,b)=17 is the lowest value of D_1, so we cluster elements a and b.
* '''First branch length estimation''': Let u denote the node to which a and b are now connected. Setting \delta(a,u)=\delta(b,u)=D_1(a,b)/2 ensures that elements a and b are equidistant from u. This corresponds to the expectation of the ultrametricity hypothesis. The branches joining a and b to u then have lengths \delta(a,u)=\delta(b,u)=17/2=8.5 (''see the final dendrogram'').
* '''First distance matrix update''': We then proceed to update the initial proximity matrix D_1 into a new proximity matrix D_2 (see below), reduced in size by one row and one column because of the clustering of a with b. The new distances are calculated by retaining the minimum distance between each element of the first cluster (a,b) and each of the remaining elements (a quick numeric check follows this list):
:D_2((a,b),c) = \min(D_1(a,c), D_1(b,c)) = \min(21, 30) = 21
:D_2((a,b),d) = \min(D_1(a,d), D_1(b,d)) = \min(31, 34) = 31
:D_2((a,b),e) = \min(D_1(a,e), D_1(b,e)) = \min(23, 21) = 21
The updated matrix D_2 is:

         (a,b)   c    d    e
  (a,b)    0    21   31   21
  c       21     0   28   39
  d       31    28    0   43
  e       21    39   43    0

The remaining values of D_2 are not affected by the update, as they correspond to distances between elements not involved in the first cluster.
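As a quick numeric check of this update, a few lines of Python using only the distances quoted above (the variable names are illustrative):

 # Entries of D1 that involve a or b.
 D1 = {('a', 'c'): 21, ('b', 'c'): 30,
       ('a', 'd'): 31, ('b', 'd'): 34,
       ('a', 'e'): 23, ('b', 'e'): 21}
 # Single-linkage update: keep the minimum distance to the merged pair.
 D2_ab = {x: min(D1[('a', x)], D1[('b', x)]) for x in 'cde'}
 print(D2_ab)  # {'c': 21, 'd': 31, 'e': 21}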


Second step

* '''Second clustering''': We now reiterate the three previous actions, starting from the new distance matrix D_2. Here, D_2((a,b),c)=21 and D_2((a,b),e)=21 are the lowest values of D_2, so we join cluster (a,b) with element c and with element e.
* '''Second branch length estimation''': Let v denote the node to which (a,b), c and e are now connected. Because of the ultrametricity constraint, the branches joining a or b to v, c to v, and e to v are equal and have the following total length:
:\delta(a,v)=\delta(b,v)=\delta(c,v)=\delta(e,v)=21/2=10.5
We deduce the missing branch length:
:\delta(u,v)=\delta(c,v)-\delta(a,u)=\delta(c,v)-\delta(b,u)=10.5-8.5=2
(''see the final dendrogram'')
* '''Second distance matrix update''': We then proceed to update the D_2 matrix into a new distance matrix D_3 (see below), reduced in size by two rows and two columns because of the clustering of (a,b) with c and with e (a quick numeric check follows this list):
:D_3(((a,b),c,e),d)=\min(D_2((a,b),d),D_2(c,d),D_2(e,d))=\min(31,28,43)=28
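The same kind of check for the second update and the deduced branch length (values as quoted above):

 # Single-linkage distance from the merged cluster ((a,b),c,e) to d.
 print(min(31, 28, 43))  # D3 = 28
 # Internal branch length implied by the ultrametric constraint.
 print(21 / 2 - 17 / 2)  # delta(u,v) = 2.0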


Final step

The final D_3 matrix is:

               ((a,b),c,e)    d
  ((a,b),c,e)       0        28
  d                28         0

So we join clusters ((a,b),c,e) and d. Let r denote the (root) node to which ((a,b),c,e) and d are now connected. The branches joining ((a,b),c,e) and d to r then have lengths:
:\delta(((a,b),c,e),r)=\delta(d,r)=28/2=14
We deduce the remaining branch length:
:\delta(v,r)=\delta(a,r)-\delta(a,v)=\delta(b,r)-\delta(b,v)=\delta(c,r)-\delta(c,v)=\delta(e,r)-\delta(e,v)=14-10.5=3.5


The single-linkage dendrogram

The dendrogram is now complete. It is ultrametric because all tips (a, b, c, e, and d) are equidistant from r:
:\delta(a,r)=\delta(b,r)=\delta(c,r)=\delta(e,r)=\delta(d,r)=14
The dendrogram is therefore rooted by r, its deepest node.
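The whole example can also be reproduced with an off-the-shelf implementation. The following sketch uses SciPy's hierarchical clustering routines on the D_1 matrix as reconstructed above (the variable names are illustrative):

 import numpy as np
 from scipy.cluster.hierarchy import linkage
 from scipy.spatial.distance import squareform

 # The D1 matrix of the worked example, in the order a, b, c, d, e.
 D1 = np.array([[ 0, 17, 21, 31, 23],
                [17,  0, 30, 34, 21],
                [21, 30,  0, 28, 39],
                [31, 34, 28,  0, 43],
                [23, 21, 39, 43,  0]], dtype=float)

 # squareform() condenses the symmetric matrix into the vector form that
 # linkage() expects; method='single' selects single linkage.
 Z = linkage(squareform(D1), method='single')
 print(Z[:, 2])  # merge levels: [17. 21. 21. 28.]

Halving each merge level recovers the branch depths derived above: 8.5, 10.5, 10.5, and 14.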


Other linkages

The naive algorithm for single-linkage clustering is essentially the same as Kruskal's algorithm for minimum spanning trees. However, in single-linkage clustering, the order in which clusters are formed is important, while for minimum spanning trees what matters is only the set of pairs of points whose distances are chosen by the algorithm. Alternative linkage schemes include complete-linkage clustering, average-linkage clustering (UPGMA and WPGMA), and Ward's method. In the naive algorithm for agglomerative clustering, implementing a different linkage scheme may be accomplished simply by using a different formula to calculate inter-cluster distances: the formula that should be adjusted has been highlighted using bold text in the algorithm description above. However, more efficient algorithms such as the one described below do not generalize to all linkage schemes in the same way.
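To make this concrete, here is a sketch of how only the update formula differs between the common schemes (the dictionary name is illustrative; drk and dsk stand for d[(r),(k)] and d[(s),(k)], and nr, ns for the sizes of clusters r and s):

 # Inter-cluster distance update d[(r,s),(k)] under several linkages.
 LINKAGE_UPDATES = {
     'single':   lambda drk, dsk, nr, ns: min(drk, dsk),
     'complete': lambda drk, dsk, nr, ns: max(drk, dsk),
     'wpgma':    lambda drk, dsk, nr, ns: (drk + dsk) / 2,
     'upgma':    lambda drk, dsk, nr, ns: (nr * drk + ns * dsk) / (nr + ns),
 }
 # Ward's method fits the same pattern via the Lance-Williams formula,
 # but its update also depends on the size of the old cluster (k).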


Faster algorithms

The naive algorithm for single-linkage clustering is easy to understand but slow, with time complexity O(n^3). In 1973, R. Sibson proposed an algorithm with time complexity O(n^2) and space complexity O(n) (both optimal) known as SLINK. The SLINK algorithm represents a clustering on a set of n numbered items by two functions. These functions are both determined by finding the smallest cluster C that contains both item i and at least one larger-numbered item. The first function, \pi, maps item i to the largest-numbered item in cluster C. The second function, \lambda, maps item i to the distance associated with the creation of cluster C. Storing these functions in two arrays that map each item number to its function value takes space O(n), and this information is sufficient to determine the clustering itself. As Sibson shows, when a new item is added to the set of items, the updated functions representing the new single-linkage clustering for the augmented set, represented in the same way, can be constructed from the old clustering in time O(n). The SLINK algorithm then loops over the items, one by one, adding them to the representation of the clustering.

An alternative algorithm, running in the same optimal time and space bounds, is based on the equivalence between the naive algorithm and Kruskal's algorithm for minimum spanning trees. Instead of using Kruskal's algorithm, one can use Prim's algorithm, in a variation without binary heaps that takes time O(n^2) and space O(n) to construct the minimum spanning tree (but not the clustering) of the given items and distances. Then, applying Kruskal's algorithm to the sparse graph formed by the edges of the minimum spanning tree produces the clustering itself in an additional time O(n\log n) and space O(n).
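As a sketch of this second approach (assuming the input is a full symmetric distance matrix given as a list of lists; the function name is invented), the heap-free variation of Prim's algorithm looks like this:

 import math

 def single_linkage_merge_levels(D):
     """Merge levels of single-linkage clustering via a Prim-style minimum
     spanning tree, in O(n^2) time and O(n) extra space."""
     n = len(D)
     in_tree = [False] * n
     best = [math.inf] * n  # cheapest known edge from the tree to each vertex
     best[0] = 0.0          # grow the tree starting from vertex 0
     levels = []
     for step in range(n):
         # Pick the non-tree vertex closest to the current tree (no heap).
         v = min((u for u in range(n) if not in_tree[u]),
                 key=lambda u: best[u])
         in_tree[v] = True
         if step > 0:
             levels.append(best[v])  # weight of the MST edge that added v
         # Relax candidate edges leaving the newly added vertex.
         for u in range(n):
             if not in_tree[u] and D[v][u] < best[u]:
                 best[u] = D[v][u]
     return sorted(levels)

On the worked example's D_1 matrix this returns [17, 21, 21, 28], the merge levels of the dendrogram; a union-find pass over the MST edges in this sorted order (Kruskal's algorithm) then recovers which clusters merge at each level.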


See also

* Cluster analysis
* Complete-linkage clustering
* Hierarchical clustering
* Molecular clock
* Neighbor joining
* UPGMA
* WPGMA



