Wagner–Fischer Algorithm

In computer science, the Wagner–Fischer algorithm is a dynamic programming algorithm that computes the edit distance between two strings of characters.


History

The Wagner–Fischer algorithm has a history of multiple invention. Navarro lists the following inventors of it, with date of publication, and acknowledges that the list is incomplete:
* Vintsyuk, 1968
* Needleman and Wunsch, 1970
* Sankoff, 1972
* Sellers, 1974
* Wagner and Fischer, 1974
* Lowrance and Wagner, 1975


Calculating distance

The Wagner–Fischer algorithm computes edit distance based on the observation that if we reserve a matrix to hold the edit distances between all prefixes of the first string and all prefixes of the second, then we can compute the values in the matrix by flood filling the matrix, and thus find the distance between the two full strings as the last value computed.

A straightforward implementation, as pseudocode for a function ''Distance'' that takes two strings, ''s'' of length ''m'', and ''t'' of length ''n'', and returns the Levenshtein distance between them, looks as follows. The input strings are one-indexed, while the matrix ''d'' is zero-indexed, and [i..k] is a closed range.

 function Distance(char s[1..m], char t[1..n]):
     // for all i and j, d[i, j] will hold the distance between
     // the first i characters of s and the first j characters of t
     // note that d has (m+1)*(n+1) values
     declare int d[0..m, 0..n]
     set each element in d to zero

     // source prefixes can be transformed into empty string by
     // dropping all characters
     for i from 1 to m:
         d[i, 0] := i

     // target prefixes can be reached from empty source prefix
     // by inserting every character
     for j from 1 to n:
         d[0, j] := j

     for j from 1 to n:
         for i from 1 to m:
             if s[i] = t[j]:
                 substitutionCost := 0
             else:
                 substitutionCost := 1
             d[i, j] := minimum(d[i-1, j] + 1,                   // deletion
                                d[i, j-1] + 1,                   // insertion
                                d[i-1, j-1] + substitutionCost)  // substitution

     return d[m, n]
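The pseudocode translates directly into a runnable sketch. The following Python version (function and variable names are illustrative, not from the original) builds the full (m+1) × (n+1) matrix:

```python
def distance(s: str, t: str) -> int:
    m, n = len(s), len(t)
    # d[i][j] will hold the distance between the first i characters
    # of s and the first j characters of t; (m+1)*(n+1) values.
    d = [[0] * (n + 1) for _ in range(m + 1)]

    # Source prefixes can be transformed into the empty string
    # by dropping all characters.
    for i in range(1, m + 1):
        d[i][0] = i

    # Target prefixes can be reached from the empty source prefix
    # by inserting every character.
    for j in range(1, n + 1):
        d[0][j] = j

    for j in range(1, n + 1):
        for i in range(1, m + 1):
            # Python strings are zero-indexed, hence the -1 offsets.
            substitution_cost = 0 if s[i - 1] == t[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,                      # deletion
                          d[i][j - 1] + 1,                      # insertion
                          d[i - 1][j - 1] + substitution_cost)  # substitution

    return d[m][n]
```

For example, `distance("kitten", "sitting")` returns 3: two substitutions and one insertion.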
The invariant maintained throughout the algorithm is that we can transform the initial segment s[1..i] into t[1..j] using a minimum of d[i, j] operations. At the end, the bottom-right element of the array contains the answer.


Proof of correctness

As mentioned earlier, the invariant is that we can transform the initial segment s[1..i] into t[1..j] using a minimum of d[i, j] operations. This invariant holds since:
* It is initially true on row and column 0 because s[1..i] can be transformed into the empty string t[1..0] by simply dropping all i characters. Similarly, we can transform s[1..0] to t[1..j] by simply adding all j characters.
* If s[i] = t[j], and we can transform s[1..i-1] to t[1..j-1] in k operations, then we can do the same to s[1..i] and just leave the last character alone, giving k operations.
* Otherwise, the distance is the minimum of the three possible ways to do the transformation:
** If we can transform s[1..i] to t[1..j-1] in k operations, then we can simply add t[j] afterwards to get t[1..j] in k+1 operations (insertion).
** If we can transform s[1..i-1] to t[1..j] in k operations, then we can remove s[i] and then do the same transformation, for a total of k+1 operations (deletion).
** If we can transform s[1..i-1] to t[1..j-1] in k operations, then we can do the same to s[1..i], and exchange the original s[i] for t[j] afterwards, for a total of k+1 operations (substitution).
* The number of operations required to transform s[1..m] into t[1..n] is of course the number required to transform all of s into all of t, and so d[m, n] holds our result.
This proof fails to validate that the number placed in d[i, j] is in fact minimal; this is more difficult to show, and involves an argument by contradiction in which we assume d[i, j] is smaller than the minimum of the three, and use this to show one of the three is not minimal.


Possible modifications

Possible modifications to this algorithm include:
* We can adapt the algorithm to use less space, ''O''(''m'') instead of ''O''(''mn''), since it only requires that the previous row and current row be stored at any one time.
* We can store the number of insertions, deletions, and substitutions separately, or even the positions at which they occur, which is always j.
* We can normalize the distance to the interval [0, 1].
* If we are only interested in the distance if it is smaller than a threshold ''k'', then it suffices to compute a diagonal stripe of width ''2k+1'' in the matrix. In this way, the algorithm can be run in ''O''(''kl'') time, where ''l'' is the length of the shortest string.
* We can give different penalty costs to insertion, deletion, and substitution. We can also give penalty costs that depend on which characters are inserted, deleted, or substituted.
* This algorithm parallelizes poorly, due to a large number of data dependencies. However, all the cost values can be computed in parallel, and the algorithm can be adapted to perform the minimum function in phases to eliminate dependencies.
* By examining diagonals instead of rows, and by using lazy evaluation, we can find the Levenshtein distance in ''O''(''m'' (1 + ''d'')) time (where ''d'' is the Levenshtein distance), which is much faster than the regular dynamic programming algorithm if the distance is small.
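The first modification above, keeping only the previous and current rows, can be sketched in Python as follows (the function name is illustrative):

```python
def distance_two_rows(s: str, t: str) -> int:
    """Levenshtein distance in O(m) space: only two rows are stored."""
    m, n = len(s), len(t)
    # Row 0: distance from each prefix of s to the empty prefix of t.
    previous = list(range(m + 1))

    for j in range(1, n + 1):
        current = [j] + [0] * m  # d[0, j] = j
        for i in range(1, m + 1):
            substitution_cost = 0 if s[i - 1] == t[j - 1] else 1
            current[i] = min(previous[i] + 1,                      # deletion
                             current[i - 1] + 1,                   # insertion
                             previous[i - 1] + substitution_cost)  # substitution
        previous = current

    return previous[m]
```

The result is identical to the full-matrix algorithm, but intermediate rows are discarded, so the matrix itself (and hence the edit script) can no longer be recovered afterwards.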


Sellers' variant for string search

By initializing the first row of the matrix with zeros, we obtain a variant of the Wagner–Fischer algorithm that can be used for fuzzy string search of a string in a text. This modification gives the end-position of matching substrings of the text. To determine the start-position of the matching substrings, the number of insertions and deletions can be stored separately and used to compute the start-position from the end-position. (Bruno Woltzenlogel Paleo, "An approximate gazetteer for GATE based on Levenshtein distance", Student Section of the European Summer School in Logic, Language and Information (ESSLLI), 2007.) The resulting algorithm is by no means efficient, but was at the time of its publication (1980) one of the first algorithms that performed approximate search.
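A minimal Python sketch of this variant might look like the following (the helper name find_approx and the threshold parameter k are assumptions for illustration). The first row is initialized to zeros, so the final row holds, for each text position j, the minimum edit distance between the pattern and some substring of the text ending at j; positions where that value is at most k are reported as match end-positions (1-indexed):

```python
def find_approx(pattern: str, text: str, k: int) -> list:
    """End-positions in text where pattern matches with at most k errors."""
    m, n = len(pattern), len(text)
    # Row 0 is all zeros: the empty pattern matches at every text
    # position at cost 0, so matches may start anywhere.
    previous = [0] * (n + 1)

    for i in range(1, m + 1):
        current = [i] + [0] * n  # first column: delete all i pattern chars
        for j in range(1, n + 1):
            cost = 0 if pattern[i - 1] == text[j - 1] else 1
            current[j] = min(previous[j] + 1,       # deletion
                             current[j - 1] + 1,    # insertion
                             previous[j - 1] + cost)  # substitution / match
        previous = current

    # The last row gives the best match ending at each position j.
    return [j for j in range(1, n + 1) if previous[j] <= k]
```

For example, `find_approx("abc", "xabcx", 0)` returns `[4]`, the end-position of the exact occurrence of "abc"; raising k to 1 also admits the substrings ending one character earlier or later.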

