Sequential decoding
Recognised by John Wozencraft, sequential decoding is a limited-memory technique for decoding tree codes. It is mainly used as an approximate decoding algorithm for long constraint-length convolutional codes. This approach may not be as accurate as the Viterbi algorithm but can save a substantial amount of computer memory. It was used to decode a convolutional code in the 1968 Pioneer 9 mission.

Sequential decoding explores the tree code in such a way as to minimise the computational cost and the memory required to store the tree. There is a range of sequential decoding approaches based on the choice of metric and algorithm.

Metrics include:
*Fano metric
*Zigangirov metric
*Gallager metric

Algorithms include:
*Stack algorithm
*Fano algorithm
*Creeper algorithm


Fano metric

Given a partially explored tree (represented by the set of nodes at the limit of exploration), we would like to know the best node from which to explore further. The Fano metric (named after Robert Fano) allows one to calculate which is the best node to explore further. This metric is optimal given no other constraints (e.g. memory).

For a binary symmetric channel (with error probability p) the Fano metric can be derived via Bayes' theorem. We are interested in following the most likely path P_i given an explored state of the tree X and a received sequence \mathbf{r}. Using the language of probability and Bayes' theorem we want to choose the maximum over i of:

:\Pr(P_i|X, \mathbf{r}) \propto \Pr(\mathbf{r}|P_i, X)\Pr(P_i|X)

We now introduce the following notation:
*N to represent the maximum length of transmission in branches
*b to represent the number of bits on a branch of the code (the denominator of the code rate, R)
*d_i to represent the number of bit errors on path P_i (the Hamming distance between the branch labels and the received sequence)
*n_i to be the length of P_i in branches.

We express the likelihood \Pr(\mathbf{r}|P_i, X) as p^{d_i} (1-p)^{n_i b - d_i} 2^{-(N - n_i)b} (by using the binary symmetric channel likelihood for the first n_i b bits followed by a uniform prior over the remaining bits).

We express the prior \Pr(P_i|X) in terms of the number of branch choices one has made, n_i, and the number of branches from each node, 2^{Rb}. Therefore:

:\begin{align} \Pr(P_i|X, \mathbf{r}) &\propto p^{d_i} (1-p)^{n_i b - d_i} 2^{-(N-n_i)b} 2^{-n_i R b} \\ &\propto p^{d_i} (1-p)^{n_i b - d_i} 2^{n_i b} 2^{-n_i R b} \end{align}

We can equivalently maximise the log of this probability, i.e.

:\begin{align} &d_i \log_2 p + (n_i b - d_i) \log_2 (1-p) + n_i b - n_i R b \\ ={} &d_i(\log_2 p + 1 - R) + (n_i b - d_i)(\log_2 (1-p) + 1 - R) \end{align}

This last expression is the Fano metric. The important point to see is that we have two terms here: one based on the number of wrong bits and one based on the number of right bits. We can therefore update the Fano metric simply by adding \log_2 p + 1 - R for each non-matching bit and \log_2 (1-p) + 1 - R for each matching bit.
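The per-bit update of the Fano metric can be sketched in Python. This is a minimal illustration (the function name and example values are ours, not from the article), assuming a binary symmetric channel with crossover probability p and a code of rate R:

```python
from math import log2

def fano_branch_metric(received_bits, branch_bits, p, R):
    """Fano metric contribution of one branch: add log2(p) + 1 - R for
    each non-matching bit and log2(1-p) + 1 - R for each matching bit."""
    mismatch = log2(p) + 1 - R       # penalty per bit error
    match = log2(1 - p) + 1 - R      # reward per correct bit
    return sum(mismatch if r != c else match
               for r, c in zip(received_bits, branch_bits))

# Rate-1/2 code (R = 0.5) over a BSC with p = 0.1: matching bits raise
# the metric slightly, while each mismatch lowers it sharply.
good = fano_branch_metric([0, 1], [0, 1], p=0.1, R=0.5)
bad = fano_branch_metric([0, 1], [1, 1], p=0.1, R=0.5)
```

Because the reward per correct bit is small and the penalty per error is large, paths that accumulate errors quickly fall behind paths that fit the received sequence.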


Computational cutoff rate

For sequential decoding to be a good choice of decoding algorithm, the number of states explored needs to remain small (otherwise an algorithm which deliberately explores all states, e.g. the Viterbi algorithm, may be more suitable). For a particular noise level there is a maximum coding rate R_0, called the computational cutoff rate, below which there is a finite backtracking limit. For the binary symmetric channel:

:R_0 = 1 - \log_2\left(1 + 2\sqrt{p(1-p)}\right)
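The cutoff rate formula is easy to evaluate; a short Python sketch (our illustration, with hypothetical function and variable names):

```python
from math import log2, sqrt

def cutoff_rate_bsc(p):
    """Computational cutoff rate R0 = 1 - log2(1 + 2*sqrt(p*(1-p)))
    of a binary symmetric channel with crossover probability p."""
    return 1 - log2(1 + 2 * sqrt(p * (1 - p)))

# R0 shrinks as the channel gets noisier; sequential decoding is only
# practical for code rates below R0.
r0_clean = cutoff_rate_bsc(0.01)   # fairly clean channel
r0_noisy = cutoff_rate_bsc(0.1)    # noisier channel
```

At p = 0.5 the channel carries no information and R_0 drops to zero, consistent with the formula.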


Algorithms


Stack algorithm

The simplest algorithm to describe is the "stack algorithm", in which the best N paths found so far are stored. Sequential decoding may introduce an additional error above Viterbi decoding when the correct path has N or more higher-scoring paths above it; at this point the best path drops off the stack and is no longer considered.
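A minimal Python sketch of the stack algorithm (our illustration, not code from the article): the best paths found so far are kept in a priority queue ordered by Fano metric, and the most promising path is repeatedly extended. The toy encoder and all parameter values are invented for the example.

```python
import heapq
from math import log2

def stack_decode(received, branch_output, depth, p=0.1, R=0.5, stack_size=64):
    """Best-first search of the code tree using the Fano metric.
    `branch_output(path, bit)` gives the encoder output bits when the
    input path (a tuple of bits) is extended by `bit`; `received` is a
    list of per-branch received bit tuples."""
    mismatch = log2(p) + 1 - R
    match = log2(1 - p) + 1 - R
    # heapq is a min-heap, so metrics are stored negated.
    stack = [(0.0, ())]
    while stack:
        neg_metric, path = heapq.heappop(stack)
        if len(path) == depth:
            return list(path)            # best path reached a leaf
        for bit in (0, 1):
            out = branch_output(path, bit)
            delta = sum(match if r == c else mismatch
                        for r, c in zip(received[len(path)], out))
            heapq.heappush(stack, (neg_metric - delta, path + (bit,)))
        # keep only the best `stack_size` paths
        if len(stack) > stack_size:
            stack = heapq.nsmallest(stack_size, stack)
            heapq.heapify(stack)
    return None

# Toy rate-1/2 "encoder": each branch outputs (bit, bit XOR previous bit).
def toy_encoder(path, bit):
    prev = path[-1] if path else 0
    return (bit, bit ^ prev)

# Encode message 1,0,1 and decode the noiseless received sequence.
msg = [1, 0, 1]
received = []
path = ()
for b in msg:
    received.append(toy_encoder(path, b))
    path = path + (b,)
decoded = stack_decode(received, toy_encoder, depth=3)
```

Pruning the queue back to `stack_size` entries is what can discard the correct path when too many wrong paths temporarily score higher, matching the failure mode described above.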


Fano algorithm

The famous Fano algorithm (named after Robert Fano) has a very low memory requirement and hence is suited to hardware implementations. This algorithm explores backwards and forwards from a single point on the tree.

# The Fano algorithm is a sequential decoding algorithm that does not require a stack.
# The Fano algorithm can only operate over a code tree because it cannot examine path merging.
# At each decoding stage, the Fano algorithm retains the information regarding three paths: the current path, its immediate predecessor path, and one of its successor paths.
# Based on this information, the Fano algorithm can move from the current path to either its immediate predecessor path or the selected successor path; hence, no stack is required for queuing all examined paths.
# The movement of the Fano algorithm is guided by a dynamic threshold T that is an integer multiple of a fixed step size Δ.
# Only a path whose path metric is no less than T can be visited next. According to the algorithm, the process of codeword search continues to move forward along a code path as long as the Fano metric along the code path remains non-decreasing.
# Once all the successor path metrics are smaller than T, the algorithm moves backward to the predecessor path if the predecessor path metric beats T; thereafter, threshold examination is performed on another successor path of this revisited predecessor.
# In case the predecessor path metric is also less than T, the threshold T is lowered by one step so that the algorithm is not trapped on the current path.
# For the Fano algorithm, if a path is revisited, the presently examined dynamic threshold is always lower than the dynamic threshold at the previous visit, guaranteeing that looping does not occur and that the algorithm ultimately reaches a terminal node of the code tree and stops.
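The steps above can be sketched in Python. This is a simplified illustration of the threshold mechanics under our own toy encoder and parameter choices, not a reference implementation:

```python
from math import log2

def fano_decode(received, branch_output, depth, p=0.1, R=0.5, delta=1.0):
    """Simplified Fano algorithm: move forward while the path metric
    stays at or above the dynamic threshold T; otherwise back up to the
    predecessor, or lower T by one step of size delta when stuck."""
    mismatch = log2(p) + 1 - R
    match = log2(1 - p) + 1 - R

    def successors(path):
        # branch-metric increments for both input bits, best first
        opts = []
        for bit in (0, 1):
            out = branch_output(tuple(path), bit)
            d = sum(match if r == c else mismatch
                    for r, c in zip(received[len(path)], out))
            opts.append((d, bit))
        opts.sort(reverse=True)
        return opts

    path, ranks = [], []        # chosen bits, successor rank per level
    metrics = [0.0]             # node metric at each depth
    T, rank = 0.0, 0            # dynamic threshold, next rank to try
    while True:
        moved = False
        if len(path) < depth and rank < 2:
            d, bit = successors(path)[rank]
            if metrics[-1] + d >= T:            # forward move allowed
                path.append(bit)
                ranks.append(rank)
                metrics.append(metrics[-1] + d)
                if len(path) == depth:
                    return path
                # tighten T only on a first visit to this node
                if metrics[-2] < T + delta:
                    while metrics[-1] >= T + delta:
                        T += delta
                rank = 0
                moved = True
        if not moved:
            if path and metrics[-2] >= T:       # move back
                metrics.pop()
                path.pop()
                rank = ranks.pop() + 1          # try next-best successor
            else:
                T -= delta                      # lower the threshold
                rank = 0

# Toy rate-1/2 "encoder": each branch outputs (bit, bit XOR previous bit).
def toy_encoder(path, bit):
    prev = path[-1] if path else 0
    return (bit, bit ^ prev)

# Noiseless reception of message 1,0,1 decodes with no backtracking.
received = [(1, 1), (0, 1), (1, 1)]
decoded = fano_decode(received, toy_encoder, depth=3)
```

Note how only the current path, its metric history, and the per-level successor ranks are stored: memory grows linearly with the path length rather than with the number of paths examined, which is what makes the algorithm hardware-friendly.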


References

*John Wozencraft and B. Reiffen, ''Sequential Decoding'', MIT Press, 1961.
*Rolf Johannesson and Kamil Sh. Zigangirov, ''Fundamentals of Convolutional Coding'' (chapter 6), IEEE Press, 1999.


External links


*Correction trees - a simulator of the correction process using a priority queue to choose the maximum-metric node (called the weight)