Decoding methods

In coding theory, decoding is the process of translating received messages into codewords of a given code. There have been many common methods of mapping messages to codewords. These are often used to recover messages sent over a noisy channel, such as a binary symmetric channel.


Notation

C \subset \mathbb{F}_2^n is considered a binary code with the length n; x, y shall be elements of \mathbb{F}_2^n; and d(x,y) is the distance between those elements.
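
For concreteness, the sections below are accompanied by small Python sketches. A minimal rendering of this notation, representing elements of \mathbb{F}_2^n as bit tuples and taking d to be the Hamming distance used throughout, might look as follows (the names C and d are illustrative, not from any library):

```python
# Elements of F_2^n as bit tuples; a code C as a set of such tuples.
C = {(0, 0, 0), (1, 1, 1)}  # a binary code of length n = 3

def d(x, y):
    """Hamming distance between x and y in F_2^n."""
    return sum(xi != yi for xi, yi in zip(x, y))

print(d((0, 1, 0), (1, 1, 1)))  # -> 2
```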


Ideal observer decoding

One may be given the message x \in \mathbb{F}_2^n; ideal observer decoding then generates the codeword y \in C that maximizes
:\mathbb{P}(y \mbox{ sent} \mid x \mbox{ received}),
that is, the decoder chooses the codeword y that is most likely to have been sent, given that x was received.
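
As a brute-force sketch (assuming a binary symmetric channel with crossover probability p and a known prior over codewords; the prior, p, and all names are illustrative assumptions, not from the article):

```python
def prob_received_given_sent(x, y, p):
    """P(x received | y sent) on a binary symmetric channel with
    crossover probability p: each bit flips independently with prob. p."""
    dist = sum(xi != yi for xi, yi in zip(x, y))
    return (1 - p) ** (len(x) - dist) * p ** dist

def ideal_observer_decode(x, prior, p=0.1):
    """Pick the codeword y maximizing P(y sent | x received); by Bayes'
    theorem it suffices to maximize P(x received | y sent) * P(y sent)."""
    return max(prior, key=lambda y: prob_received_given_sent(x, y, p) * prior[y])

# A non-uniform prior over the length-3 repetition code.
prior = {(0, 0, 0): 0.95, (1, 1, 1): 0.05}
print(ideal_observer_decode((0, 1, 1), prior))
# -> (0, 0, 0): the strong prior outweighs the closer codeword (1, 1, 1)
```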


Decoding conventions

A received message may not correspond to a unique most-likely codeword: there may be more than one codeword with an equal likelihood of mutating into the received message. In such a case, the sender and receiver(s) must agree ahead of time on a decoding convention. Popular conventions include:
:# Request that the codeword be resent (automatic repeat-request).
:# Choose any random codeword from the set of most likely codewords.
:# If another code follows, mark the ambiguous bits of the codeword as erasures and hope that the outer code disambiguates them.


Maximum likelihood decoding

Given a received vector x \in \mathbb{F}_2^n, maximum likelihood decoding picks a codeword y \in C that maximizes
:\mathbb{P}(x \mbox{ received} \mid y \mbox{ sent}),
that is, the codeword y that maximizes the probability that x was received, given that y was sent. If all codewords are equally likely to be sent then this scheme is equivalent to ideal observer decoding. In fact, by Bayes' theorem,
:\begin{align} \mathbb{P}(x \mbox{ received} \mid y \mbox{ sent}) & = \frac{\mathbb{P}(x \mbox{ received}, y \mbox{ sent})}{\mathbb{P}(y \mbox{ sent})} \\ & = \mathbb{P}(y \mbox{ sent} \mid x \mbox{ received}) \cdot \frac{\mathbb{P}(x \mbox{ received})}{\mathbb{P}(y \mbox{ sent})}. \end{align}
Here \mathbb{P}(x \mbox{ received}) is fixed, because x is the word that was actually received, and \mathbb{P}(y \mbox{ sent}) is constant, because all codewords are equally likely to be sent. Therefore, \mathbb{P}(x \mbox{ received} \mid y \mbox{ sent}) is maximised as a function of the variable y precisely when \mathbb{P}(y \mbox{ sent} \mid x \mbox{ received}) is maximised, and the claim follows.

As with ideal observer decoding, a convention must be agreed to for non-unique decoding. The maximum likelihood decoding problem can also be modeled as an integer programming problem. The maximum likelihood decoding algorithm is an instance of the "marginalize a product function" problem which is solved by applying the generalized distributive law.
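
As an illustrative sketch (assuming a binary symmetric channel with crossover probability p; the function names and the toy repetition code are assumptions for this example), brute-force maximum likelihood decoding simply scans all codewords:

```python
def likelihood(x, y, p):
    """P(x received | y sent) on a binary symmetric channel."""
    dist = sum(xi != yi for xi, yi in zip(x, y))
    return (1 - p) ** (len(x) - dist) * p ** dist

def ml_decode(x, code, p=0.1):
    """Brute-force maximum likelihood decoding: return the codeword y
    maximizing P(x received | y sent). Exponential in the code size,
    so practical codes use structured decoders instead."""
    return max(code, key=lambda y: likelihood(x, y, p))

code = [(0, 0, 0), (1, 1, 1)]      # length-3 repetition code
print(ml_decode((0, 1, 0), code))  # -> (0, 0, 0)
```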


Minimum distance decoding

Given a received vector x \in \mathbb{F}_2^n, minimum distance decoding picks a codeword y \in C to minimise the Hamming distance:
:d(x,y) = \#\{i : x_i \neq y_i\},
i.e. choose the codeword y that is as close as possible to x.

Note that if the probability of error on a discrete memoryless channel p is strictly less than one half, then ''minimum distance decoding'' is equivalent to ''maximum likelihood decoding'', since if
:d(x,y) = d,
then:
:\begin{align} \mathbb{P}(y \mbox{ received} \mid x \mbox{ sent}) & = (1-p)^{n-d} \cdot p^d \\ & = (1-p)^n \cdot \left( \frac{p}{1-p} \right)^d \end{align}
which (since ''p'' is less than one half) is maximised by minimising ''d''.

Minimum distance decoding is also known as ''nearest neighbour decoding''. It can be assisted or automated by using a standard array. Minimum distance decoding is a reasonable decoding method when the following conditions are met:
:# The probability p that an error occurs is independent of the position of the symbol.
:# Errors are independent events: an error at one position in the message does not affect other positions.

These assumptions may be reasonable for transmissions over a binary symmetric channel. They may be unreasonable for other media, such as a DVD, where a single scratch on the disk can cause an error in many neighbouring symbols or codewords. As with other decoding methods, a convention must be agreed to for non-unique decoding.
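
A minimal nearest-neighbour decoding sketch (the toy code and the names are illustrative; for p < 1/2 this returns the same answer as the maximum likelihood sketch above):

```python
def hamming_distance(x, y):
    """Number of positions at which x and y differ."""
    return sum(xi != yi for xi, yi in zip(x, y))

def min_distance_decode(x, code):
    """Nearest-neighbour decoding: return a codeword closest to x in
    Hamming distance; ties resolve to whichever min() sees first,
    standing in for an agreed decoding convention."""
    return min(code, key=lambda y: hamming_distance(x, y))

# An even-weight code of length 3; (1, 1, 1) is at distance 1 from
# three codewords, so the tie-breaking convention decides.
code = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
print(min_distance_decode((1, 1, 1), code))  # -> (0, 1, 1)
```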


Syndrome decoding

Syndrome decoding is a highly efficient method of decoding a linear code over a ''noisy channel'', i.e. one on which errors are made. In essence, syndrome decoding is ''minimum distance decoding'' using a reduced lookup table. This is allowed by the linearity of the code.

Suppose that C \subset \mathbb{F}_2^n is a linear code of length n, dimension k, and minimum distance d, with parity-check matrix H. Then clearly C is capable of correcting up to
:t = \left\lfloor \frac{d-1}{2} \right\rfloor
errors made by the channel (since if no more than t errors are made then minimum distance decoding will still correctly decode the incorrectly transmitted codeword).

Now suppose that a codeword x \in \mathbb{F}_2^n is sent over the channel and the error pattern e \in \mathbb{F}_2^n occurs. Then z = x + e is received. Ordinary minimum distance decoding would look up the vector z in a table of size |C| for the nearest match, i.e. an element (not necessarily unique) c \in C with
:d(c,z) \leq d(y,z)
for all y \in C. Syndrome decoding takes advantage of the property of the parity-check matrix that:
:Hx = 0
for all x \in C. The ''syndrome'' of the received z = x + e is defined to be:
:Hz = H(x+e) = Hx + He = 0 + He = He.
To perform ML decoding in a binary symmetric channel, one has to look up a precomputed table of size 2^{n-k}, mapping He to e. Note that this is already of significantly less complexity than that of a standard array decoding. However, under the assumption that no more than t errors were made during transmission, the receiver can look up the value He in a further reduced table of size
:\sum_{i=0}^t \binom{n}{i}.
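
The following sketch builds the reduced syndrome table for the [7,4] Hamming code (t = 1) and decodes with a single lookup; it assumes numpy, and the matrix and names are illustrative choices for this example rather than taken from the article:

```python
import itertools
import numpy as np

# Parity-check matrix H of the [7,4] Hamming code: column j is the
# binary representation of j, giving d = 3 and t = 1.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
t, n = 1, H.shape[1]

def syndrome(z):
    """Hz over F_2, as a hashable tuple."""
    return tuple(H @ z % 2)

# Precompute the table mapping He -> e for all error patterns of
# weight at most t: its size is sum_{i=0}^{t} C(n, i) = 8 entries here.
table = {}
for w in range(t + 1):
    for positions in itertools.combinations(range(n), w):
        e = np.zeros(n, dtype=int)
        e[list(positions)] = 1
        table[syndrome(e)] = e

def syndrome_decode(z):
    """Correct up to t errors: look up the error pattern by syndrome."""
    e = table.get(syndrome(z))
    if e is None:
        raise ValueError("more than t errors: decoding failure")
    return (z + e) % 2

z = np.zeros(n, dtype=int)
z[4] = 1                   # the zero codeword with one bit flipped
print(syndrome_decode(z))  # -> [0 0 0 0 0 0 0]
```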


List decoding

List decoding generalises unique decoding: instead of a single codeword, the decoder outputs a small list of candidate codewords, which makes it possible to correct errors beyond half the minimum distance of the code.


Information set decoding

This is a family of Las Vegas-probabilistic methods, all based on the observation that it is easier to guess enough error-free positions than it is to guess all the error positions.

The simplest form is due to Prange: Let G be the k \times n generator matrix of C used for encoding. Select k columns of G at random, and denote by G' the corresponding submatrix of G. With reasonable probability G' will have full rank, which means that if we let c' be the sub-vector for the corresponding positions of any codeword c = mG of C for a message m, we can recover m as m = c' G'^{-1}. Hence, if we were lucky that these k positions of the received word y contained no errors, and hence equalled the positions of the sent codeword, then we may decode.

If t errors occurred, the probability of such a fortunate selection of columns is given by \textstyle \binom{n-t}{k} / \binom{n}{k}. This method has been improved in various ways, e.g. by Stern and by Canteaut and Sendrier.
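
A sketch of Prange's method under these definitions (numpy-based; the GF(2) inversion helper, trial count, and demo values are illustrative assumptions):

```python
import numpy as np

def gf2_inv(A):
    """Invert a k-by-k matrix over F_2 by Gauss-Jordan elimination;
    return None if it is singular."""
    k = A.shape[0]
    M = np.concatenate([A % 2, np.eye(k, dtype=int)], axis=1)
    for col in range(k):
        pivot = next((r for r in range(col, k) if M[r, col]), None)
        if pivot is None:
            return None
        M[[col, pivot]] = M[[pivot, col]]  # swap the pivot row into place
        for r in range(k):
            if r != col and M[r, col]:
                M[r] ^= M[col]             # eliminate over F_2
    return M[:, k:]

def prange_decode(G, y, t, trials=1000, seed=0):
    """Prange information set decoding (sketch): repeatedly guess k
    error-free positions, invert the submatrix G', and re-encode to
    check the candidate message against the error budget t."""
    k, n = G.shape
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        idx = rng.choice(n, size=k, replace=False)
        Ginv = gf2_inv(G[:, idx])
        if Ginv is None:              # G' is not full rank; resample
            continue
        m = y[idx] @ Ginv % 2         # m = c' G'^{-1}
        c = m @ G % 2                 # re-encode the candidate
        if np.sum((c + y) % 2) <= t:  # guessed positions were error-free
            return m
    return None                       # all trials failed

# Toy demo: systematic [7,4] Hamming generator matrix, one flipped bit.
G = np.array([[1, 0, 0, 0, 0, 1, 1],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1, 0],
              [0, 0, 0, 1, 1, 1, 1]])
m = np.array([1, 0, 1, 1])
y = (m @ G) % 2
y[2] ^= 1                        # channel flips one bit
print(prange_decode(G, y, t=1))  # -> [1 0 1 1]
```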


Partial response maximum likelihood

Partial response maximum likelihood (PRML) is a method for converting the weak analog signal from the head of a magnetic disk or tape drive into a digital signal.


Viterbi decoder

A Viterbi decoder uses the Viterbi algorithm for decoding a bitstream that has been encoded using forward error correction based on a convolutional code. The Hamming distance is used as a metric for hard-decision Viterbi decoders. The ''squared'' Euclidean distance is used as a metric for soft-decision decoders.
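
As an illustrative sketch (not from the article), here is a tiny hard-decision Viterbi decoder for the common rate-1/2, constraint-length-3 convolutional code with octal generators (7, 5); the encoder, state layout, and demo values are assumptions for this example:

```python
from itertools import product

# State = (previous input bit, bit before that); each input bit emits
# two output bits via the generators 111 (7 octal) and 101 (5 octal).
def step(state, bit):
    s1, s0 = state
    return (bit ^ s1 ^ s0, bit ^ s0), (bit, s1)

def conv_encode(bits):
    state, out = (0, 0), []
    for b in bits:
        o, state = step(state, b)
        out.extend(o)
    return out

def viterbi_decode(received):
    """Hard-decision Viterbi decoding with the Hamming branch metric."""
    INF = float("inf")
    states = list(product((0, 1), repeat=2))
    metric = {s: (0 if s == (0, 0) else INF) for s in states}
    paths = {s: [] for s in states}
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_metric = {s: INF for s in states}
        new_paths = {s: [] for s in states}
        for s in states:
            if metric[s] == INF:  # state not yet reachable
                continue
            for bit in (0, 1):
                out, nxt = step(s, bit)
                d = metric[s] + sum(a != b for a, b in zip(out, r))
                if d < new_metric[nxt]:  # keep the survivor path
                    new_metric[nxt] = d
                    new_paths[nxt] = paths[s] + [bit]
        metric, paths = new_metric, new_paths
    return paths[min(states, key=lambda s: metric[s])]

msg = [1, 0, 1, 1, 0]
tx = conv_encode(msg)
tx[3] ^= 1                 # a single bit error on the channel
print(viterbi_decode(tx))  # -> [1, 0, 1, 1, 0]
```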


Optimal decision decoding algorithm (ODDA)

An optimal decision decoding algorithm (ODDA) has been proposed for an asymmetric two-way relay channel (TWRC) system.


See also

* Don't care alarm
* Error detection and correction
* Forbidden input


Further reading

* {{cite book |author-first=Jacobus H. |author-last=van Lint |title=Introduction to Coding Theory |edition=2 |publisher=Springer-Verlag |series=Graduate Texts in Mathematics |volume=86 |date=1992 |isbn=978-3-540-54894-2 |url-access=registration |url=https://archive.org/details/introductiontoco0000lint}}