Codes and their extensions
The extension of a code is the mapping of finite-length source sequences to finite-length bit strings that is obtained by concatenating, for each symbol of the source sequence, the corresponding codeword produced by the original code. Using terms from formal language theory, the exact mathematical definition is as follows: let S and T be two finite sets, called the source and target alphabets, respectively. A code C : S → T* is a total function mapping each symbol from S to a sequence of symbols over T, and the extension of C to a homomorphism of S* into T*, which naturally maps each sequence of source symbols to a sequence of target symbols, is referred to as its extension.
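As a concrete illustration, here is a minimal Python sketch of a code and its extension (the helper name ''extension'' is ours, chosen for illustration); it uses the prefix code C from the Prefix codes section below:

```python
# The prefix code C from the "Prefix codes" section below.
C = {"a": "0", "b": "10", "c": "110", "d": "111"}

def extension(code, source_sequence):
    """Map a source sequence to a bit string by concatenating
    the codeword of each source symbol."""
    return "".join(code[symbol] for symbol in source_sequence)

print(extension(C, "aabacdab"))  # 00100110111010
```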
Classes of variable-length codes

Variable-length codes can be strictly nested in order of decreasing generality as non-singular codes, uniquely decodable codes, and prefix codes. Prefix codes are always uniquely decodable, and these in turn are always non-singular.
Non-singular codes

A code is non-singular if each source symbol is mapped to a different non-empty bit string; that is, the mapping from source symbols to bit strings is injective.

* For example, the mapping M1 = { a → 0, b → 0, c → 1 } is ''not'' non-singular because both "a" and "b" map to the same bit string "0"; any extension of this mapping will generate a lossy (non-lossless) coding. Such singular coding may still be useful when some loss of information is acceptable (for example, when such a code is used in audio or video compression, where a lossy coding becomes equivalent to source quantization).
* However, the mapping M2 = { a → 1, b → 011, c → 01110, d → 1110, e → 10011 } ''is'' non-singular; its extension will generate a lossless coding, which is useful for general data transmission (but this feature is not always required). It is not necessary for a non-singular code to be more compact than the source; in many applications a larger code is useful, for example, as a way to detect or recover from encoding or transmission errors, or in security applications, to protect a source from undetectable tampering.
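The injectivity condition is easy to check mechanically. A minimal Python sketch (the function name ''is_non_singular'' is our own illustrative choice):

```python
def is_non_singular(code):
    """A code is non-singular iff all codewords are non-empty and distinct."""
    codewords = list(code.values())
    return all(codewords) and len(set(codewords)) == len(codewords)

M1 = {"a": "0", "b": "0", "c": "1"}
M2 = {"a": "1", "b": "011", "c": "01110", "d": "1110", "e": "10011"}
print(is_non_singular(M1))  # False: "a" and "b" share the codeword "0"
print(is_non_singular(M2))  # True: all codewords are distinct and non-empty
```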
Uniquely decodable codes

A code is uniquely decodable if its extension is non-singular. Whether a given code is uniquely decodable can be decided with the Sardinas–Patterson algorithm.

* The mapping M3 = { a → 0, b → 01, c → 011 } is uniquely decodable (this can be demonstrated by looking at the ''follow-set'' after each target bit string in the map: each bit string is terminated as soon as we see a 0 bit, which cannot follow any existing code to create a longer valid code in the map, but unambiguously starts a new code).
* Consider again the code M2 from the previous section (this code is based on an example found in Berstel et al. (2009), Example 2.3.1, p. 63). This code is ''not'' uniquely decodable, since the string ''011101110011'' can be interpreted as the sequence of codewords ''01110 – 1110 – 011'', but also as the sequence of codewords ''011 – 1 – 011 – 10011''. Two possible decodings of this encoded string are thus given by ''cdb'' and ''babe''. However, such a code is useful when the set of all possible source symbols is completely known and finite, or when there are restrictions (such as a formal syntax) that determine whether source elements of this extension are acceptable. Such restrictions permit the decoding of the original message by checking which of the possible decodings is valid under those restrictions.
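The Sardinas–Patterson algorithm itself is short enough to sketch. The following Python rendering is a plain set-based implementation written for illustration (it assumes a non-singular code; the function names are ours):

```python
def dangling_suffixes(A, B):
    """All non-empty w such that a + w == b for some a in A and b in B."""
    return {b[len(a):] for a in A for b in B
            if len(b) > len(a) and b.startswith(a)}

def is_uniquely_decodable(codewords):
    """Sardinas-Patterson test: a code is uniquely decodable iff no set of
    dangling suffixes ever contains a codeword."""
    C = set(codewords)
    suffixes = dangling_suffixes(C, C)   # the initial suffix set S1
    seen = set()
    while suffixes - seen:               # iterate until no new suffixes appear
        if suffixes & C:                 # a dangling suffix is itself a codeword
            return False
        seen |= suffixes
        suffixes = dangling_suffixes(C, seen) | dangling_suffixes(seen, C)
    return True

M2 = {"a": "1", "b": "011", "c": "01110", "d": "1110", "e": "10011"}
M3 = {"a": "0", "b": "01", "c": "011"}
print(is_uniquely_decodable(M2.values()))  # False: 011101110011 decodes two ways
print(is_uniquely_decodable(M3.values()))  # True
```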
Prefix codes

A code is a prefix code if no target bit string in the mapping is a prefix of the target bit string of a different source symbol in the same mapping. This means that symbols can be decoded instantaneously after their entire codeword is received. Other commonly used names for this concept are prefix-free code, instantaneous code, and context-free code.

* The example mapping M3 in the previous section is ''not'' a prefix code, because after reading the bit string "0" we do not know whether it encodes an "a" source symbol or is the prefix of the encodings of the "b" or "c" symbols.
* An example of a prefix code is the mapping C = { a → 0, b → 10, c → 110, d → 111 }.

:: Example of encoding and decoding:
::: ''aabacdab'' → 00100110111010 → 0, 0, 10, 0, 110, 111, 0, 10 → ''aabacdab''

A special case of prefix codes are block codes, in which all codewords have the same length. Block codes are not very useful in the context of source coding, but often serve as forward error correction in the context of channel coding.
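Both the prefix property and instantaneous decoding can be illustrated with a short Python sketch (the functions ''is_prefix_code'' and ''decode'' are our own illustrative names):

```python
def is_prefix_code(code):
    """No codeword may be a proper prefix of another codeword.
    After sorting, any prefix pair appears as an adjacent pair."""
    words = sorted(code.values())
    return not any(b.startswith(a) for a, b in zip(words, words[1:]))

def decode(code, bits):
    """Instantaneous decoding: emit a symbol as soon as a codeword completes."""
    inverse = {w: s for s, w in code.items()}
    out, buffer = [], ""
    for bit in bits:
        buffer += bit
        if buffer in inverse:      # a complete codeword has been read
            out.append(inverse[buffer])
            buffer = ""
    return "".join(out)

C = {"a": "0", "b": "10", "c": "110", "d": "111"}
print(is_prefix_code(C))            # True
print(decode(C, "00100110111010"))  # aabacdab
```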
Advantages

The advantage of a variable-length code is that unlikely source symbols can be assigned longer codewords and likely source symbols can be assigned shorter codewords, thus giving a low ''expected'' codeword length. For the above example, if the probabilities of (a, b, c, d) were (1/2, 1/4, 1/8, 1/8), the expected number of bits used to represent a source symbol using the code above would be:

:: 1 × 1/2 + 2 × 1/4 + 3 × 1/8 + 3 × 1/8 = 1.75 bits.

As the entropy of this source is 1.75 bits per symbol, this code compresses the source as much as possible while still allowing the source to be recovered with ''zero'' error.
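The arithmetic can be verified with a few lines of Python, using the code and the probabilities above:

```python
from math import log2

code = {"a": "0", "b": "10", "c": "110", "d": "111"}
probs = {"a": 1/2, "b": 1/4, "c": 1/8, "d": 1/8}

# Expected codeword length: sum of p(s) * len(codeword(s)).
expected_length = sum(probs[s] * len(w) for s, w in code.items())
# Shannon entropy of the source: -sum of p * log2(p).
entropy = -sum(p * log2(p) for p in probs.values())

print(expected_length)  # 1.75 bits per symbol
print(entropy)          # 1.75 bits per symbol: the code meets the entropy bound
```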
See also

* Golomb code
* Kruskal count
* Variable-length instruction sets in computing
References

* Berstel, Jean; Perrin, Dominique; Reutenauer, Christophe (2009). ''Codes and Automata''. Cambridge University Press.