Block Length
In coding theory, block codes are a large and important family of error-correcting codes that encode data in blocks. There are a vast number of examples of block codes, many of which have a wide range of practical applications. The abstract definition of block codes is conceptually useful because it allows coding theorists, mathematicians, and computer scientists to study the limitations of ''all'' block codes in a unified way. Such limitations often take the form of ''bounds'' that relate different parameters of the block code to each other, such as its rate and its ability to detect and correct errors. Examples of block codes are Reed–Solomon codes, Hamming codes, Hadamard codes, expander codes, Golay codes, and Reed–Muller codes. These examples also belong to the class of linear codes, and hence they are called linear block codes. More particularly, these codes are known as algebraic block codes, or cyclic block codes, because they can be generated using Boolean polynomials.
Hadamard Code
The Hadamard code is an error-correcting code named after Jacques Hadamard that is used for error detection and correction when transmitting messages over very noisy or unreliable channels. In 1971, the code was used to transmit photos of Mars back to Earth from the NASA space probe Mariner 9. Because of its unique mathematical properties, the Hadamard code is not only used by engineers, but also intensely studied in coding theory, mathematics, and theoretical computer science. The Hadamard code is also known under the names Walsh code, Walsh family, and Walsh–Hadamard code in recognition of the American mathematician Joseph Leonard Walsh. The Hadamard code is an example of a linear code of length 2^m over a binary alphabet. Unfortunately, this term is somewhat ambiguous, as some references assume a message length k = m while others assume a message length of k = m + 1. In this article, the first case is called the Hadamard code and the second is called the augmented Hadamard code.
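The first construction (message length k = m) has a short description: the codeword for a message x lists the inner product (mod 2) of x with every vector y in {0,1}^m, giving length 2^m. A minimal Python sketch of this construction (the function name is ours, for illustration):

```python
from itertools import product

def hadamard_encode(msg_bits):
    """Encode an m-bit message into a 2^m-bit Hadamard codeword.

    The codeword lists the inner product (mod 2) of the message with
    every vector y in {0,1}^m, so any two distinct codewords differ
    in exactly 2^(m-1) positions.
    """
    m = len(msg_bits)
    return [sum(x & y for x, y in zip(msg_bits, ys)) % 2
            for ys in product([0, 1], repeat=m)]

# Example: m = 3 gives codewords of length 2^3 = 8.
print(hadamard_encode([1, 0, 1]))
```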
Rank Error-correcting Code
In coding theory, rank codes (also called Gabidulin codes) are non-binary (each input symbol is drawn from a set of size greater than 2) linear error-correcting codes over the ''rank'' metric rather than the Hamming metric. They provide a systematic way of building codes that can detect and correct multiple random ''rank'' errors. By adding redundancy when encoding a ''k''-symbol word into an ''n''-symbol word, a rank code can correct any errors of rank up to ''t'' = ⌊(''d'' − 1) / 2⌋, where ''d'' is the code distance. As an erasure code, it can correct up to ''d'' − 1 known erasures. A rank code is an algebraic linear code over the finite field GF(q^N), similar to a Reed–Solomon code. The rank of a vector over GF(q^N) is the maximum number of its components that are linearly independent over GF(q). The rank distance between two vectors over GF(q^N) is the rank of the difference of these vectors. The rank code corrects all errors whose rank is not greater than ''t''.
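For q = 2 the rank distance is easy to compute: write each component of a vector over GF(2^N) as an N-bit integer of its GF(2) coordinates; the rank distance is then the GF(2) rank of the component-wise difference. A sketch (helper names are ours):

```python
def gf2_rank(rows):
    """Rank over GF(2) of bit-vectors encoded as Python integers,
    via Gaussian elimination keeping one pivot per leading bit."""
    pivots = {}  # leading-bit position -> reduced row
    for row in rows:
        cur = row
        while cur:
            top = cur.bit_length() - 1
            if top not in pivots:
                pivots[top] = cur
                break
            cur ^= pivots[top]  # strictly lowers the leading bit
    return len(pivots)

def rank_distance(x, y):
    """Rank distance between two vectors over GF(2^N), with each
    component given as an N-bit integer (its coordinates over GF(2))."""
    return gf2_rank([a ^ b for a, b in zip(x, y)])

# Example over GF(2^4): two vectors of length 3, rank distance 3.
print(rank_distance([0b0011, 0b0101, 0b0110],
                    [0b0001, 0b0001, 0b0001]))  # 3
```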
Hamming(7,4)
In coding theory, Hamming(7,4) is a linear error-correcting code that encodes four bits of data into seven bits by adding three parity bits. It is a member of a larger family of Hamming codes, but the term ''Hamming code'' often refers to this specific code, which Richard W. Hamming introduced in 1950. At the time, Hamming worked at Bell Telephone Laboratories and was frustrated with the error-prone punched card reader, which is why he started working on error-correcting codes. The Hamming code adds three additional check bits to every four data bits of the message. Hamming's (7,4) algorithm can correct any single-bit error, or detect all single-bit and two-bit errors. In other words, the minimal Hamming distance between any two correct codewords is 3, and received words can be correctly decoded if they are at a distance of at most one from the codeword that was transmitted by the sender. This means that in transmission media where burst errors do not occur, Hamming's (7,4) code is effective, since the medium would have to be extremely noisy for two out of seven bits to be flipped.
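A runnable sketch of one common layout of this code, with the parity bits at positions 1, 2 and 4 (1-indexed) so that the syndrome spells out the error position (function names are ours):

```python
def hamming74_encode(d):
    """Encode 4 data bits [d1,d2,d3,d4] into 7 bits, with parity bits
    at positions 1, 2 and 4 so that the syndrome equals the
    (1-indexed) position of a single-bit error."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(r):
    """Correct a single-bit error and return the 4 data bits."""
    r = list(r)
    # Each syndrome bit rechecks one parity group; together they spell
    # the position of the flipped bit, or 0 if no error is detected.
    s1 = r[0] ^ r[2] ^ r[4] ^ r[6]
    s2 = r[1] ^ r[2] ^ r[5] ^ r[6]
    s3 = r[3] ^ r[4] ^ r[5] ^ r[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        r[syndrome - 1] ^= 1
    return [r[2], r[4], r[5], r[6]]

word = hamming74_encode([1, 0, 1, 1])
word[5] ^= 1                      # flip one bit in transit
assert hamming74_decode(word) == [1, 0, 1, 1]
```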
Hamming Code
In computer science and telecommunication, Hamming codes are a family of linear error-correcting codes. Hamming codes can detect one-bit and two-bit errors, or correct one-bit errors without detection of uncorrected errors. By contrast, the simple parity code cannot correct errors, and can detect only an odd number of bits in error. Hamming codes are perfect codes, that is, they achieve the highest possible rate for codes with their block length and minimum distance of three. Richard W. Hamming invented Hamming codes in 1950 as a way of automatically correcting errors introduced by punched card readers. In his original paper, Hamming elaborated his general idea, but specifically focused on the Hamming(7,4) code, which adds three parity bits to four bits of data. In mathematical terms, Hamming codes are a class of binary linear code. For each integer r ≥ 2 there is a code with block length n = 2^r − 1 and message length k = 2^r − r − 1. Hence the rate of Hamming codes is R = k/n = 1 − r/(2^r − 1), which is the highest possible for codes with minimum distance three and block length 2^r − 1.
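A quick check of these parameter formulas for the first few values of r (a sketch; the loop bounds are arbitrary):

```python
# Parameters of the first few Hamming codes, from n = 2^r - 1 and
# k = 2^r - r - 1 as stated above.
for r in range(2, 7):
    n = 2**r - 1          # block length
    k = 2**r - r - 1      # message length
    print(f"r={r}: Hamming({n},{k}), rate k/n = {k}/{n} ~ {k/n:.3f}")
# r=2 gives Hamming(3,1), r=3 gives the classic Hamming(7,4), etc.
```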
Expander Code
In coding theory, expander codes form a class of error-correcting codes that are constructed from bipartite expander graphs. Along with Justesen codes, expander codes are of particular interest since they have a constant positive rate, a constant positive relative distance, and a constant alphabet size. In fact, the alphabet contains only two elements, so expander codes belong to the class of binary codes. Furthermore, expander codes can be both encoded and decoded in time proportional to the block length of the code.
Expander codes
In coding theory, an expander code is a [n, n − m]_2 linear block code whose parity-check matrix is the adjacency matrix of a bipartite expander graph. These codes have good relative distance 2(1 − ε)γ (where ε and γ are properties of the expander graph, as defined later), rate 1 − m/n, and decodability (algorithms of running time O(n) exist).
Definition
Let B be a (c, d)-biregular graph between a set of n nodes, called variables, and a set of m = cn/d nodes, called constraints.
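The parity-check view can be made concrete with a toy graph (this sketch uses an arbitrary small bipartite graph, not an actual expander; the names are ours):

```python
from itertools import product

def is_codeword(bits, constraints):
    """A word is in the code iff every constraint node sees an even
    number of 1s among its incident variable bits (its parity check)."""
    return all(sum(bits[v] for v in c) % 2 == 0 for c in constraints)

# Toy example: n = 4 variables and m = 2 parity constraints, each
# constraint listing the variables it touches.
constraints = [(0, 1, 2), (1, 2, 3)]
code = [w for w in product([0, 1], repeat=4) if is_codeword(w, constraints)]
print(code)   # 2^(n-m) = 4 codewords of a [4, 2]_2 code (H full rank)
```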
Reed–Muller Code
Reed–Muller codes are error-correcting codes that are used in wireless communications applications, particularly in deep-space communication. Moreover, the proposed 5G standard relies on the closely related polar codes for error correction in the control channel. Due to their favorable theoretical and mathematical properties, Reed–Muller codes have also been extensively studied in theoretical computer science. Reed–Muller codes generalize the Reed–Solomon codes and the Walsh–Hadamard code. Reed–Muller codes are linear block codes that are locally testable, locally decodable, and list decodable. These properties make them particularly useful in the design of probabilistically checkable proofs. Traditional Reed–Muller codes are binary codes, which means that messages and codewords are binary strings. When ''r'' and ''m'' are integers with 0 ≤ ''r'' ≤ ''m'', the Reed–Muller code with parameters ''r'' and ''m'' is denoted as RM(''r'', ''m''). When asked to encode a message consisting of k bits, where k = Σ_{i=0}^{r} (m choose i), the RM(''r'', ''m'') code produces a codeword consisting of 2^m bits.
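One standard way to realize this definition: the generator matrix of RM(r, m) has one row per monomial of degree at most r in m Boolean variables, evaluated at every point of {0,1}^m; a message of k coefficient bits is encoded as its product with this matrix mod 2. A sketch (function name is ours):

```python
from itertools import combinations, product

def reed_muller_generator(r, m):
    """Generator matrix of RM(r, m): one row per monomial of degree
    <= r in m Boolean variables, evaluated at all 2^m points."""
    points = list(product([0, 1], repeat=m))
    rows = []
    for deg in range(r + 1):
        for mono in combinations(range(m), deg):
            # Row = evaluations of the product of the chosen variables;
            # the empty monomial (deg 0) gives the all-ones row.
            rows.append([int(all(p[i] for i in mono)) for p in points])
    return rows

G = reed_muller_generator(1, 3)   # RM(1,3): k = 1 + 3 = 4, n = 2^3 = 8
for row in G:
    print(row)
```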
Coding Theory
Coding theory is the study of the properties of codes and their respective fitness for specific applications. Codes are used for data compression, cryptography, error detection and correction, data transmission and data storage. Codes are studied by various scientific disciplines—such as information theory, electrical engineering, mathematics, linguistics, and computer science—for the purpose of designing efficient and reliable data transmission methods. This typically involves the removal of redundancy and the correction or detection of errors in the transmitted data. There are four types of coding:
# Data compression (or ''source coding'')
# Error control (or ''channel coding'')
# Cryptographic coding
# Line coding
Data compression attempts to remove unwanted redundancy from the data from a source in order to transmit it more efficiently. For example, ZIP data compression makes data files smaller, for purposes such as reducing Internet traffic. Data compression and error correction may be studied in combination.
Hamming Distance
In information theory, the Hamming distance between two strings of equal length is the number of positions at which the corresponding symbols are different. In other words, it measures the minimum number of ''substitutions'' required to change one string into the other, or the minimum number of ''errors'' that could have transformed one string into the other. In a more general context, the Hamming distance is one of several string metrics for measuring the edit distance between two sequences. It is named after the American mathematician Richard Hamming. A major application is in coding theory, more specifically to block codes, in which the equal-length strings are vectors over a finite field.
Definition
The Hamming distance between two equal-length strings of symbols is the number of positions at which the corresponding symbols are different.
Examples
The symbols may be letters, bits, or decimal digits, among other possibilities. For example, the Hamming distance between "karolin" and "kathrin" is 3.
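A direct Python rendering of this definition (function name is ours):

```python
def hamming_distance(s, t):
    """Number of positions at which two equal-length sequences differ."""
    if len(s) != len(t):
        raise ValueError("Hamming distance is defined for equal lengths")
    return sum(a != b for a, b in zip(s, t))

print(hamming_distance("karolin", "kathrin"))        # 3
print(hamming_distance([1, 0, 1, 1], [1, 1, 1, 0]))  # 2
```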
Prime Power
In mathematics, a prime power is a positive integer which is a positive integer power of a single prime number. For example: 7 = 7^1, 9 = 3^2, and 64 = 2^6 are prime powers, while 6 = 2 × 3, 12 = 2^2 × 3, and 36 = 6^2 = 2^2 × 3^2 are not. The sequence of prime powers begins: 2, 3, 4, 5, 7, 8, 9, 11, 13, 16, 17, 19, 23, 25, 27, 29, 31, 32, 37, 41, 43, 47, 49, 53, 59, 61, 64, 67, 71, 73, 79, 81, 83, 89, 97, 101, 103, 107, 109, 113, 121, 125, 127, 128, 131, 137, 139, 149, 151, 157, 163, 167, 169, 173, 179, 181, 191, 193, 197, 199, 211, 223, 227, 229, 233, 239, 241, 243, 251, … . The prime powers are those positive integers that are divisible by exactly one prime number; in particular, the number 1 is not a prime power. Prime powers are also called primary numbers, as in the primary decomposition.
Properties
Algebraic properties
Prime powers are powers of prime numbers. Every prime power (except powers of 2) has a primitive root; thus the multiplicative group of integers modulo ''p''^''n'' (i.e., the group of units of the ring Z/''p''^''n''Z) is cyclic.
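The characterization "divisible by exactly one prime" translates directly into a test (a sketch; function name is ours):

```python
def is_prime_power(n):
    """Return True iff n = p**k for a prime p and integer k >= 1.

    Strip the smallest prime factor completely; n is a prime power
    exactly when nothing is left over afterwards.
    """
    if n < 2:
        return False
    for p in range(2, n + 1):
        if p * p > n:        # no factor found: remaining n is prime
            return True
        if n % p == 0:
            while n % p == 0:
                n //= p
            return n == 1    # a second prime factor would remain here
    return False

print([n for n in range(2, 32) if is_prime_power(n)])
# [2, 3, 4, 5, 7, 8, 9, 11, 13, 16, 17, 19, 23, 25, 27, 29, 31]
```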
Finite Field
In mathematics, a finite field or Galois field (so named in honor of Évariste Galois) is a field that contains a finite number of elements. As with any field, a finite field is a set on which the operations of multiplication, addition, subtraction and division are defined and satisfy certain basic rules. The most common examples of finite fields are given by the integers mod ''p'' when ''p'' is a prime number. The ''order'' of a finite field is its number of elements, which is either a prime number or a prime power. For every prime number ''p'' and every positive integer ''k'' there are fields of order p^k, all of which are isomorphic. Finite fields are fundamental in a number of areas of mathematics and computer science, including number theory, algebraic geometry, Galois theory, finite geometry, cryptography and coding theory.
Properties
A finite field is a finite set which is a field; this means that multiplication, addition, subtraction and division (excluding division by zero) are defined and satisfy the rules of arithmetic known as the field axioms.
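As an illustration of arithmetic in a field of order p^k, here is a sketch of multiplication in GF(2^m), with elements encoded as bit masks of polynomial coefficients over GF(2); the default modulus is the irreducible polynomial AES uses for GF(2^8) (function name is ours):

```python
def gf2m_mul(a, b, mod_poly=0b100011011, m=8):
    """Multiply two elements of GF(2^m), each encoded as an integer
    whose bits are polynomial coefficients over GF(2).

    The default modulus x^8 + x^4 + x^3 + x + 1 is the irreducible
    polynomial used by AES for GF(2^8); any irreducible degree-m
    polynomial defines an isomorphic field of the same order.
    """
    result = 0
    while b:
        if b & 1:              # add (XOR) a copy of a for this bit of b
            result ^= a
        b >>= 1
        a <<= 1
        if a >> m & 1:         # reduce modulo the field polynomial
            a ^= mod_poly
    return result

# In GF(2^8), 0x53 * 0xCA = 0x01: they are multiplicative inverses
# (the worked example from the AES specification).
print(hex(gf2m_mul(0x53, 0xCA)))
```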
Information Theory
Information theory is the scientific study of the quantification, storage, and communication of information. The field was originally established by the works of Harry Nyquist and Ralph Hartley in the 1920s and Claude Shannon in the 1940s. The field is at the intersection of probability theory, statistics, computer science, statistical mechanics, information engineering, and electrical engineering. A key measure in information theory is entropy. Entropy quantifies the amount of uncertainty involved in the value of a random variable or the outcome of a random process. For example, identifying the outcome of a fair coin flip (with two equally likely outcomes) provides less information (lower entropy) than specifying the outcome of a roll of a die (with six equally likely outcomes). Some other important measures in information theory are mutual information, channel capacity, error exponents, and relative entropy. Important sub-fields of information theory include source coding, algorithmic complexity theory, algorithmic information theory, and information-theoretic security.
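The coin-versus-die comparison can be checked directly from the definition of Shannon entropy, H = −Σ p log2 p (a sketch; function name is ours):

```python
from math import log2

def entropy(probs):
    """Shannon entropy in bits of a discrete probability distribution."""
    return -sum(p * log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))   # fair coin: 1.0 bit
print(entropy([1/6] * 6))    # fair die: ~2.585 bits, more uncertainty
```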
Maximum Distance Separable Code
In coding theory, the Singleton bound, named after Richard Collom Singleton, is a relatively crude upper bound on the size of an arbitrary block code C with block length n, size M and minimum distance d. It is also known as the Joshi bound.
Statement of the bound
The minimum distance of a set C of codewords of length n is defined as d = \min_{x, y \in C,\, x \neq y} d(x,y), where d(x,y) is the Hamming distance between x and y. The expression A_q(n,d) represents the maximum number of possible codewords in a q-ary block code of length n and minimum distance d. Then the Singleton bound states that A_q(n,d) \leq q^{n-d+1}.
Proof
First observe that the number of q-ary words of length n is q^n, since each letter in such a word may take one of q different values, independently of the remaining letters. Now let C be an arbitrary q-ary block code of minimum distance d. Clearly, all codewords c \in C are distinct. If we puncture the code by deleting the first d − 1 letters of each codeword, then all resulting codewords must still be pairwise distinct, since the original codewords have Hamming distance at least d from each other. The punctured codewords have length n − (d − 1) = n − d + 1, so there can be at most q^{n-d+1} of them; hence M \leq q^{n-d+1}.
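The bound is easy to check by brute force for tiny parameters (a sketch, exponential-time, names are ours); codes meeting it with equality are the maximum distance separable (MDS) codes of the heading:

```python
from itertools import product

def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

def max_code_size(q, n, d):
    """Exact A_q(n,d) by exhaustive search (tiny parameters only)."""
    words = list(product(range(q), repeat=n))

    def grow(code, start):
        # Try extending the current code with each later word that
        # keeps the minimum distance at least d.
        best = len(code)
        for i in range(start, len(words)):
            if all(hamming(words[i], c) >= d for c in code):
                best = max(best, grow(code + [words[i]], i + 1))
        return best

    return grow([], 0)

# d = 2, n = 3: the bound q^(n-d+1) = 4 is met by the even-weight
# (single parity check) code, an MDS code.
print(max_code_size(2, 3, 2), "<=", 2 ** (3 - 2 + 1))   # 4 <= 4
# d = 3, n = 4: the bound gives 4, but the true maximum is 2,
# showing how crude the bound can be.
print(max_code_size(2, 4, 3), "<=", 2 ** (4 - 3 + 1))   # 2 <= 4
```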