Linear Code

In coding theory, a linear code is an error-correcting code for which any linear combination of codewords is also a codeword. Linear codes are traditionally partitioned into block codes and convolutional codes, although turbo codes can be seen as a hybrid of these two types. Linear codes allow for more efficient encoding and decoding algorithms than other codes (cf. syndrome decoding).

Linear codes are used in forward error correction and are applied in methods for transmitting symbols (e.g., bits) on a communications channel so that, if errors occur in the communication, some errors can be corrected or detected by the recipient of a message block. The codewords in a linear block code are blocks of symbols that are encoded using more symbols than the original value to be sent. A linear code of length ''n'' transmits blocks containing ''n'' symbols. For example, the [7,4,3] Hamming code is a linear binary code which represents 4-bit messages using 7-bit codewords. Two distinct codewords differ in at least three bits. As a consequence, up to two errors per codeword can be detected while a single error can be corrected. This code contains 2^4 = 16 codewords.


Definition and parameters

A linear code of length ''n'' and dimension ''k'' is a linear subspace ''C'' with dimension ''k'' of the vector space \mathbb{F}_q^n, where \mathbb{F}_q is the finite field with ''q'' elements. Such a code is called a ''q''-ary code. If ''q'' = 2 or ''q'' = 3, the code is described as a binary code or a ternary code, respectively. The vectors in ''C'' are called ''codewords''. The size of a code is the number of codewords and equals ''q''''k''.

The weight of a codeword is the number of its elements that are nonzero, and the distance between two codewords is the Hamming distance between them, that is, the number of elements in which they differ. The distance ''d'' of the linear code is the minimum weight of its nonzero codewords, or equivalently, the minimum distance between distinct codewords. A linear code of length ''n'', dimension ''k'', and distance ''d'' is called an [''n'',''k'',''d''] code (or, more precisely, an [n,k,d]_q code).

We want to give \mathbb{F}_q^n the standard basis because each coordinate represents a "bit" that is transmitted across a "noisy channel" with some small probability of transmission error (a binary symmetric channel). If some other basis is used then this model cannot be used and the Hamming metric does not measure the number of errors in transmission, as we want it to.


Generator and check matrices

As a linear subspace of \mathbb{F}_q^n, the entire code ''C'' (which may be very large) may be represented as the span of a set of k codewords (known as a basis in linear algebra). These basis codewords are often collated in the rows of a matrix G known as a generating matrix for the code ''C''. When G has the block matrix form \boldsymbol{G} = [I_k \mid P], where I_k denotes the k \times k identity matrix and P is some k \times (n-k) matrix, then we say G is in standard form.

A matrix ''H'' representing a linear function \phi : \mathbb{F}_q^n \to \mathbb{F}_q^{n-k} whose kernel is ''C'' is called a check matrix of ''C'' (or sometimes a parity check matrix). Equivalently, ''H'' is a matrix whose null space is ''C''. If ''C'' is a code with a generating matrix ''G'' in standard form, \boldsymbol{G} = [I_k \mid P], then \boldsymbol{H} = [-P^T \mid I_{n-k}] is a check matrix for C. The code generated by ''H'' is called the dual code of C. It can be verified that G is a k \times n matrix, while H is an (n-k) \times n matrix.

Linearity guarantees that the minimum Hamming distance ''d'' between a codeword ''c''0 and any of the other codewords ''c'' ≠ ''c''0 is independent of ''c''0. This follows from the property that the difference ''c'' − ''c''0 of two codewords in ''C'' is also a codeword (i.e., an element of the subspace ''C''), and the property that ''d''(''c'', ''c''0) = ''d''(''c'' − ''c''0, 0). These properties imply that

:\min_{c \in C,\ c \neq c_0} d(c, c_0) = \min_{c \in C,\ c \neq c_0} d(c - c_0, 0) = \min_{c \in C,\ c \neq 0} d(c, 0) = d.

In other words, in order to find the minimum distance between the codewords of a linear code, one only needs to look at the non-zero codewords. The non-zero codeword with the smallest weight then has the minimum distance to the zero codeword, and hence determines the minimum distance of the code.

The distance ''d'' of a linear code ''C'' also equals the minimum number of linearly dependent columns of the check matrix ''H''.

''Proof:'' Let \boldsymbol{c} be a nonzero codeword of minimum weight d. Because \boldsymbol{H} \cdot \boldsymbol{c}^T = \boldsymbol{0}, which is equivalent to \sum_{i=1}^n (c_i \cdot \boldsymbol{H_i}) = \boldsymbol{0} where \boldsymbol{H_i} is the i^{\text{th}} column of \boldsymbol{H}, removing the terms with c_i = 0 shows that the columns \boldsymbol{H_i} with c_i \neq 0 are linearly dependent. Therefore, d is at least the minimum number of linearly dependent columns. Conversely, consider a minimum set of linearly dependent columns \{\boldsymbol{H_j} \mid j \in S\}, where S is the column index set, and let c_j (for j \in S) be coefficients, not all zero, of a dependence relation \sum_{j \in S} c_j \boldsymbol{H_j} = \boldsymbol{0}. Now consider the vector \boldsymbol{c'} with c'_j = c_j for j \in S and c'_j = 0 for j \notin S. Note that \boldsymbol{c'} \in C because \boldsymbol{H} \cdot \boldsymbol{c'}^T = \sum_{j \in S} c_j \boldsymbol{H_j} = \boldsymbol{0}. Therefore, d \le wt(\boldsymbol{c'}) \le |S|, the minimum number of linearly dependent columns in \boldsymbol{H}. The claimed property is therefore proven.
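
As an illustration (a minimal sketch using NumPy, ours rather than a prescribed implementation), the following Python code builds the standard-form check matrix \boldsymbol{H} = [-P^T \mid I_{n-k}] from a generator matrix \boldsymbol{G} = [I_k \mid P] over \mathbb{F}_2 and verifies that G H^T = 0, i.e., that every row of G lies in the null space of H:

```python
# Sketch (binary case): derive H from a standard-form G and check G H^T = 0 (mod 2).
import numpy as np

def check_matrix_from_standard_form(G):
    """Given G = [I_k | P] over F_2, return H = [-P^T | I_{n-k}] = [P^T | I_{n-k}] (mod 2)."""
    k, n = G.shape
    P = G[:, k:]                      # the k x (n-k) block P
    H = np.concatenate([P.T % 2, np.eye(n - k, dtype=int)], axis=1)
    return H

# Example: the [7,4] Hamming generator matrix used later in this article.
G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,0,1,1],
              [0,0,1,0,1,1,1],
              [0,0,0,1,1,0,1]])
H = check_matrix_from_standard_form(G)
assert np.all((G @ H.T) % 2 == 0)     # every codeword satisfies the parity checks
print(H)
```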


Example: Hamming codes

As the first class of linear codes developed for error correction purposes, ''Hamming codes'' have been widely used in digital communication systems. For any positive integer r \ge 2, there exists a [2^r-1,\, 2^r-r-1,\, 3]_2 Hamming code. Since d = 3, this Hamming code can correct a 1-bit error.

Example: The linear block code with the following generator matrix and parity check matrix is a [7,4,3]_2 Hamming code.

:\boldsymbol{G} = \begin{pmatrix} 1\ 0\ 0\ 0\ 1\ 1\ 0 \\ 0\ 1\ 0\ 0\ 0\ 1\ 1 \\ 0\ 0\ 1\ 0\ 1\ 1\ 1 \\ 0\ 0\ 0\ 1\ 1\ 0\ 1 \end{pmatrix}, \qquad \boldsymbol{H} = \begin{pmatrix} 1\ 0\ 1\ 1\ 1\ 0\ 0 \\ 1\ 1\ 1\ 0\ 0\ 1\ 0 \\ 0\ 1\ 1\ 1\ 0\ 0\ 1 \end{pmatrix}
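
To make this concrete, the sketch below uses the matrices from the example above (the surrounding code is illustrative, not a prescribed decoder): it encodes a 4-bit message with G, flips one bit, and locates the error by matching the syndrome H r^T against the columns of H.

```python
# Sketch: encode with the [7,4,3] Hamming G above, then correct a single flipped bit
# by matching the syndrome against the columns of H.
import numpy as np

G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,0,1,1],
              [0,0,1,0,1,1,1],
              [0,0,0,1,1,0,1]])
H = np.array([[1,0,1,1,1,0,0],
              [1,1,1,0,0,1,0],
              [0,1,1,1,0,0,1]])

message = np.array([1, 0, 1, 1])
codeword = (message @ G) % 2          # systematic encoding: first 4 bits are the message
assert np.all((H @ codeword) % 2 == 0)

received = codeword.copy()
received[5] ^= 1                      # simulate a single-bit channel error

syndrome = (H @ received) % 2         # nonzero syndrome signals an error
error_pos = next(i for i in range(7) if np.array_equal(H[:, i], syndrome))
received[error_pos] ^= 1              # flip the identified bit back
assert np.array_equal(received, codeword)
```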


Example: Hadamard codes

The Hadamard code is a [2^r,\, r,\, 2^{r-1}]_2 linear code and is capable of correcting many errors. The Hadamard code can be constructed column by column: the i^{\text{th}} column is the bits of the binary representation of the integer i, as shown in the following example. The Hadamard code has minimum distance 2^{r-1} and therefore can correct 2^{r-2}-1 errors.

Example: The linear block code with the following generator matrix is an [8,3,4]_2 Hadamard code:

:\boldsymbol{G}_{\mathrm{Had}} = \begin{pmatrix} 0\ 0\ 0\ 0\ 1\ 1\ 1\ 1 \\ 0\ 0\ 1\ 1\ 0\ 0\ 1\ 1 \\ 0\ 1\ 0\ 1\ 0\ 1\ 0\ 1 \end{pmatrix}.

The Hadamard code is a special case of the Reed–Muller code. If we take the first column (the all-zero column) out from \boldsymbol{G}_{\mathrm{Had}}, we get the [7,3,4]_2 ''simplex code'', which is the ''dual code'' of the Hamming code.
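
As a sketch of the column-by-column construction just described (illustrative code, not from the original text), each column of the r × 2^r generator matrix holds the r-bit binary representation of its column index:

```python
# Sketch: build the r x 2^r Hadamard generator matrix column by column,
# where column i holds the r-bit binary representation of i.
import numpy as np
from itertools import product

def hadamard_generator(r):
    """Generator matrix of the [2^r, r, 2^(r-1)]_2 Hadamard code."""
    cols = [[(i >> (r - 1 - b)) & 1 for b in range(r)] for i in range(2 ** r)]
    return np.array(cols).T           # shape (r, 2^r)

G = hadamard_generator(3)
print(G)  # matches the 3 x 8 matrix in the example above

# Every nonzero codeword has weight 2^(r-1) = 4, so the minimum distance is 4.
codewords = {tuple((np.array(m) @ G) % 2) for m in product([0, 1], repeat=3)}
assert {sum(c) for c in codewords if any(c)} == {4}
```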


Nearest neighbor algorithm

The parameter d is closely related to the error correcting ability of the code. The following construction/algorithm illustrates this (it is called the nearest neighbor decoding algorithm):

Input: A ''received vector'' v in \mathbb{F}_q^n.

Output: A codeword w in C closest to v, if any.

* Starting with t = 0, repeat the following two steps.
* Enumerate the elements of the ball of (Hamming) radius t around the received word v, denoted B_t(v).
** For each w in B_t(v), check whether w is in C. If so, return w as the solution.
* Increment t. Fail only when t > (d - 1)/2, so that the enumeration is complete and no solution has been found.

We say that a linear code C is t-error correcting if there is at most one codeword in B_t(v) for each v in \mathbb{F}_q^n.
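
A minimal Python sketch of this procedure (ours, and deliberately brute-force: the ball enumeration mirrors the algorithm as stated rather than aiming for efficiency) might look as follows, for a binary code given as an explicit set of codewords:

```python
# Sketch: nearest neighbor decoding by growing Hamming balls around the received word.
from itertools import combinations

def nearest_neighbor_decode(received, code, d):
    """Return the codeword within distance (d-1)/2 of `received`, or None.

    `received` is a tuple of bits, `code` a set of equal-length bit tuples,
    and `d` the minimum distance of the code.
    """
    n = len(received)
    for t in range((d - 1) // 2 + 1):          # radii t = 0, 1, ..., (d-1)/2
        # Words at distance exactly t from `received` (smaller radii were
        # already checked in earlier iterations, so the ball is covered).
        for positions in combinations(range(n), t):
            w = list(received)
            for p in positions:
                w[p] ^= 1
            if tuple(w) in code:
                return tuple(w)
    return None                                 # decoding failure

# Example with the [3,1,3] repetition code: a single error is corrected.
code = {(0, 0, 0), (1, 1, 1)}
print(nearest_neighbor_decode((1, 0, 1), code, d=3))  # (1, 1, 1)
```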


Popular notation

Codes in general are often denoted by the letter ''C'', and a code of length ''n'' and of rank ''k'' (i.e., having ''k'' code words in its basis and ''k'' rows in its ''generating matrix'') is generally referred to as an (''n'', ''k'') code. Linear block codes are frequently denoted as [''n'', ''k'', ''d''] codes, where ''d'' refers to the code's minimum Hamming distance between any two code words. (The [''n'', ''k'', ''d''] notation should not be confused with the (''n'', ''M'', ''d'') notation used to denote a ''non-linear'' code of length ''n'', size ''M'' (i.e., having ''M'' code words), and minimum Hamming distance ''d''.)


Singleton bound

''Lemma'' (Singleton bound): Every linear [n,k,d] code C satisfies k + d \leq n + 1.

A code C whose parameters satisfy k + d = n + 1 is called maximum distance separable or MDS. Such codes, when they exist, are in some sense best possible.

If C1 and C2 are two codes of length n and if there is a permutation p in the symmetric group Sn for which (c1,...,cn) in C1 if and only if (cp(1),...,cp(n)) in C2, then we say C1 and C2 are permutation equivalent. In more generality, if there is an n \times n monomial matrix M\colon \mathbb{F}_q^n \to \mathbb{F}_q^n which sends C1 isomorphically to C2, then we say C1 and C2 are equivalent.

''Lemma'': Any linear code is permutation equivalent to a code which is in standard form.
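
As a quick check of the bound against codes discussed in this article (a worked example added here, not part of the original text):

```latex
% Checking the Singleton bound k + d \le n + 1:
\begin{align*}
  [7,4,3]_2 \ \text{Hamming code:}   \quad & 4 + 3 = 7 \le 8 \quad (\text{not MDS}), \\
  [8,3,4]_2 \ \text{Hadamard code:}  \quad & 3 + 4 = 7 \le 9 \quad (\text{not MDS}), \\
  [n,1,n]_q \ \text{repetition code:} \quad & 1 + n = n + 1 \quad (\text{MDS}).
\end{align*}
```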


Bonisoli's theorem

A code is defined to be equidistant if and only if there exists some constant ''d'' such that the distance between any two of the code's distinct codewords is equal to ''d''. In 1984 Arrigo Bonisoli determined the structure of linear one-weight codes over finite fields and proved that every equidistant linear code is a sequence of dual Hamming codes.


Examples

Some examples of linear codes include:
* Repetition codes
* Parity codes
* Cyclic codes
* Hamming codes
* Golay code, both the binary and ternary versions
* Polynomial codes, of which BCH codes are an example
* Reed–Solomon codes
* Reed–Muller codes
* Goppa codes
* Low-density parity-check codes
* Expander codes
* Multidimensional parity-check codes
* Toric codes
* Turbo codes


Generalization

Hamming spaces over non-field alphabets have also been considered, especially over finite rings, most notably Galois rings over Z4. This gives rise to modules instead of vector spaces and ring-linear codes (identified with submodules) instead of linear codes. The typical metric used in this case is the Lee distance. There exists a Gray isometry between \mathbb{Z}_2^{2m} (i.e. GF(2^{2m})) with the Hamming distance and \mathbb{Z}_4^m (also denoted as GR(4,m)) with the Lee distance; its main attraction is that it establishes a correspondence between some "good" codes that are not linear over \mathbb{Z}_2^{2m} as images of ring-linear codes from \mathbb{Z}_4^m. More recently, some authors have referred to such codes over rings simply as linear codes as well.
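
For concreteness, here is a small Python sketch (ours, not from the original text) of the standard Gray map from Z4 to pairs of bits (0 ↦ 00, 1 ↦ 01, 2 ↦ 11, 3 ↦ 10), illustrating that the Lee distance in \mathbb{Z}_4^m equals the Hamming distance of the binary images:

```python
# Sketch: the Gray map Z4 -> Z2^2 and the Lee/Hamming isometry it induces.
from itertools import product

GRAY = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}

def gray_map(word):
    """Map a word over Z4 to a binary word of twice the length."""
    return tuple(bit for s in word for bit in GRAY[s])

def lee_distance(a, b):
    """Lee distance on Z4: per-symbol cost min(|x-y|, 4-|x-y|), summed."""
    return sum(min((x - y) % 4, (y - x) % 4) for x, y in zip(a, b))

def hamming_distance(a, b):
    return sum(x != y for x, y in zip(a, b))

# The Gray map is an isometry: Lee distance before = Hamming distance after.
for a, b in product(product(range(4), repeat=2), repeat=2):
    assert lee_distance(a, b) == hamming_distance(gray_map(a), gray_map(b))
```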


See also

* Decoding methods




External links


''q''-ary code generator program

Code Tables: Bounds on the parameters of various types of codes
''IAKS, Fakultät für Informatik, Universität Karlsruhe (TH)''. Online, up-to-date table of the optimal binary codes; includes non-binary codes.
The database of Z4 codes
Online, up-to-date database of optimal Z4 codes.