In coding theory, Justesen codes form a class of error-correcting codes that have a constant rate, constant relative distance, and a constant alphabet size. Before the Justesen code was discovered, no error-correcting code was known that had all three of these parameters constant. Subsequently, other codes with this property have been discovered, for example expander codes. These codes have important applications in computer science, such as in the construction of small-bias sample spaces.

Justesen codes are derived as the code concatenation of a Reed–Solomon code and the Wozencraft ensemble. The Reed–Solomon codes used achieve constant rate and constant relative distance at the expense of an alphabet size that is ''linear'' in the message length. The Wozencraft ensemble is a family of codes that achieve constant rate and constant alphabet size, but the relative distance is only constant for most of the codes in the family. The concatenation of the two codes first encodes the message using the Reed–Solomon code, and then encodes each symbol of the codeword further using a code from the Wozencraft ensemble, using a different code of the ensemble at each position of the codeword. This differs from the usual code concatenation, in which the inner code is the same at every position. The Justesen code can be constructed very efficiently, using only logarithmic space.


Definition

The Justesen code is the concatenation of an (N,K,D)_{q^k} outer code C_\text{out} and N different (n,k,d)_q inner codes C_\text{in}^i, for 1 \le i \le N. More precisely, the concatenation of these codes, denoted by C_\text{out} \circ (C_\text{in}^1, \ldots, C_\text{in}^N), is defined as follows. Given a message m \in \left(\mathbb{F}_{q^k}\right)^K, we first compute the codeword produced by the outer code: C_\text{out}(m) = (c_1, c_2, \ldots, c_N). Then we apply each of the N linear inner codes to the corresponding coordinate of that codeword to produce the final codeword; that is, C_\text{out} \circ (C_\text{in}^1, \ldots, C_\text{in}^N)(m) = \left(C_\text{in}^1(c_1), C_\text{in}^2(c_2), \ldots, C_\text{in}^N(c_N)\right). Looking back at the definitions of the outer code and the linear inner codes, this definition of the Justesen code makes sense: the codeword of the outer code is a vector with N coordinates, and we have N linear inner codes to apply to those N coordinates. For the Justesen code, the outer code C_\text{out} is chosen to be a Reed–Solomon code over a field \mathbb{F}_{q^k}, evaluated over \mathbb{F}_{q^k} \setminus \{0\}, of rate R, 0 < R < 1. The outer code C_\text{out} has relative distance \delta_\text{out} = 1 - R and block length N = q^k - 1. The set of inner codes is the Wozencraft ensemble \left\{ C_\text{in}^i \right\}_{1 \le i \le N}.
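The construction can be sketched in a few lines of code. The following is a schematic illustration, not taken from the source: the helper names, the toy outer code over GF(7), and the inner maps c \mapsto (c, \alpha c) are choices made for this sketch. It only shows the structural point that a different inner code is applied at each position of the outer codeword.

```python
# Schematic sketch of Justesen-style concatenation (hypothetical helpers, toy parameters):
# compute the outer codeword (c_1, ..., c_N), then encode coordinate i with the i-th inner code.
from typing import Callable, Sequence

def concatenate(outer_encode: Callable[[Sequence[int]], Sequence[int]],
                inner_encoders: Sequence[Callable[[int], Sequence[int]]],
                message: Sequence[int]) -> list[int]:
    """Concatenation with a *different* inner code per position."""
    outer_codeword = outer_encode(message)            # (c_1, ..., c_N)
    assert len(outer_codeword) == len(inner_encoders)
    result = []
    for c_i, inner_encode in zip(outer_codeword, inner_encoders):
        result.extend(inner_encode(c_i))              # C_in^i(c_i)
    return result

# Toy instance over the prime field GF(7): the outer code evaluates a degree-<2
# polynomial at the 6 nonzero points, and the i-th inner code maps c to (c, alpha_i * c).
outer = lambda m: [(m[0] + x * m[1]) % 7 for x in range(1, 7)]
inners = [lambda c, a=a: [c, (a * c) % 7] for a in range(1, 7)]
print(concatenate(outer, inners, [2, 3]))
```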


Property of Justesen code

As the linear codes in the Wozencraft ensemble have rate \tfrac{1}{2}, the Justesen code is the concatenated code C^* = C_\text{out} \circ (C_\text{in}^1, C_\text{in}^2, \ldots, C_\text{in}^N) with rate \tfrac{R}{2}. We have the following theorem that estimates the distance of the concatenated code C^*.
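For concreteness, the rate can be read off from the parameters (a routine calculation, stated here for completeness): the message consists of K symbols over \mathbb{F}_{q^k}, i.e. kK symbols over \mathbb{F}_q, and the final codeword consists of N inner codewords of length 2k over \mathbb{F}_q, so

: R(C^*) = \frac{kK}{2kN} = \frac{1}{2} \cdot \frac{K}{N} = \frac{R}{2}.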


Theorem

Let \varepsilon > 0. Then C^* has relative distance at least (1-R-\varepsilon) H_q^{-1}\left(\tfrac{1}{2}-\varepsilon\right).


Proof

In order to prove a lower bound for the distance of the code C^*, we prove that the Hamming distance of an arbitrary pair of distinct codewords has a lower bound. Let \Delta(c^1, c^2) denote the Hamming distance of two codewords c^1 and c^2. For any given m_1 \neq m_2 \in \left(\mathbb{F}_{q^k}\right)^K, we want a lower bound for \Delta(C^*(m_1), C^*(m_2)).

Notice that if C_\text{out}(m) = (c_1, \cdots, c_N), then C^*(m) = \left(C_\text{in}^1(c_1), \cdots, C_\text{in}^N(c_N)\right). So to lower-bound \Delta(C^*(m_1), C^*(m_2)), we need to take into account the distances of C_\text{in}^1, \cdots, C_\text{in}^N. Suppose

: \begin{align} C_\text{out}(m_1) &= \left(c_1^1, \cdots, c_N^1\right) \\ C_\text{out}(m_2) &= \left(c_1^2, \cdots, c_N^2\right) \end{align}

Recall that \left\{ C_\text{in}^i \right\}_{1 \le i \le N} is a Wozencraft ensemble. By the Wozencraft ensemble theorem, at least (1-\varepsilon)N of the linear codes C_\text{in}^i have distance at least H_q^{-1}\left(\tfrac{1}{2}-\varepsilon\right) \cdot 2k. So if, for some 1 \leqslant i \leqslant N, we have c_i^1 \ne c_i^2 and the code C_\text{in}^i has distance \geqslant H_q^{-1}\left(\tfrac{1}{2}-\varepsilon\right) \cdot 2k, then

: \Delta\left(C_\text{in}^i\left(c_i^1\right), C_\text{in}^i\left(c_i^2\right)\right) \geqslant H_q^{-1}\left(\tfrac{1}{2}-\varepsilon\right) \cdot 2k.

Further, if there are T indices 1 \leqslant i \leqslant N such that c_i^1 \ne c_i^2 and the code C_\text{in}^i has distance \geqslant H_q^{-1}\left(\tfrac{1}{2}-\varepsilon\right) \cdot 2k, then

: \Delta\left(C^*(m_1), C^*(m_2)\right) \geqslant H_q^{-1}\left(\tfrac{1}{2}-\varepsilon\right) \cdot 2k \cdot T.

So the final task is to find a lower bound for T. Define

: S = \left\{ i : 1 \leqslant i \leqslant N,\ c_i^1 \ne c_i^2 \right\}.

Then T is the number of linear codes C_\text{in}^i, i \in S, having distance at least H_q^{-1}\left(\tfrac{1}{2}-\varepsilon\right) \cdot 2k. We now estimate |S|. Since the outer code has relative distance 1-R, we have |S| = \Delta(C_\text{out}(m_1), C_\text{out}(m_2)) \geqslant (1-R)N. By the Wozencraft ensemble theorem, at most \varepsilon N of the linear codes have distance less than H_q^{-1}\left(\tfrac{1}{2}-\varepsilon\right) \cdot 2k, so

: T \geqslant |S| - \varepsilon N \geqslant (1-R)N - \varepsilon N = (1-R-\varepsilon)N.

Finally,

: \Delta(C^*(m_1), C^*(m_2)) \geqslant H_q^{-1}\left(\tfrac{1}{2}-\varepsilon\right) \cdot 2k \cdot T \geqslant H_q^{-1}\left(\tfrac{1}{2}-\varepsilon\right) \cdot 2k \cdot (1-R-\varepsilon)N.

This holds for arbitrary m_1 \ne m_2. Since the block length of C^* is 2kN, it follows that C^* has relative distance at least (1-R-\varepsilon) H_q^{-1}\left(\tfrac{1}{2}-\varepsilon\right), which completes the proof.
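To get a feeling for the size of this bound, the following numerical sketch (an illustration added here, not part of the source; the parameter values are arbitrary) evaluates (1-R-\varepsilon) H_q^{-1}\left(\tfrac{1}{2}-\varepsilon\right) for the binary case by inverting the q-ary entropy function with bisection.

```python
# Numerical illustration of the relative-distance bound (arbitrary example parameters).
import math

def entropy_q(x: float, q: int = 2) -> float:
    """q-ary entropy function H_q(x) on [0, 1 - 1/q]."""
    if x == 0:
        return 0.0
    return (x * math.log(q - 1, q)
            - x * math.log(x, q)
            - (1 - x) * math.log(1 - x, q))

def inv_entropy_q(y: float, q: int = 2, iters: int = 60) -> float:
    """H_q^{-1}(y): the unique x in [0, 1 - 1/q] with H_q(x) = y, found by bisection."""
    lo, hi = 0.0, 1.0 - 1.0 / q
    for _ in range(iters):
        mid = (lo + hi) / 2
        if entropy_q(mid, q) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

R, eps, q = 0.5, 0.05, 2
bound = (1 - R - eps) * inv_entropy_q(0.5 - eps, q)
print(f"relative distance >= {bound:.4f}")   # roughly 0.04 for these parameters
```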


Comments

We want the code to be ''strongly explicit'', so the first question is what "strongly explicit" means. Loosely speaking, for a linear code the "explicit" property relates to the complexity of constructing its generator matrix G. In effect, it means that we can compute the matrix in logarithmic space, without resorting to a brute-force search to verify that the code has the required distance. For codes that are not linear, one can instead consider the complexity of the encoding algorithm. Both the Wozencraft ensemble and Reed–Solomon codes are strongly explicit. Therefore, we have the following result:

''Corollary:'' The concatenated code C^* is an asymptotically good code (that is, it has rate R > 0 and relative distance \delta > 0 for small q) and has a strongly explicit construction.


An example of a Justesen code

The following slightly different code is referred to as the Justesen code in MacWilliams and Sloane. It is the particular case of the Justesen code considered above for a very particular Wozencraft ensemble. Let ''R'' be a Reed–Solomon code of length N = 2^m - 1, rank K and minimum weight N - K + 1. The symbols of ''R'' are elements of F = \mathrm{GF}(2^m), and the codewords are obtained by taking every polynomial f over F of degree less than K and listing the values of f on the non-zero elements of F in some predetermined order. Let \alpha be a primitive element of F. For a codeword a = (a_1, \ldots, a_N) from ''R'', let b be the vector of length 2N over F given by

: \mathbf{b} = \left( a_1, a_1, a_2, \alpha a_2, \ldots, a_N, \alpha^{N-1} a_N \right)

and let c be the vector of length 2Nm obtained from b by expressing each element of F as a binary vector of length m. The ''Justesen code'' is the linear code containing all such c. The parameters of this code are length 2mN, dimension mK and minimum distance at least

: \sum_{i=1}^\ell i \binom{2m}{i},

where \ell is the greatest integer satisfying \sum_{i=1}^\ell \binom{2m}{i} \leq N - K + 1. (See MacWilliams and Sloane for a proof.)
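The encoding step described above is easy to sketch in code. The following is a minimal illustration; the field size m = 4, the primitive polynomial, and the helper names are choices made for this sketch rather than details from MacWilliams and Sloane. Given a Reed–Solomon codeword a = (a_1, \ldots, a_N) over \mathrm{GF}(2^m), it forms \mathbf{b} = (a_1, a_1, a_2, \alpha a_2, \ldots, a_N, \alpha^{N-1} a_N) and expands each field element into m bits.

```python
# Sketch of the Justesen encoding step from an RS codeword (toy parameters for GF(2^4)).
M_BITS = 4                  # m: field GF(2^4), so N = 2^m - 1 = 15
PRIM_POLY = 0b10011         # x^4 + x + 1, primitive over GF(2)
ALPHA = 0b0010              # the element "x" is primitive for this polynomial

def gf_mul(a: int, b: int, m: int = M_BITS, poly: int = PRIM_POLY) -> int:
    """Multiplication in GF(2^m) via carry-less multiply and modular reduction."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        b >>= 1
        a <<= 1
        if a & (1 << m):
            a ^= poly
    return result

def justesen_encode(a: list[int]) -> list[int]:
    """Map an RS codeword over GF(2^m) to the binary Justesen codeword."""
    bits = []
    alpha_pow = 1                                  # alpha^(i-1), starting at alpha^0 = 1
    for a_i in a:
        for x in (a_i, gf_mul(alpha_pow, a_i)):    # the pair (a_i, alpha^(i-1) * a_i)
            bits.extend((x >> j) & 1 for j in range(M_BITS))
        alpha_pow = gf_mul(alpha_pow, ALPHA)
    return bits

# Toy usage with a stand-in "codeword" of length N = 15 (a real application would
# supply a genuine Reed-Solomon codeword here); the output has 2*m*N = 120 bits.
codeword = list(range(1, 16))
print(len(justesen_encode(codeword)))              # 120
```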


See also

* Wozencraft ensemble
* Concatenated error correction code
* Reed–Solomon error correction
* Linear code


References


* Lecture 28: Justesen Code. Coding Theory course, Prof. Atri Rudra.
* Lecture 6: Concatenated codes. Forney codes. Justesen codes. Essential Coding Theory.
* {{cite book |author=F.J. MacWilliams |author-link=Jessie MacWilliams |author2=N.J.A. Sloane |title=The Theory of Error-Correcting Codes |url=https://archive.org/details/theoryoferrorcor0000macw |url-access=registration |publisher=North-Holland |year=1977 |isbn=0-444-85193-3 |pages=306–316}}