Binary Goppa Code

In mathematics and computer science, the binary Goppa code is an error-correcting code that belongs to the class of general Goppa codes originally described by Valerii Denisovich Goppa. The binary structure gives it several mathematical advantages over non-binary variants and also makes it a better fit for common usage in computers and telecommunication. Binary Goppa codes have properties that make them suitable for cryptography in McEliece-like cryptosystems and similar setups.


Construction and properties

A binary Goppa code is defined by a polynomial g(x) of degree t over a finite field GF(2^m) with no repeated roots, and a sequence L_1, \dots, L_n of n distinct elements from GF(2^m) that are not roots of g. Codewords belong to the kernel of the syndrome function, forming a subspace of \{0,1\}^n:

: \Gamma(g,L)=\left\{ c \in \{0,1\}^n \;\middle|\; \sum_{i=1}^{n} \frac{c_i}{x-L_i} \equiv 0 \mod g(x) \right\}

The code defined by a tuple (g,L) has dimension at least n-mt and distance at least 2t+1, thus it can encode messages of length at least n-mt using codewords of size n while correcting at least \left\lfloor \frac{2t+1-1}{2} \right\rfloor = t errors. It possesses a convenient parity-check matrix H in the form

: H=VD=\begin{pmatrix} 1 & 1 & 1 & \cdots & 1\\ L_1^1 & L_2^1 & L_3^1 & \cdots & L_n^1\\ L_1^2 & L_2^2 & L_3^2 & \cdots & L_n^2\\ \vdots & \vdots & \vdots & \ddots & \vdots \\ L_1^{t-1} & L_2^{t-1} & L_3^{t-1} & \cdots & L_n^{t-1} \end{pmatrix} \begin{pmatrix} \frac{1}{g(L_1)} & & & & \\ & \frac{1}{g(L_2)} & & & \\ & & \frac{1}{g(L_3)} & & \\ & & & \ddots & \\ & & & & \frac{1}{g(L_n)} \end{pmatrix}

Note that this form of the parity-check matrix, being composed of a Vandermonde matrix V and a diagonal matrix D, shares its form with the check matrices of alternant codes, so alternant decoders can be used on it. Such decoders usually provide only limited error-correcting capability (in most cases correcting t/2 errors). For practical purposes, the parity-check matrix of a binary Goppa code is usually converted to a more computer-friendly binary form by a trace construction, which converts the t-by-n matrix over GF(2^m) to an mt-by-n binary matrix by writing the polynomial coefficients of the GF(2^m) elements on m successive rows.
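The construction above can be sketched in code. The following is a minimal illustration, not a production implementation: it assumes a toy field GF(2^4) with modulus x^4 + x + 1, an arbitrary illustrative choice of squarefree g, and the support L taken as all field elements that are not roots of g.

```python
# Sketch: building the parity-check matrix H = V*D of a binary Goppa code
# over GF(2^4). Field elements are ints whose bits are the polynomial
# coefficients. Parameters (M, MOD, g) are illustrative assumptions.

M = 4
MOD = 0b10011  # x^4 + x + 1, irreducible over GF(2)

def gf_mul(a, b):
    """Carry-less multiplication reduced modulo the field polynomial."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & (1 << M):
            a ^= MOD
        b >>= 1
    return r

def gf_pow(a, e):
    """Square-and-multiply exponentiation in GF(2^M)."""
    r = 1
    while e:
        if e & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        e >>= 1
    return r

def gf_inv(a):
    return gf_pow(a, 2**M - 2)  # a^(2^M - 2) = a^(-1) for a != 0

def poly_eval(coeffs, x):
    """Horner evaluation; coefficients are low-degree first."""
    r = 0
    for c in reversed(coeffs):
        r = gf_mul(r, x) ^ c
    return r

# g(x) = x^2 + x + (element 0b10); its derivative is 1, so g is squarefree.
g = [0b10, 1, 1]
t = 2

# Support L: every field element that is not a root of g.
L = [a for a in range(2**M) if poly_eval(g, a) != 0]
n = len(L)

# H = V*D: entry (i, j) is L_j^i / g(L_j).
H = [[gf_mul(gf_pow(Lj, i), gf_inv(poly_eval(g, Lj))) for Lj in L]
     for i in range(t)]

# Trace construction: expand each GF(2^M) entry into M binary rows.
H_bin = [[(H[i][j] >> b) & 1 for j in range(n)]
         for i in range(t) for b in range(M)]
```

The binary matrix `H_bin` has mt rows and n columns, matching the trace construction described above.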


Decoding

Decoding of binary Goppa codes is traditionally done by the Patterson algorithm, which gives good error-correcting capability (it corrects all t design errors) and is also fairly simple to implement.

The Patterson algorithm converts a syndrome to a vector of errors. The syndrome of a binary word c=(c_1,\dots,c_n) is expected to take the form

: s(x) \equiv \sum_{i=1}^{n} \frac{c_i}{x-L_i} \mod g(x)

An alternative form of the parity-check matrix based on the formula for s(x) can be used to produce such a syndrome with a simple matrix multiplication.

The algorithm then computes v(x) \equiv \sqrt{s(x)^{-1}-x} \mod g(x). That fails when s(x) \equiv 0, but that is exactly the case when the input word is a codeword, so no error correction is necessary.

v(x) is reduced to polynomials a(x) and b(x) using the extended Euclidean algorithm, so that a(x) \equiv b(x)\cdot v(x) \mod g(x), while \deg(a)\leq\lfloor t/2 \rfloor and \deg(b)\leq\lfloor (t-1)/2 \rfloor.

Finally, the ''error locator polynomial'' is computed as \sigma(x) = a(x)^2 + x\cdot b(x)^2. Note that in the binary case, locating the errors is sufficient to correct them, as each bit has only one other possible value. In non-binary cases a separate error-correction polynomial has to be computed as well.

If the original codeword was decodable and e=(e_1,e_2,\dots,e_n) was the binary error vector, then

: \sigma(x) = \prod_{i\,:\,e_i=1} (x-L_i)

Factoring or evaluating all roots of \sigma(x) therefore gives enough information to recover the error vector and fix the errors.
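The last step, turning the error locator polynomial into bit corrections, can be sketched as follows. This is only an illustration under assumed toy parameters (GF(2^4), a hypothetical support L, and errors planted at positions 2 and 5 of the zero codeword); it is not a full Patterson decoder.

```python
# Sketch: flipping the bits whose support elements are roots of sigma(x).
# Toy field GF(2^4) with modulus x^4 + x + 1; over GF(2), -L_i = L_i.

M, MOD = 4, 0b10011

def gf_mul(a, b):
    """Carry-less multiplication reduced modulo the field polynomial."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & (1 << M):
            a ^= MOD
        b >>= 1
    return r

def poly_eval(coeffs, x):
    """Horner evaluation; coefficients are low-degree first."""
    r = 0
    for c in reversed(coeffs):
        r = gf_mul(r, x) ^ c
    return r

def poly_mul(p, q):
    """Schoolbook polynomial product over GF(2^M)."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] ^= gf_mul(a, b)
    return r

# Hypothetical support, and a received word = zero codeword + 2 errors.
L = list(range(1, 12))
received = [0] * len(L)
received[2] ^= 1
received[5] ^= 1

# The decoder would derive sigma from the syndrome; here we form it
# directly from the known error positions: sigma(x) = (x-L[2])(x-L[5]).
sigma = poly_mul([L[2], 1], [L[5], 1])

# Locate the errors by evaluating sigma at every support element,
# and flip the corresponding bits.
corrected = list(received)
for i, Li in enumerate(L):
    if poly_eval(sigma, Li) == 0:
        corrected[i] ^= 1
```

Since sigma has degree two, it has at most two roots, so exactly the two planted error positions are flipped and `corrected` is the zero codeword again.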


Properties and usage

Binary Goppa codes, viewed as a special case of Goppa codes, have the interesting property that they correct the full \deg(g) errors, while only \deg(g)/2 errors are corrected in the ternary and all other cases. Asymptotically, this error-correcting capability meets the famous Gilbert–Varshamov bound. Because of the high error-correction capacity relative to the code rate, and because the parity-check matrix is usually hardly distinguishable from a random binary matrix of full rank, binary Goppa codes are used in several post-quantum cryptosystems, notably the McEliece cryptosystem and the Niederreiter cryptosystem.
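The trade-off between correction capacity and code rate is easy to check numerically. The parameters below are illustrative (they happen to match a commonly cited McEliece parameter set, but any m, t, n with n > mt would do):

```python
# Lower bound on the dimension of a binary Goppa code: k >= n - mt.
m, t, n = 12, 64, 3488   # illustrative parameters
k = n - m * t            # message length is at least n - mt = 2720 bits
rate = k / n             # code rate is at least about 0.78
```

So such a code carries at least 2720 message bits per 3488-bit codeword while correcting 64 errors.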


References

* Elwyn R. Berlekamp, "Goppa Codes", IEEE Transactions on Information Theory, Vol. IT-19, No. 5, September 1973. https://web.archive.org/web/20170829142555/http://infosec.seu.edu.cn/space/kangwei/senior_thesis/Goppa.pdf
* Daniela Engelbert, Raphael Overbeck, Arthur Schmidt, "A summary of McEliece-type cryptosystems and their security", Journal of Mathematical Cryptology 1, 151–199. Previous version: http://eprint.iacr.org/2006/162/
* Daniel J. Bernstein, "List decoding for binary Goppa codes". http://cr.yp.to/codes/goppalist-20110303.pdf


See also

* BCH codes
* Code rate
* Reed–Solomon error correction