In mathematics and computer science, the binary Goppa code is an error-correcting code that belongs to the class of general Goppa codes originally described by Valerii Denisovich Goppa. Its binary structure gives it several mathematical advantages over non-binary variants and also makes it a better fit for common use in computers and telecommunication. Binary Goppa codes have properties that make them suitable for cryptography in McEliece-like cryptosystems and similar setups.


Construction and properties

An irreducible binary Goppa code is defined by a polynomial g(x) of degree t over a finite field GF(2^m) with no repeated roots, and a sequence L_1, \dots, L_n of n distinct elements of GF(2^m) that are not roots of g. Codewords are the binary words whose syndrome vanishes, forming a subspace of \{0,1\}^n:

: \Gamma(g,L)=\left\{ c \in \{0,1\}^n \,\middle|\, \sum_{i=1}^{n} \frac{c_i}{x-L_i} \equiv 0 \pmod{g(x)} \right\}

The code defined by a tuple (g,L) has dimension at least n-mt and minimum distance at least 2t+1, so it can encode messages of length at least n-mt using codewords of length n while correcting up to \left\lfloor \frac{(2t+1)-1}{2} \right\rfloor = t errors. It possesses a convenient
parity-check matrix H of the form

: H=VD=\begin{pmatrix} 1 & 1 & 1 & \cdots & 1\\ L_1^1 & L_2^1 & L_3^1 & \cdots & L_n^1\\ L_1^2 & L_2^2 & L_3^2 & \cdots & L_n^2\\ \vdots & \vdots & \vdots & \ddots & \vdots \\ L_1^{t-1} & L_2^{t-1} & L_3^{t-1} & \cdots & L_n^{t-1} \end{pmatrix} \begin{pmatrix} \frac{1}{g(L_1)} & & & & \\ & \frac{1}{g(L_2)} & & & \\ & & \frac{1}{g(L_3)} & & \\ & & & \ddots & \\ & & & & \frac{1}{g(L_n)} \end{pmatrix}

Note that this form of the parity-check matrix, being composed of a Vandermonde matrix V and a diagonal matrix D, shares its shape with the check matrices of alternant codes, so alternant decoders can be used on it. Such decoders usually provide only limited error-correcting capability (in most cases t/2 errors). For practical purposes, the parity-check matrix of a binary Goppa code is usually converted to a more computer-friendly binary form by a trace construction, which converts the t \times n matrix over GF(2^m) to an mt \times n binary matrix by writing the polynomial coefficients of the GF(2^m) elements on m successive rows.
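As a concrete illustration of the construction above, the following Python sketch builds H = VD for a small code over GF(2^4), applies the trace construction, and checks the dimension bound k ≥ n − mt. The field modulus, the choice g(x) = x^2 + x + α^3, and the support are illustrative picks for this sketch, not canonical parameters.

```python
# Illustrative parameters: GF(2^4) with primitive modulus x^4 + x + 1,
# g(x) = x^2 + x + alpha^3 (so t = 2), support L = all non-roots of g.
M, MOD = 4, 0b10011

def gf_mul(a, b):
    """Multiply in GF(2^m): shift-and-add with modular reduction."""
    r = 0
    for _ in range(M):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << M):
            a ^= MOD
    return r

def gf_inv(a):
    """a^(2^m - 2) = a^(-1) by square-and-multiply (requires a != 0)."""
    r, e = 1, (1 << M) - 2
    while e:
        if e & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        e >>= 1
    return r

def gf_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf_mul(r, a)
    return r

g = [0b1000, 1, 1]                     # coefficients, lowest degree first
t = len(g) - 1

def g_eval(x):
    acc = 0
    for c in reversed(g):              # Horner's rule
        acc = gf_mul(acc, x) ^ c
    return acc

L = [x for x in range(1 << M) if g_eval(x) != 0]
n = len(L)

# H = V * D: Vandermonde entries L_j^i scaled by the diagonal 1/g(L_j).
H = [[gf_mul(gf_pow(Lj, i), gf_inv(g_eval(Lj))) for Lj in L]
     for i in range(t)]

# Trace construction: each GF(2^m) entry contributes m binary rows.
Hbin = [[(H[i][j] >> bit) & 1 for j in range(n)]
        for i in range(t) for bit in range(M)]

def gf2_rank(rows):
    """Rank over GF(2) via an XOR basis keyed by leading bit."""
    basis = {}
    for row in rows:
        r = int("".join(map(str, row)), 2)
        while r:
            h = r.bit_length() - 1
            if h in basis:
                r ^= basis[h]
            else:
                basis[h] = r
                break
    return len(basis)

k = n - gf2_rank(Hbin)
print(n, k)                            # dimension k >= n - m*t = 16 - 8 = 8
```

The binary rank of Hbin is at most mt, which is exactly why the dimension bound k ≥ n − mt holds.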


Decoding

Decoding of binary Goppa codes is traditionally done with the Patterson algorithm, which gives good error-correcting capability (it corrects all t design errors) and is fairly simple to implement.

The Patterson algorithm converts a syndrome to a vector of errors. The syndrome of a binary word c=(c_1,\dots,c_n) is expected to take the form

: s(x) \equiv \sum_{i=1}^n \frac{c_i}{x-L_i} \mod g(x)

An alternative form of the parity-check matrix, based on the formula for s(x), can be used to produce such a syndrome with a simple matrix multiplication.

The algorithm then computes v(x) \equiv \sqrt{s(x)^{-1}-x} \mod g(x). This fails when s(x) \equiv 0, but that is exactly the case in which the input word is a codeword, so no error correction is necessary.

v(x) is reduced to polynomials a(x) and b(x) using the extended Euclidean algorithm, so that a(x) \equiv b(x)\cdot v(x) \mod g(x), while \deg(a)\leq\lfloor t/2 \rfloor and \deg(b)\leq\lfloor (t-1)/2 \rfloor.

Finally, the ''error locator polynomial'' is computed as \sigma(x) = a(x)^2 + x\cdot b(x)^2. Note that in the binary case, locating the errors is sufficient to correct them, as each bit has only one other possible value. In non-binary cases a separate error-correction polynomial has to be computed as well.

If the original codeword was decodable and e=(e_1,\dots,e_n) is the binary error vector, then

: \sigma(x) = \prod_{i=1}^n (x-L_i)^{e_i}

Factoring or evaluating all roots of \sigma(x) therefore gives enough information to recover the error vector and fix the errors.
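The recombination \sigma(x) = a(x)^2 + x\cdot b(x)^2 works because squaring is the Frobenius map in characteristic 2, so every field element has a unique square root c^{2^{m-1}} and every polynomial splits uniquely into such even and odd parts. A small Python sketch of just this split; the field (GF(2^4) with modulus x^4 + x + 1) and the sample polynomial are arbitrary choices for illustration:

```python
# Any sigma(x) over GF(2^m) splits uniquely as a(x)^2 + x * b(x)^2,
# because squaring is the Frobenius map and sqrt(c) = c^(2^(m-1)).
M, MOD = 4, 0b10011                 # GF(2^4), modulus x^4 + x + 1

def gf_mul(a, b):
    r = 0
    for _ in range(M):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << M):
            a ^= MOD
    return r

def gf_sqrt(c):
    for _ in range(M - 1):          # c^(2^(m-1)); squaring it returns c
        c = gf_mul(c, c)
    return c

def polymul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, x in enumerate(p):
        for j, y in enumerate(q):
            r[i + j] ^= gf_mul(x, y)
    return r

def polyadd(p, q):
    if len(p) < len(q):
        p, q = q, p
    return [x ^ y for x, y in zip(p, q + [0] * (len(p) - len(q)))]

sigma = [7, 3, 1, 9, 1]             # coefficients, lowest degree first

# Square roots of the even- and odd-index coefficients...
a = [gf_sqrt(c) for c in sigma[0::2]]
b = [gf_sqrt(c) for c in sigma[1::2]]

# ...recombine: a(x)^2 + x * b(x)^2 reproduces sigma(x) exactly
# (cross terms of each square cancel in characteristic 2).
recon = polyadd(polymul(a, a), [0] + polymul(b, b))
assert recon == sigma
```

Patterson's algorithm runs this reasoning in the other direction: the extended Euclidean step produces a(x) and b(x) directly, and the recombination yields the error locator.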


Properties and usage

Binary Goppa codes, viewed as a special case of Goppa codes, have the interesting property that they correct the full \deg(g) errors, while only \deg(g)/2 errors are correctable in the ternary and all other cases. Asymptotically, this error-correcting capability meets the famous Gilbert–Varshamov bound.

Because of the high error-correction capacity relative to the code rate, and because the parity-check matrix is usually hard to distinguish from a random binary matrix of full rank, binary Goppa codes are used in several post-quantum cryptosystems, notably the McEliece cryptosystem and the Niederreiter cryptosystem.
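To make the cryptographic use concrete, here is a toy sketch of the Niederreiter data flow, in which the ciphertext is the syndrome of a secret low-weight error vector. A random binary matrix stands in for the scrambled Goppa parity-check matrix, so this has no security whatsoever and only illustrates the shape of the scheme; the parameters are arbitrary.

```python
# Toy Niederreiter flow: ciphertext = H * e over GF(2), where e is a
# secret error vector of weight t.  In the real scheme H is a disguised
# Goppa parity-check matrix and decryption is Goppa decoding; the random
# matrix here merely illustrates the data flow.
import random

random.seed(1)
n, r, t = 16, 8, 2
H = [[random.randint(0, 1) for _ in range(n)] for _ in range(r)]

def syndrome(H, e):
    return [sum(h * x for h, x in zip(row, e)) % 2 for row in H]

# The message is encoded as the support of a weight-t error vector.
e = [0] * n
for pos in random.sample(range(n), t):
    e[pos] = 1

ciphertext = syndrome(H, e)
print(ciphertext)   # the legitimate receiver recovers e by decoding
```

The security rests on syndrome decoding of a random-looking code being hard, while the trapdoor (the hidden Goppa structure) lets the key holder decode efficiently.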


References

* Elwyn R. Berlekamp, "Goppa Codes", IEEE Transactions on Information Theory, Vol. IT-19, No. 5, September 1973. https://web.archive.org/web/20170829142555/http://infosec.seu.edu.cn/space/kangwei/senior_thesis/Goppa.pdf
* Daniela Engelbert, Raphael Overbeck, Arthur Schmidt, "A summary of McEliece-type cryptosystems and their security", Journal of Mathematical Cryptology 1, 151–199. MR 2345114. Previous version: http://eprint.iacr.org/2006/162/
* Daniel J. Bernstein, "List decoding for binary Goppa codes". http://cr.yp.to/codes/goppalist-20110303.pdf


See also

* BCH codes
* Code rate
* Reed–Solomon error correction
* Coding theory