In mathematics and computer science, the binary Goppa code is an error-correcting code that belongs to the class of general Goppa codes originally described by Valerii Denisovich Goppa, but the binary structure gives it several mathematical advantages over non-binary variants, also providing a better fit for common usage in computers and telecommunication. Binary Goppa codes have interesting properties suitable for cryptography in McEliece-like cryptosystems and similar setups.
Construction and properties
An irreducible binary Goppa code is defined by a polynomial $g(x)$ of degree $t$ over a finite field $\mathrm{GF}(2^m)$ with no repeated roots, and a sequence $L = (\alpha_1, \ldots, \alpha_n)$ of $n$ distinct elements from $\mathrm{GF}(2^m)$ that are not roots of $g(x)$.
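For illustration in the sketches below, $\mathrm{GF}(2^m)$ arithmetic can be modelled with field elements stored as integers whose bits are binary polynomial coefficients. The following is a minimal Python sketch; the function names, the toy choice $m = 4$ and the reduction polynomial are assumptions for demonstration only, not part of any standard library:

```python
# Minimal GF(2^m) arithmetic used by the sketches below.
# Elements are integers whose bits are coefficients of polynomials over GF(2).
M = 4                       # toy extension degree m, chosen only for illustration
REDUCTION_POLY = 0b10011    # x^4 + x + 1, an irreducible polynomial of degree M

def gf_mul(a: int, b: int) -> int:
    """Multiply two GF(2^M) elements: carry-less multiplication with reduction."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        b >>= 1
        a <<= 1
        if (a >> M) & 1:            # degree reached M: reduce modulo the field polynomial
            a ^= REDUCTION_POLY
    return result

def gf_pow(a: int, e: int) -> int:
    """Square-and-multiply exponentiation in GF(2^M)."""
    result = 1
    while e:
        if e & 1:
            result = gf_mul(result, a)
        a = gf_mul(a, a)
        e >>= 1
    return result

def gf_inv(a: int) -> int:
    """Multiplicative inverse of a nonzero element: a^(2^M - 2)."""
    return gf_pow(a, (1 << M) - 2)
```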
Codewords belong to the kernel of the syndrome function, forming a subspace of $\{0,1\}^n$:

: $\Gamma(g, L) = \left\{\, c \in \{0,1\}^n \;\middle|\; \sum_{i=1}^{n} \frac{c_i}{x - \alpha_i} \equiv 0 \pmod{g(x)} \,\right\}$
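Building on the helpers above, the following is a minimal sketch of testing this membership condition. The syndrome is accumulated term by term using the identity $\frac{1}{x - \alpha} \equiv \frac{g(x) - g(\alpha)}{x - \alpha}\, g(\alpha)^{-1} \pmod{g(x)}$, so each term requires only one synthetic division of $g(x)$ by $(x - \alpha_i)$; function names are illustrative:

```python
def poly_eval(poly, x):
    """Evaluate a polynomial (list of GF(2^M) coefficients, lowest degree first) at x."""
    acc = 0
    for c in reversed(poly):
        acc = gf_mul(acc, x) ^ c
    return acc

def syndrome(word, g, support):
    """Compute s(x) = sum over set bits of 1/(x - alpha_i) mod g(x).

    `word` is a list of bits, `g` the coefficient list of a degree-t polynomial,
    `support` the sequence of alpha_i (none of which is a root of g).
    Each term is obtained from one synthetic division of g(x) by (x - alpha_i)."""
    t = len(g) - 1
    s = [0] * t                                 # the syndrome has degree < t
    for bit, alpha in zip(word, support):
        if not bit:
            continue
        q = [0] * t                             # quotient of g(x) / (x - alpha)
        acc = g[t]
        for j in range(t - 1, -1, -1):
            q[j] = acc
            acc = gf_mul(acc, alpha) ^ g[j]     # Horner step; final acc == g(alpha)
        scale = gf_inv(acc)                     # 1 / g(alpha), nonzero by assumption
        for j in range(t):
            s[j] ^= gf_mul(q[j], scale)
    return s

# A word c belongs to Gamma(g, L) exactly when syndrome(c, g, L) is all zeros.
```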
The code defined by a tuple $(g, L)$ has dimension at least $n - mt$ and minimum distance at least $2t + 1$, thus it can encode messages of length at least $n - mt$ using codewords of size $n$ while correcting at least $t$ errors. It possesses a convenient parity-check matrix $H$ in form

: $H = VD = \begin{pmatrix} 1 & 1 & \cdots & 1 \\ \alpha_1 & \alpha_2 & \cdots & \alpha_n \\ \alpha_1^2 & \alpha_2^2 & \cdots & \alpha_n^2 \\ \vdots & \vdots & \ddots & \vdots \\ \alpha_1^{t-1} & \alpha_2^{t-1} & \cdots & \alpha_n^{t-1} \end{pmatrix} \begin{pmatrix} \frac{1}{g(\alpha_1)} & & & \\ & \frac{1}{g(\alpha_2)} & & \\ & & \ddots & \\ & & & \frac{1}{g(\alpha_n)} \end{pmatrix}$
Note that this form of the parity-check matrix, being composed of a Vandermonde matrix $V$ and a diagonal matrix $D$, shares the form with check matrices of alternant codes, thus alternant decoders can be used on this form. Such decoders usually provide only limited error-correcting capability (in most cases $\lfloor t/2 \rfloor$).
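As a sketch, such a parity-check matrix can be tabulated column by column, reusing the $\mathrm{GF}(2^m)$ helpers above; names are illustrative:

```python
def parity_check_matrix(g, support):
    """Build the t-by-n parity-check matrix over GF(2^M): H[j][i] = alpha_i^j / g(alpha_i)."""
    t = len(g) - 1
    n = len(support)
    H = [[0] * n for _ in range(t)]
    for i, alpha in enumerate(support):
        entry = gf_inv(poly_eval(g, alpha))     # column scale 1 / g(alpha_i) from D
        for j in range(t):
            H[j][i] = entry
            entry = gf_mul(entry, alpha)        # next row: multiply by alpha_i (Vandermonde part)
    return H
```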
For practical purposes, the parity-check matrix of a binary Goppa code is usually converted to a more computer-friendly binary form by a trace construction, which converts the $t \times n$ matrix over $\mathrm{GF}(2^m)$ to an $mt \times n$ binary matrix by writing the polynomial coefficients of the $\mathrm{GF}(2^m)$ elements on $m$ successive rows.
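A minimal sketch of this binary expansion, assuming the matrix entries are stored as integers whose bits are the $\mathrm{GF}(2)$ coefficients, as in the sketches above:

```python
def to_binary_matrix(H, m):
    """Expand a t-by-n matrix over GF(2^m) into an (m*t)-by-n binary matrix by
    writing the m coefficient bits of each element on m successive rows."""
    t, n = len(H), len(H[0])
    H_bin = [[0] * n for _ in range(m * t)]
    for r in range(t):
        for c in range(n):
            for bit in range(m):
                H_bin[m * r + bit][c] = (H[r][c] >> bit) & 1
    return H_bin
```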
Decoding
Decoding of binary Goppa codes is traditionally done by the Patterson algorithm, which gives good error-correcting capability (it corrects all $t$ design errors) and is also fairly simple to implement.
The Patterson algorithm converts a syndrome to a vector of errors. The syndrome of a binary word $c = (c_1, \ldots, c_n)$ is expected to take the form

: $s(x) \equiv \sum_{i=1}^{n} \frac{c_i}{x - \alpha_i} \pmod{g(x)}$

An alternative form of the parity-check matrix, based on the formula for $s(x)$, can be used to produce such a syndrome with a simple matrix multiplication.
The algorithm then computes $v(x) = \sqrt{s(x)^{-1} - x} \bmod g(x)$. That fails when $s(x) \equiv 0$, but that is the case when the input word is a codeword, so no error correction is necessary.
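A sketch of this step, under the assumption that $g(x)$ is irreducible so that $\mathrm{GF}(2^m)[x]/(g(x))$ is a field with $2^{mt}$ elements: the inverse and the square root can then be obtained by exponentiation (simple, though not the fastest approach). The polynomial helpers below build on the $\mathrm{GF}(2^m)$ sketch above; all names are illustrative:

```python
def poly_deg(p):
    """Degree of a coefficient list (lowest degree first); -1 for the zero polynomial."""
    d = len(p) - 1
    while d >= 0 and p[d] == 0:
        d -= 1
    return d

def poly_add(p, q):
    """Add polynomials over GF(2^M); coefficient addition is XOR."""
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [a ^ b for a, b in zip(p, q)]

def poly_divmod(num, den):
    """Polynomial long division over GF(2^M): returns (quotient, remainder)."""
    num = num[:]
    quot = [0] * len(num)
    dd = poly_deg(den)
    inv_lead = gf_inv(den[dd])
    while poly_deg(num) >= dd:
        dn = poly_deg(num)
        coef = gf_mul(num[dn], inv_lead)
        quot[dn - dd] = coef
        for j in range(dd + 1):                 # cancel the leading term of num
            num[dn - dd + j] ^= gf_mul(coef, den[j])
    return quot, num

def poly_mulmod(p, q, g):
    """Multiply polynomials and reduce modulo g(x)."""
    prod = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        if a:
            for j, b in enumerate(q):
                prod[i + j] ^= gf_mul(a, b)
    return poly_divmod(prod, g)[1]

def poly_powmod(p, e, g):
    """Square-and-multiply exponentiation of p(x) modulo g(x)."""
    result = [1]
    while e:
        if e & 1:
            result = poly_mulmod(result, p, g)
        p = poly_mulmod(p, p, g)
        e >>= 1
    return result

def patterson_v(s, g, t):
    """v(x) = sqrt(1/s(x) - x) mod g(x), for irreducible g of degree t."""
    field_size = 1 << (M * t)                   # |GF(2^M)[x]/(g)| = 2^(M*t)
    s_inv = poly_powmod(s, field_size - 2, g)   # 1/s(x): Fermat-style inversion
    u = poly_add(s_inv, [0, 1])                 # 1/s(x) - x; minus is plus in characteristic 2
    return poly_powmod(u, field_size >> 1, g)   # square root: u^(2^(M*t - 1))
```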
$v(x)$ is reduced to polynomials $a(x)$ and $b(x)$ using the extended Euclidean algorithm, so that $a(x) \equiv b(x)\, v(x) \pmod{g(x)}$, while $\deg a(x) \le \lfloor t/2 \rfloor$ and $\deg b(x) \le \lfloor (t-1)/2 \rfloor$.
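A sketch of this partial extended Euclidean algorithm, reusing the polynomial helpers above and stopping according to the degree bounds just stated:

```python
def poly_mul(p, q):
    """Plain polynomial multiplication over GF(2^M) (no modular reduction)."""
    prod = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        if a:
            for j, b in enumerate(q):
                prod[i + j] ^= gf_mul(a, b)
    return prod

def patterson_split(v, g, t):
    """Partial extended Euclidean algorithm on (g, v): stop once the remainder
    a(x) has degree <= t//2, returning (a, b) with a(x) = b(x) * v(x) mod g(x)
    and deg(b) <= (t - 1)//2."""
    r_prev, r = g[:], v[:]
    b_prev, b = [0], [1]                        # only the cofactors of v are tracked
    while poly_deg(r) > t // 2:
        quot, rem = poly_divmod(r_prev, r)
        r_prev, r = r, rem
        b_prev, b = b, poly_add(poly_mul(quot, b), b_prev)
    return r, b
```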
Finally, the ''error locator polynomial'' is computed as $\sigma(x) = a(x)^2 + x \cdot b(x)^2$. Note that in the binary case, locating the errors is sufficient to correct them, as there is only one other value possible. In non-binary cases a separate error correction polynomial has to be computed as well.
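A sketch of assembling the error locator from $a(x)$ and $b(x)$; squaring is cheap in characteristic 2 because all cross terms vanish:

```python
def poly_square(p):
    """(sum a_j x^j)^2 = sum a_j^2 x^(2j) over a field of characteristic 2."""
    sq = [0] * (2 * len(p) - 1)
    for j, c in enumerate(p):
        sq[2 * j] = gf_mul(c, c)
    return sq

def error_locator(a, b):
    """sigma(x) = a(x)^2 + x * b(x)^2, coefficients lowest degree first."""
    return poly_add(poly_square(a), [0] + poly_square(b))
```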
If the original codeword was decodable and $e = (e_1, \ldots, e_n)$ was the binary error vector, then

: $\sigma(x) = \prod_{e_i = 1} (x - \alpha_i)$
Factoring or evaluating all roots of $\sigma(x)$ therefore gives enough information to recover the error vector and fix the errors.
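The simplest root search is to evaluate $\sigma(x)$ at every support element and flip the corresponding positions; a minimal sketch follows (faster approaches, such as Chien search, exist):

```python
def correct_errors(word, sigma, support):
    """Flip the positions i where sigma(alpha_i) = 0; returns the corrected word."""
    fixed = list(word)
    for i, alpha in enumerate(support):
        if poly_eval(sigma, alpha) == 0:        # alpha_i is a root, so position i is in error
            fixed[i] ^= 1
    return fixed
```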
Properties and usage
Binary Goppa codes viewed as a special case of Goppa codes have the interesting property that they correct the full $\deg g(x)$ design errors, while only $\lfloor \deg g(x)/2 \rfloor$ errors are guaranteed in ternary and all other cases. Asymptotically, this error-correcting capability meets the famous Gilbert–Varshamov bound.
Because of the high error correction capacity compared to the code rate and the form of the parity-check matrix (which is usually hardly distinguishable from a random binary matrix of full rank), binary Goppa codes are used in several post-quantum cryptosystems, notably the McEliece cryptosystem and the Niederreiter cryptosystem.
References
* Elwyn R. Berlekamp, "Goppa Codes", IEEE Transactions on Information Theory, Vol. IT-19, No. 5, September 1973. https://web.archive.org/web/20170829142555/http://infosec.seu.edu.cn/space/kangwei/senior_thesis/Goppa.pdf
* Daniela Engelbert, Raphael Overbeck, Arthur Schmidt. "A summary of McEliece-type cryptosystems and their security." Journal of Mathematical Cryptology 1, 151–199. MR 2345114. Previous version: http://eprint.iacr.org/2006/162/
* Daniel J. Bernstein. "List decoding for binary Goppa codes." http://cr.yp.to/codes/goppalist-20110303.pdf
See also
* BCH codes
* Code rate
* Reed–Solomon error correction