Gilbert–Varshamov Bound For Linear Codes
The Gilbert–Varshamov bound for linear codes is related to the general Gilbert–Varshamov bound, which gives a lower bound on the maximal number of elements in an error-correcting code of a given block length and minimum Hamming distance over a field \mathbb{F}_q. This may be translated into a statement about the maximum rate of a code with given length and minimum distance. The Gilbert–Varshamov bound for linear codes asserts that for any relative minimum distance below the bound, there exist q-ary linear codes that simultaneously achieve a correspondingly high rate. The existence proof uses the probabilistic method and is therefore not constructive. The Gilbert–Varshamov bound is the best known in terms of relative distance for codes over alphabets of size less than 49. For larger alphabets, algebraic geometry codes sometimes achieve an asymptotically better rate vs. distance tradeoff than the Gilbert–Varshamov bound gives.

Gilbert–Varshamov bound theorem

Theorem: Let q \ge ...
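To make the asymptotic statement concrete, here is a small Python sketch (an illustration, not from the article) that evaluates the rate guaranteed by the bound, R >= 1 - H_q(delta), where H_q is the q-ary entropy function; the function names are illustrative.

import math

def q_ary_entropy(q: int, delta: float) -> float:
    # H_q(delta) = delta*log_q(q-1) - delta*log_q(delta) - (1-delta)*log_q(1-delta)
    if delta == 0.0:
        return 0.0
    return (delta * math.log(q - 1, q)
            - delta * math.log(delta, q)
            - (1 - delta) * math.log(1 - delta, q))

def gv_rate(q: int, delta: float) -> float:
    # Rate guaranteed by the Gilbert–Varshamov bound for relative distance delta
    assert 0 <= delta < 1 - 1 / q
    return 1 - q_ary_entropy(q, delta)

print(gv_rate(2, 0.11))  # approximately 0.5 for binary codes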


Gilbert–Varshamov Bound
In coding theory, the Gilbert–Varshamov bound (due to Edgar Gilbert and independently Rom Varshamov) is a bound on the size of a (not necessarily linear) code. It is occasionally known as the Gilbert–Shannon–Varshamov bound (or the GSV bound), but the name "Gilbert–Varshamov bound" is by far the most popular. Varshamov proved this bound by using the probabilistic method for linear codes. For more about that proof, see Gilbert–Varshamov bound for linear codes.

Statement of the bound

Recall that a code has minimum distance d if any two distinct elements in the code are at least a distance d apart. Let A_q(n,d) denote the maximum possible size of a q-ary code C with length n and minimum Hamming distance d (a q-ary code is a code over the field \mathbb{F}_q of q elements). Then:

A_q(n,d) \geqslant \frac{q^n}{\sum_{j=0}^{d-1} \binom{n}{j}(q-1)^j}.

Proof

Let C be a code of length n and minimum Hamming distance d having maximal size: |C| = A_q(n,d). Then for all x \in \mathbb{F}_q^n, there exists ...
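As a concrete illustration of the displayed inequality, the following Python sketch evaluates the right-hand side with exact integer arithmetic; hamming_ball_volume and gv_lower_bound are illustrative names, not standard library functions.

from math import comb

def hamming_ball_volume(q: int, n: int, radius: int) -> int:
    # Number of words within Hamming distance `radius` of a fixed word in F_q^n
    return sum(comb(n, j) * (q - 1) ** j for j in range(radius + 1))

def gv_lower_bound(q: int, n: int, d: int) -> int:
    # A_q(n, d) >= q^n / V_q(n, d-1); return the ceiling of the quotient
    return -(-q ** n // hamming_ball_volume(q, n, d - 1))

print(gv_lower_bound(2, 10, 3))  # ceil(1024 / 56) = 19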


Hamming Distance
In information theory, the Hamming distance between two strings or vectors of equal length is the number of positions at which the corresponding symbols are different. In other words, it measures the minimum number of substitutions required to change one string into the other, or equivalently, the minimum number of errors that could have transformed one string into the other. In a more general context, the Hamming distance is one of several string metrics for measuring the edit distance between two sequences. It is named after the American mathematician Richard Hamming. A major application is in coding theory, more specifically to block codes, in which the equal-length strings are vectors over a finite field.

Definition

The Hamming distance between two equal-length strings of symbols is the number of positions at which the corresponding symbols are different.

Examples

The symbols may be letters, bits, or decimal digits, am ...
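The definition translates directly into code; here is a minimal Python sketch (an illustration only) that counts differing positions, applied to two classic example pairs.

def hamming_distance(a: str, b: str) -> int:
    # Count positions at which the corresponding symbols differ
    if len(a) != len(b):
        raise ValueError("Hamming distance is defined only for equal-length strings")
    return sum(x != y for x, y in zip(a, b))

print(hamming_distance("karolin", "kathrin"))  # 3
print(hamming_distance("1011101", "1001001"))  # 2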


Hamming Bound
In mathematics and computer science, in the field of coding theory, the Hamming bound is a limit on the parameters of an arbitrary block code: it is also known as the sphere-packing bound or the volume bound, from an interpretation in terms of packing balls in the Hamming metric into the space of all possible words. It gives an important limitation on the efficiency with which any error-correcting code can utilize the space in which its code words are embedded. A code that attains the Hamming bound is said to be a perfect code.

Background on error-correcting codes

An original message and an encoded version are both composed in an alphabet of q letters. Each code word contains n letters. The original message (of length m) is shorter than n letters. The message is converted into an n-letter codeword by an encoding algorithm, transmitted over a noisy communication channel, ch ...
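For comparison with the Gilbert–Varshamov lower bound above, this illustrative Python sketch evaluates the sphere-packing upper bound; the example shows that the binary Hamming (7,4) code meets it exactly, which is what makes that code perfect.

from math import comb

def hamming_upper_bound(q: int, n: int, d: int) -> int:
    # A_q(n, d) <= q^n / V_q(n, t) with t = floor((d-1)/2)
    t = (d - 1) // 2
    volume = sum(comb(n, j) * (q - 1) ** j for j in range(t + 1))
    return q ** n // volume

print(hamming_upper_bound(2, 7, 3))  # 128 / (1 + 7) = 16 = 2^4 codewords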


Las Vegas Algorithm
In computing, a Las Vegas algorithm is a randomized algorithm that always gives correct results; that is, it always produces the correct result or it informs about the failure. However, the runtime of a Las Vegas algorithm differs depending on the input. The usual definition of a Las Vegas algorithm includes the restriction that the expected runtime be finite, where the expectation is carried out over the space of random information, or entropy, used in the algorithm. An alternative definition requires that a Las Vegas algorithm always terminates (is effective), but may output a symbol not part of the solution space to indicate failure in finding a solution. The nature of Las Vegas algorithms makes them suitable in situations where the number of possible solutions is limited, and where verifying the correctness of a candidate solution is relatively easy while finding a solution is complex. Systema ...
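A toy example, assuming nothing beyond the definition above: the random probing below always returns a correct answer, but the number of iterations is a random variable (a minimal Las Vegas sketch, not a canonical algorithm).

import random

def find_index_of_one(bits):
    # Always correct when it returns; expected probes = len(bits) / (number of ones)
    assert any(bits), "requires at least one 1 in the input"
    while True:
        i = random.randrange(len(bits))
        if bits[i] == 1:   # verifying a candidate is cheap
            return i       # the returned result is guaranteed correct

print(find_index_of_one([0, 0, 1, 0, 1, 0, 0, 0]))  # prints 2 or 4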


Hamming Ball
In combinatorics, a Hamming ball is a metric ball for the Hamming distance. The Hamming ball of radius r centered at a string x over some alphabet (often the binary alphabet \{0,1\}) is the set of all strings of the same length that differ from x in at most r positions. This may be denoted using the standard notation for metric balls, B(x,r). For an alphabet X and a string x, the Hamming ball is a subset of the Hamming space X^{|x|} of strings of the same length as x, and it is a proper subset whenever r < |x|. The name Hamming ball comes from coding theory, where error-correcting codes can be defined as having disjoint Hamming balls around their codewords, and covering codes can be defined as having Hamming balls around the codewords whose union is the whole Hamming space. Some local search algorithms for SAT solvers, such as WalkSAT, operate by using random guessing or covering codes to find a Ham ...
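A brute-force Python sketch of the definition (illustrative only): enumerate all strings within radius r of x by choosing up to r positions to rewrite. The size of the result matches the ball-volume sum used in the Gilbert–Varshamov and Hamming bounds above.

from itertools import combinations, product

def hamming_ball(x: str, r: int, alphabet: str = "01") -> set:
    # All strings of length |x| that differ from x in at most r positions
    ball = {x}
    for k in range(1, r + 1):
        for positions in combinations(range(len(x)), k):
            for symbols in product(alphabet, repeat=k):
                y = list(x)
                for pos, s in zip(positions, symbols):
                    y[pos] = s
                ball.add("".join(y))
    return ball

print(sorted(hamming_ball("000", 1)))  # ['000', '001', '010', '100']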


Boole's Inequality
In probability theory, Boole's inequality, also known as the union bound, says that for any finite or countable set of events, the probability that at least one of the events happens is no greater than the sum of the probabilities of the individual events. This inequality provides an upper bound on the probability of occurrence of at least one of a countable number of events in terms of the individual probabilities of the events. Boole's inequality is named for its discoverer, George Boole. Formally, for a countable set of events A_1, A_2, A_3, \ldots, we have

\mathbb{P}\left(\bigcup_{i=1}^{\infty} A_i \right) \le \sum_{i=1}^{\infty} \mathbb{P}(A_i).

In measure-theoretic terms, Boole's inequality follows from the fact that a measure (and certainly any probability measure) is σ-subadditive. Thus Boole's inequality holds not only for probability measures \mathbb{P}, but more generally when \mathbb{P} is replaced by any finite measure.

Proof

Proof using induction: Boole's inequality may be proved for finite collections of n e ...
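A small numeric sanity check of the union bound (a toy example, not from the text), using three overlapping events on a uniform 12-point sample space.

from fractions import Fraction

omega_size = 12                       # uniform sample space of 12 outcomes
A1, A2, A3 = {0, 1, 2, 3}, {2, 3, 4, 5}, {5, 6}

def prob(event):
    return Fraction(len(event), omega_size)

lhs = prob(A1 | A2 | A3)              # P(A1 ∪ A2 ∪ A3) = 7/12
rhs = prob(A1) + prob(A2) + prob(A3)  # 4/12 + 4/12 + 2/12 = 5/6
print(lhs <= rhs)                     # True: 7/12 <= 5/6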


Generator Matrix
In coding theory, a generator matrix is a matrix whose rows form a basis for a linear code. The codewords are all of the linear combinations of the rows of this matrix; that is, the linear code is the row space of its generator matrix.

Terminology

If G is a matrix, it generates the codewords of a linear code C by

w = sG,

where w is a codeword of the linear code C, and s is any input vector. Both w and s are assumed to be row vectors. A generator matrix for a linear [n,k,d]_q code has format k \times n, where n is the length of a codeword, k is the number of information bits (the dimension of C as a vector subspace), d is the minimum distance of the code, and q is the size of the finite field, that is, the number of symbols in the alphabet (thus, q = 2 indicates a binary code, etc.). The number of redundant bits is denoted by r = n - k. The standard form for a generator matrix is

G = \begin{bmatrix} I_k \mid P \end{bmatrix},

where I_k is the k \times k identity m ...
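Here is an illustrative Python sketch of w = sG over F_2, using a small hypothetical [4,2] generator matrix in standard form; it lists all 2^k codewords.

from itertools import product

G = [[1, 0, 1, 1],
     [0, 1, 0, 1]]  # a 2 x 4 generator matrix in standard form [I_2 | P]

def encode(s, G):
    # Row vector times matrix over F_2: w = s G
    return tuple(sum(s[i] * G[i][j] for i in range(len(G))) % 2
                 for j in range(len(G[0])))

codewords = [encode(s, G) for s in product([0, 1], repeat=len(G))]
print(codewords)  # [(0,0,0,0), (0,1,0,1), (1,0,1,1), (1,1,1,0)]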


Probability
Probability is a branch of mathematics and statistics concerning events and numerical descriptions of how likely they are to occur. The probability of an event is a number between 0 and 1; the larger the probability, the more likely an event is to occur (Alan Stuart and Keith Ord, Kendall's Advanced Theory of Statistics, Volume 1: Distribution Theory, 6th ed., 2009; William Feller, An Introduction to Probability Theory and Its Applications, vol. 1, 3rd ed., Wiley, 1968). This number is often expressed as a percentage (%), ranging from 0% to 100%. A simple example is the tossing of a fair (unbiased) coin. Since the coin is fair, the two outcomes ("heads" and "tails") are both equally probable; the probability of "heads" equals the probability of "tails"; and since no other outcomes are possible, the probability of either "heads" or "tails" is 1/2 (which could also be written as 0.5 or 50%). These concepts have been given an axiomatic mathematical formaliza ...
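A quick simulation of the coin example (illustrative): the empirical frequency of heads approaches the probability 1/2 as the number of tosses grows.

import random

def heads_frequency(tosses: int) -> float:
    # Fraction of simulated fair-coin tosses that come up heads
    return sum(random.random() < 0.5 for _ in range(tosses)) / tosses

for n in (100, 10_000, 1_000_000):
    print(n, heads_frequency(n))  # tends toward 0.5 as n grows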




Rom Varshamov
Rom Rubenovich Varshamov (Russian: Ром Рубенович Варшамов; born April 9, 1927, in Tbilisi; died August 24, 1999, in Moscow) was a Soviet Armenian mathematician who worked in coding theory, especially on error-correcting codes, and in number theory. Varshamov studied in Tbilisi with Arnold Walfisz (where he was the Georgian students’ champion in the 100 metres), as well as in Tomsk. After that, he was a researcher in Moscow at the Steklov Institute of Mathematics with Ivan Matveyevich Vinogradov, working especially on number theory and coding theory, and at the Ministry of Radio Engineering (working in cryptography). In 1957 he proved the Gilbert–Varshamov bound for linear codes (independently of Edgar Gilbert, who proved the non-linear part). From 1968 he worked in Yerevan and was director of the Computer Centre (now the Institute for Informatics and Automation Problems) of the Academy of Sciences of the Armenian SSR. He was author and co-author of more than 25 scientific artic ...


Error Correction Code
In computing, telecommunication, information theory, and coding theory, forward error correction (FEC) or channel coding is a technique used for controlling errors in data transmission over unreliable or noisy communication channels. The central idea is that the sender encodes the message in a redundant way, most often by using an error-correcting code (ECC). The redundancy allows the receiver not only to detect errors that may occur anywhere in the message, but often to correct a limited number of errors. Therefore, a reverse channel to request re-transmission may not be needed. The cost is a fixed, higher forward channel bandwidth. The American mathematician Richard Hamming pioneered this field in the 1940s and invented the first error-correcting code in 1950: the Hamming (7,4) code. FEC can be applied in situations where re-transmissions are costly or impossible, such as one-way communication links or when transmitting to multiple receivers in m ...
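The Hamming (7,4) code mentioned above can be sketched in a few lines of Python; the systematic generator matrix below is one common convention (other bit orderings exist), and the check confirms the minimum distance of 3 that lets the receiver correct any single-bit error.

from itertools import product

G = [[1, 0, 0, 0, 0, 1, 1],    # one common systematic generator matrix
     [0, 1, 0, 0, 1, 0, 1],    # for the Hamming (7,4) code; conventions vary
     [0, 0, 1, 0, 1, 1, 0],
     [0, 0, 0, 1, 1, 1, 1]]

def encode(data):
    # 4 data bits -> 7-bit codeword: w = data * G over F_2
    return tuple(sum(d * g for d, g in zip(data, col)) % 2 for col in zip(*G))

words = [encode(d) for d in product([0, 1], repeat=4)]
dmin = min(sum(a != b for a, b in zip(u, v))
           for i, u in enumerate(words) for v in words[i + 1:])
print(dmin)  # 3, so any single error can be corrected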


Greedy Method
A greedy algorithm is any algorithm that follows the problem-solving heuristic of making the locally optimal choice at each stage. In many problems, a greedy strategy does not produce an optimal solution, but a greedy heuristic can yield locally optimal solutions that approximate a globally optimal solution in a reasonable amount of time. For example, a greedy strategy for the travelling salesman problem (which is of high computational complexity) is the following heuristic: "At each step of the journey, visit the nearest unvisited city." This heuristic is not intended to find the best solution, but it terminates in a reasonable number of steps; finding an optimal solution to such a complex problem typically requires unreasonably many steps. In mathematical optimization, greedy algorithms optimally solve combinatorial problems having the properties of matroids and give constant-factor approximations to optimization problems with submodular structure.

Specifics

Greedy algori ...
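A minimal Python sketch of the nearest-neighbour heuristic quoted above (city coordinates and names are illustrative):

import math

def nearest_neighbour_tour(cities, start=0):
    # Greedy TSP heuristic: repeatedly visit the closest unvisited city
    unvisited = set(range(len(cities))) - {start}
    tour = [start]
    while unvisited:
        last = cities[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, cities[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

cities = [(0, 0), (2, 1), (1, 3), (5, 2), (4, 4)]
print(nearest_neighbour_tour(cities))  # locally optimal; not guaranteed optimal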