LDPC
Low-density parity-check (LDPC) codes are a class of error correction codes which (together with the closely related turbo codes) have gained prominence in coding theory and information theory since the late 1990s. The codes are today widely used in applications ranging from wireless communications to flash-memory storage. Together with turbo codes, they sparked a revolution in coding theory, achieving order-of-magnitude improvements in performance compared to traditional error correction codes. Central to the performance of LDPC codes is their adaptability to the iterative belief propagation decoding algorithm. Under this algorithm, they can be designed to approach the theoretical limits (capacities) of many channels at low computational cost. Theoretical analysis of LDPC codes focuses on sequences of codes of fixed code rate and increasing block length. These sequences are typically tailored to a set of channels. For appropriately designed sequences, the decoding error ...
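
As a concrete illustration of the sparse check/variable structure these codes exploit, here is a minimal sketch in Python, assuming an invented toy parity-check matrix. It uses hard-decision bit-flipping decoding, a much simpler relative of belief propagation (which passes probabilistic messages rather than flipping bits):

import numpy as np

# Toy (n=6) parity-check matrix; rows are checks, columns are code bits.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]], dtype=int)

def bit_flip_decode(r, H, max_iters=20):
    """Flip the bit involved in the most unsatisfied checks each round."""
    r = r.copy()
    for _ in range(max_iters):
        syndrome = H @ r % 2           # which parity checks fail
        if not syndrome.any():
            return r                   # valid codeword found
        # count, per bit, how many failing checks it participates in
        votes = H.T @ syndrome
        r[np.argmax(votes)] ^= 1       # flip the worst offender
    return r                           # give up (decoding failure)

codeword = np.zeros(6, dtype=int)      # the all-zero word is always a codeword
received = codeword.copy()
received[2] ^= 1                       # inject a single bit error
print(bit_flip_decode(received, H))    # -> [0 0 0 0 0 0]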

Error Correction Code
In computing, telecommunication, information theory, and coding theory, forward error correction (FEC) or channel coding is a technique used for controlling errors in data transmission over unreliable or noisy communication channels. The central idea is that the sender encodes the message in a redundant way, most often by using an error correction code, or error-correcting code (ECC). The redundancy allows the receiver not only to detect errors that may occur anywhere in the message, but often to correct a limited number of them. A reverse channel to request re-transmission may therefore not be needed. The cost is a fixed, higher forward channel bandwidth. The American mathematician Richard Hamming pioneered this field in the 1940s and invented the first error-correcting code in 1950: the Hamming(7,4) code. FEC can be applied in situations where re-transmissions are costly or impossible, such as one-way communication links or when transmitting to multiple receivers in m ...
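
To make the Hamming(7,4) example concrete, here is a small sketch assuming one standard systematic form of the code (matrix conventions vary between texts). Four data bits become a 7-bit codeword, and any single bit error is correctable:

import numpy as np

P = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]], dtype=int)
G = np.hstack([np.eye(4, dtype=int), P])       # 4x7 generator, G = [I | P]
H = np.hstack([P.T, np.eye(3, dtype=int)])     # 3x7 parity check, H = [P^T | I]

def encode(data):                              # data: 4 bits
    return data @ G % 2

def decode(received):                          # received: 7 bits
    syndrome = H @ received % 2
    if syndrome.any():
        # the syndrome equals the column of H at the flipped position
        err = int(np.flatnonzero((H.T == syndrome).all(axis=1))[0])
        received = received.copy()
        received[err] ^= 1                     # correct the single error
    return received[:4]                        # systematic: data = first 4 bits

msg = np.array([1, 0, 1, 1])
cw = encode(msg)
cw[5] ^= 1                                     # single-bit channel error
assert (decode(cw) == msg).all()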

Error Floor
The error floor is a phenomenon encountered in modern iterated sparse graph-based error correcting codes like LDPC codes and turbo codes. When the bit error ratio (BER) is plotted for conventional codes like Reed–Solomon codes under algebraic decoding, or for convolutional codes under Viterbi decoding, the BER steadily decreases as the signal-to-noise ratio (SNR) improves. For LDPC codes and turbo codes there is a point after which the curve does not fall as quickly as before; in other words, there is a region in which performance flattens. This region is called the "error floor region". The preceding region, in which the BER drops steeply, is called the "waterfall region". Error floors are usually attributed ...

OFDM
In telecommunications, orthogonal frequency-division multiplexing (OFDM) is a type of digital transmission used in digital modulation for encoding digital (binary) data on multiple carrier frequencies. OFDM has developed into a popular scheme for wideband digital communication, used in applications such as digital television and audio broadcasting, DSL internet access, wireless networks, power line networks, and 4G/5G mobile communications. OFDM is a frequency-division multiplexing (FDM) scheme introduced by Robert W. Chang of Bell Labs in 1966. In OFDM, the incoming bitstream representing the data to be sent is divided into multiple streams. Multiple closely spaced orthogonal subcarrier signals with overlapping spectra are transmitted, with each carrier modulated with bits from the incoming stream, so multiple bits are transmitted in parallel.
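
A minimal sketch of that idea follows, assuming QPSK mapping on 64 subcarriers and an ideal noiseless channel; real transceivers add pilots, channel estimation, and equalization. The key point is that one inverse FFT realizes all orthogonal subcarriers at once:

import numpy as np

rng = np.random.default_rng(0)
n_sub, cp_len = 64, 16                     # subcarriers and cyclic-prefix length

bits = rng.integers(0, 2, size=2 * n_sub)  # 2 bits per QPSK symbol
# Map bit pairs to QPSK constellation points (bit 0 -> +1, bit 1 -> -1).
symbols = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)

time_domain = np.fft.ifft(symbols)         # one IFFT = all subcarriers in parallel
tx = np.concatenate([time_domain[-cp_len:], time_domain])  # prepend cyclic prefix

# Receiver: drop the prefix, FFT back to subcarriers, slice to bits.
rx_symbols = np.fft.fft(tx[cp_len:])
rx_bits = np.empty_like(bits)
rx_bits[0::2] = (rx_symbols.real < 0).astype(int)
rx_bits[1::2] = (rx_symbols.imag < 0).astype(int)
assert (rx_bits == bits).all()             # perfect channel -> perfect recovery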

DVB-S2
Digital Video Broadcasting - Satellite - Second Generation (DVB-S2) is a digital television broadcast standard designed as a successor to the popular DVB-S system. It was developed in 2003 by the Digital Video Broadcasting Project, an international industry consortium, and ratified by ETSI (EN 302 307) in March 2005. The standard is based on, and improves upon, DVB-S and the electronic news-gathering (or Digital Satellite News Gathering) system used by mobile units for sending sounds and images from remote locations worldwide back to their home television stations. DVB-S2 is designed for broadcast services including standard-definition and HDTV, interactive services including Internet access, and (professional) data content distribution. The development of DVB-S2 coincided with the introduction of HDTV and the H.264 (MPEG-4 AVC) video codec. Two key new features added compared to the DVB-S standard are:
* A powerful coding scheme based on a modern LDPC code ...

Polar Code (Coding Theory)
In information theory, polar codes are a class of linear block error-correcting codes. The code construction is based on a multiple recursive concatenation of a short kernel code which transforms the physical channel into virtual outer channels. When the number of recursions becomes large, the virtual channels tend to have either high reliability or low reliability (in other words, they polarize), and the data bits are allocated to the most reliable channels. They are the first codes with an explicit construction to provably achieve the channel capacity for symmetric binary-input, discrete, memoryless channels (B-DMC) with polynomial dependence on the gap to capacity. Polar codes were developed by Erdal Arikan, a professor of electrical engineering at Bilkent University. Notably, polar codes have modest encoding and decoding complexity of order O(N log N), which renders them attractive for many applications. Moreover, the encoding and decoding energy complexity of generalized polar codes can ...
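
A short sketch of the recursive transform follows, assuming the standard 2x2 kernel F = [[1, 0], [1, 1]] over GF(2), so that encoding computes x = u F^(kron n) in O(N log N) butterfly steps (the bit-reversal permutation used in some formulations is omitted). The frozen-bit positions below are chosen purely for illustration, not by an actual reliability calculation:

import numpy as np

def polar_transform(u):
    """Apply the n-fold Kronecker power of the kernel to bit vector u."""
    u = np.asarray(u, dtype=int).copy()
    n = len(u)
    assert n & (n - 1) == 0, "length must be a power of two"
    step = 1
    while step < n:                    # butterfly structure, like an FFT
        for i in range(0, n, 2 * step):
            # upper branch: u1 XOR u2; lower branch passes u2 through
            u[i:i + step] ^= u[i + step:i + 2 * step]
        step *= 2
    return u

# Frozen bits (assumed positions, for illustration only) are set to 0;
# data bits go on the positions that polarization makes most reliable.
u = np.array([0, 0, 0, 1, 0, 1, 1, 1])  # e.g. positions 3, 5, 6, 7 carry data
print(polar_transform(u))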

Repeat-accumulate Code
In computer science, repeat-accumulate codes (RA codes) are a low-complexity class of error-correcting codes. They were devised so that their ensemble weight distributions are easy to derive. RA codes were introduced by Divsalar ''et al.'' In an RA code, an information block of length N is repeated q times, scrambled by an interleaver of size qN, and then encoded by a rate-1 accumulator. The accumulator can be viewed as a truncated rate-1 recursive convolutional encoder with transfer function 1/(1 + D), but Divsalar ''et al.'' prefer to think of it as a block code whose input block (z_1, ..., z_n) and output block (x_1, ..., x_n) are related by the formula x_1 = z_1 and x_i = x_{i-1} + z_i for i > 1. The encoding time for RA codes is linear and their rate is 1/q. They are nonsystematic.

Irregular repeat-accumulate codes

Irregular repeat-accumulate (IRA) codes build on the ideas of RA codes. IRA replaces the outer code of an RA code with a low-density generator matrix code. IRA codes first repeat information bits varying numbers of time ...
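
Following the description above, here is a minimal encoding sketch; the random interleaver is an illustrative stand-in for a carefully designed one:

import numpy as np

def ra_encode(info_bits, q, rng):
    z = np.repeat(info_bits, q)            # 1. repeat each bit q times
    z = z[rng.permutation(z.size)]         # 2. interleave (size qN)
    x = np.bitwise_xor.accumulate(z)       # 3. accumulator: x_i = x_{i-1} XOR z_i
    return x

rng = np.random.default_rng(42)
info = np.array([1, 0, 1, 1, 0], dtype=int)   # N = 5
code = ra_encode(info, q=3, rng=rng)          # rate 1/q = 1/3, length 15
print(code)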

Bipartite Graph
In the mathematical field of graph theory, a bipartite graph (or bigraph) is a graph whose vertices can be divided into two disjoint and independent sets U and V; that is, every edge connects a vertex in U to one in V. The vertex sets U and V are usually called the "parts" of the graph. Equivalently, a bipartite graph is a graph that does not contain any odd-length cycles. The two sets U and V may be thought of as a coloring of the graph with two colors: if one colors all nodes in U blue and all nodes in V red, each edge has endpoints of differing colors, as is required in the graph coloring problem. In contrast, such a coloring is impossible in the case of a non-bipartite graph, such as a triangle: after one node is colored blue and another red, the third vertex ...
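
The two-coloring characterization suggests a simple test, sketched below: color vertices alternately by breadth-first search and report failure when an edge joins two same-colored vertices, which certifies an odd cycle:

from collections import deque

def is_bipartite(adj):
    """adj: dict mapping each vertex to an iterable of its neighbors."""
    color = {}
    for start in adj:
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in color:
                    color[v] = 1 - color[u]    # opposite color
                    queue.append(v)
                elif color[v] == color[u]:     # same color across an edge
                    return False               # -> odd cycle found
    return True

path = {0: [1], 1: [0, 2], 2: [1]}             # a path: bipartite
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}   # odd cycle: not bipartite
print(is_bipartite(path), is_bipartite(triangle))   # True False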

Coding Theory
Coding theory is the study of the properties of codes and their respective fitness for specific applications. Codes are used for data compression, cryptography, error detection and correction, data transmission, and data storage. Codes are studied by various scientific disciplines (such as information theory, electrical engineering, mathematics, linguistics, and computer science) for the purpose of designing efficient and reliable data transmission methods. This typically involves the removal of redundancy and the correction or detection of errors in the transmitted data. There are four types of coding:

1. Data compression (or "source coding")
2. Error control (or "channel coding")
3. Cryptographic coding
4. Line coding

Data compression attempts to remove unwanted redundancy from the data from a source in order to transmit it more efficiently. For example, DEFLATE data compression makes files smaller ...
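
As a quick illustration of source coding with DEFLATE (here via Python's zlib module), highly redundant input compresses to a small fraction of its original size:

import zlib

redundant = b"abcabcabc" * 100                         # 900 bytes of repetition
print(len(redundant), len(zlib.compress(redundant)))  # compressed size is far smaller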

Tanner Graph
In coding theory, a Tanner graph is a bipartite graph that can be used to express constraints (typically equations) that specify an error correcting code. Tanner graphs play a central role in the design and decoding of LDPC codes. They have also been applied to the construction of longer codes from smaller ones. Both encoders and decoders employ these graphs extensively.

Origins

Tanner graphs were proposed by Michael Tanner as a means to create larger error correcting codes from smaller ones using recursive techniques. He generalized the techniques of Elias for product codes. Tanner discussed lower bounds on the codes obtained from these graphs irrespective of the specific characteristics of the codes being used to construct larger codes.

Tanner graphs for linear block codes

Tanner graphs are partitioned into subcode nodes and ...
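
A small sketch of how a Tanner graph is read off a parity-check matrix: one variable node per column, one check node per row, and an edge wherever the matrix has a 1 (the toy matrix is invented for illustration, the same one used in the LDPC sketch above):

import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]], dtype=int)

# Adjacency lists of the bipartite (Tanner) graph.
check_to_vars = {c: list(np.flatnonzero(H[c])) for c in range(H.shape[0])}
var_to_checks = {v: list(np.flatnonzero(H[:, v])) for v in range(H.shape[1])}

for c, vs in check_to_vars.items():
    print(f"check c{c} constrains variables {vs}")   # each row = one parity equation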

Block Length
In coding theory, block codes are a large and important family of error-correcting codes that encode data in blocks. There is a vast number of examples of block codes, many of which have a wide range of practical applications. The abstract definition of block codes is conceptually useful because it allows coding theorists, mathematicians, and computer scientists to study the limitations of all block codes in a unified way. Such limitations often take the form of bounds that relate different parameters of the block code to each other, such as its rate and its ability to detect and correct errors. Examples of block codes are Reed–Solomon codes, Hamming codes, Hadamard codes, expander codes, Golay codes, Reed–Muller codes, and polar codes. These examples also belong to the class of linear codes, and hence they are called linear block codes. More particularly, these codes are known as algebraic block codes, or cyclic block codes, because they can be generated using Bo ...
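
As one example of such a bound, here is a tiny check of the Singleton bound, k <= n - d + 1 for an (n, k, d) block code; Reed–Solomon codes such as RS(255, 223), with minimum distance d = 33, meet it with equality:

def satisfies_singleton(n, k, d):
    """Singleton bound: any (n, k, d) block code must have k <= n - d + 1."""
    return k <= n - d + 1

print(satisfies_singleton(255, 223, 33))   # True; RS codes are MDS (equality holds)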

10GBASE-T
10 Gigabit Ethernet (abbreviated 10GE, 10GbE, or 10 GigE) is a group of computer networking technologies for transmitting Ethernet frames at a rate of 10 gigabits per second. It was first defined by the IEEE 802.3ae-2002 standard. Unlike previous Ethernet standards, 10GbE defines only full-duplex point-to-point links, which are generally connected by network switches; shared-medium CSMA/CD operation has not been carried over from the previous generations of Ethernet standards, so half-duplex operation and repeater hubs do not exist in 10GbE. The first standard for faster 100 Gigabit Ethernet links was approved in 2010. The 10GbE standard encompasses a number of different physical layer (PHY) standards. A networking device, such as a switch or a network interface controller, may have different PHY types through pluggable PHY modules, such as those based on SFP+. Like previous versions of Ethernet, 10GbE can use either copper or fiber cabling. Maximum distance over copp ...