Triangular Network Coding
In coding theory, triangular network coding (TNC) is a non-linear, network-coding-based packet coding scheme. Previously, packet coding for network coding was done using linear network coding (LNC). The drawback of LNC over a large finite field is its high encoding and decoding computational complexity. While linear encoding and decoding over GF(2) alleviates the concern of high computational complexity, coding over GF(2) comes at the cost of degraded throughput performance. The main contribution of triangular network coding is to reduce the worst-case decoding computational complexity from O(n^3) to O(n^2) (where ''n'' is the total number of data packets being encoded in a coded packet) without degrading throughput, with a code rate comparable to that of optimal coding schemes. Triangular codes have also been proposed as fountain codes to achieve near-optimal performance with encoding and decoding computational complexity of O(n \log n) ...
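
The complexity saving comes from the structure of the coding coefficients: when the coefficient matrix of the received coded packets is triangular over GF(2), the source packets can be recovered by back-substitution, a sequence of XORs, instead of full Gaussian elimination. The following Python sketch illustrates only that decoding idea; it is not the actual TNC construction (which obtains the triangular pattern by padding packets with redundant bits), and the packet values, coefficient generator and function names are illustrative assumptions.

```python
import random

def random_lower_triangular(n, rng=random.Random(0)):
    """Random 0/1 coefficient matrix, lower triangular with a unit diagonal."""
    return [[1 if j == i else (rng.randint(0, 1) if j < i else 0)
             for j in range(n)] for i in range(n)]

def xor_all(values):
    acc = 0
    for v in values:
        acc ^= v
    return acc

def encode(packets, coeffs):
    """Coded packet i is the XOR of the source packets selected by row i."""
    n = len(packets)
    return [xor_all(packets[j] for j in range(n) if coeffs[i][j])
            for i in range(n)]

def decode(coded, coeffs):
    """Back-substitution over GF(2): O(n^2) XORs in the worst case,
    no Gaussian elimination needed."""
    n = len(coded)
    decoded = [0] * n
    for i in range(n):
        acc = coded[i]
        for j in range(i):          # strip the already-decoded packets from row i
            if coeffs[i][j]:
                acc ^= decoded[j]
        decoded[i] = acc            # the diagonal coefficient is 1
    return decoded

source = [0b1010, 0b0111, 0b1100, 0b0011]   # toy "packets" as bit patterns
C = random_lower_triangular(len(source))
assert decode(encode(source, C), C) == source
```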



Coding Theory
Coding theory is the study of the properties of codes and their respective fitness for specific applications. Codes are used for data compression, cryptography, error detection and correction, data transmission, and data storage. Codes are studied by various scientific disciplines—such as information theory, electrical engineering, mathematics, linguistics, and computer science—for the purpose of designing efficient and reliable data transmission methods. This typically involves the removal of redundancy and the correction or detection of errors in the transmitted data. There are four types of coding:
1. Data compression (or ''source coding'')
2. Error control (or ''channel coding'')
3. Cryptographic coding
4. Line coding
Data compression attempts to remove unwanted redundancy from the data from a source in order to transmit it more efficiently. For example, DEFLATE data compression makes files small ...



Network Coding
In computer networking, linear network coding is a technique in which intermediate nodes transmit data from source nodes to sink nodes by means of linear combinations. Linear network coding may be used to improve a network's throughput, efficiency, and scalability, as well as to reduce attacks and eavesdropping. The nodes of a network take ''several'' packets and combine them for transmission. This process may be used to attain the maximum possible information flow in a network. It has been proven that, theoretically, linear coding is enough to achieve the upper bound in multicast problems with one source. However, linear coding is not sufficient in general, even for more general versions of linearity such as convolutional coding and filter-bank coding. Finding optimal coding solutions for general network problems with arbitrary demands is a hard problem, which can be NP-hard and even undecidable. Encodin ...
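
As an illustration of the basic operation (a sketch with assumed names, not a full protocol), an intermediate node working over GF(2) can forward a random linear combination of the packets it holds, i.e. an XOR of a randomly selected subset, together with the coefficient vector that tells receivers which packets were mixed:

```python
import random

def combine_gf2(packets, rng=random.Random()):
    """Pick random GF(2) coefficients (0 or 1) and forward the XOR of the
    selected packets plus the coefficient vector itself, so that sinks can
    eventually solve the resulting linear system for the original packets."""
    coeffs = [rng.randint(0, 1) for _ in packets]
    if not any(coeffs):                          # avoid the useless all-zero combination
        coeffs[rng.randrange(len(packets))] = 1
    coded = 0
    for c, p in zip(coeffs, packets):
        if c:
            coded ^= p
    return coeffs, coded

# Example: a node holding three packets emits one coded packet.
packets = [0b1101, 0b0110, 0b1011]
coefficients, coded_packet = combine_gf2(packets)
```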



Finite Field
In mathematics, a finite field or Galois field (so named in honor of Évariste Galois) is a field that contains a finite number of elements. As with any field, a finite field is a set on which the operations of multiplication, addition, subtraction and division are defined and satisfy certain basic rules. The most common examples of finite fields are the integers mod p when p is a prime number. The ''order'' of a finite field is its number of elements, which is either a prime number or a prime power. For every prime number p and every positive integer k there are fields of order p^k. All finite fields of a given order are isomorphic. Finite fields are fundamental in a number of areas of mathematics and computer science, including number theory, algebraic geometry, Galois theory, finite geometry, cryptography and coding theory.

Properties

A finite field is a finite set that is a fiel ...
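
As a concrete illustration (added here, not part of the excerpt), arithmetic in the prime field GF(5) is ordinary integer arithmetic reduced modulo 5, and every nonzero element has a multiplicative inverse:

```latex
% Worked example in GF(5) = {0, 1, 2, 3, 4}; all operations are taken mod 5.
\begin{aligned}
3 + 4 &\equiv 7 \equiv 2 \pmod 5, \\
3 \cdot 4 &\equiv 12 \equiv 2 \pmod 5, \\
3^{-1} &\equiv 2 \pmod 5 \quad (\text{since } 3 \cdot 2 = 6 \equiv 1 \pmod 5), \\
2 / 3 &\equiv 2 \cdot 3^{-1} \equiv 4 \pmod 5.
\end{aligned}
```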



Big O Notation
Big ''O'' notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity. Big O is a member of a family of notations invented by German mathematicians Paul Bachmann, Edmund Landau, and others, collectively called Bachmann–Landau notation or asymptotic notation. The letter O was chosen by Bachmann to stand for ''Ordnung'', meaning the order of approximation. In computer science, big O notation is used to classify algorithms according to how their run time or space requirements grow as the input size grows. In analytic number theory, big O notation is often used to express a bound on the difference between an arithmetical function and a better understood approximation; one well-known exam ...
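
For reference (added for illustration, not part of the excerpt), the formal statement behind the computer-science usage, with a small worked instance:

```latex
% f(n) = O(g(n)) means f is eventually bounded by a constant multiple of g:
f(n) = O\bigl(g(n)\bigr)
  \iff
  \exists\, c > 0,\ \exists\, n_0 \ \text{such that} \ |f(n)| \le c\, g(n)
  \ \text{for all } n \ge n_0 .

% Example: 3n^2 + 5n + 7 = O(n^2), since 3n^2 + 5n + 7 \le 4n^2 for all n \ge 7.
```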


GF(2)
GF(2) (also denoted \mathbb F_2, or \mathbb Z/2\mathbb Z) is the finite field with two elements. It is the field with the smallest possible number of elements, and is unique if the additive identity and the multiplicative identity are denoted respectively 0 and 1, as usual. The elements of GF(2) may be identified with the two possible values of a bit and with the Boolean values ''true'' and ''false''. It follows that GF(2) is fundamental and ubiquitous in computer science and its logical foundations.

Definition

GF(2) is the unique field with two elements, with its additive and multiplicative identities respectively denoted 0 and 1. Its addition is defined as the usual addition of integers but modulo 2 and corresponds to the table below:

  + | 0 1
  --+----
  0 | 0 1
  1 | 1 0

If the elements of GF(2) are seen as Boolean values, then the addition is the same as that of the logical XOR operation. Since each element equals its opposite (m ...
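
Since GF(2) addition is the XOR of bits (and multiplication is the logical AND, a standard fact not spelled out in the excerpt above), the field arithmetic maps directly onto bitwise operators; a small Python sketch for illustration:

```python
# GF(2) arithmetic on single bits, using Python's bitwise operators.
def gf2_add(a, b):
    return a ^ b          # addition mod 2 is XOR

def gf2_mul(a, b):
    return a & b          # multiplication mod 2 is AND

assert gf2_add(1, 1) == 0                           # each element is its own additive inverse
assert all(gf2_add(x, x) == 0 for x in (0, 1))
assert gf2_mul(1, 1) == 1 and gf2_mul(1, 0) == 0
```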



Code Rate
In telecommunication and information theory, the code rate (or information rate; Huffman, W. Cary, and Pless, Vera, ''Fundamentals of Error-Correcting Codes'', Cambridge, 2003) of a forward error correction code is the proportion of the data-stream that is useful (non-redundant). That is, if the code rate is k/n, then for every k bits of useful information the coder generates a total of n bits of data, of which n-k are redundant. If R is the gross bit rate or data signalling rate (inclusive of redundant error coding), the net bit rate (the useful bit rate exclusive of error-correction codes) is \leq R \cdot k/n. For example: the code rate of a convolutional code will typically be 1/2, 2/3, 3/4, 5/6, 7/8, etc., corresponding to one redundant bit inserted after every single, second, third, etc., bit. The code rate of the octet-oriented Reed–Solomon block code denoted RS(204,188) is 188/204, meaning that 16 redundant octets (or bytes) are added to each block of 188 octets of useful information. ...
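
A short worked instance (the gross bit rate used here is an assumed figure for illustration):

```latex
% RS(204,188): code rate and the resulting net bit rate for an assumed
% gross rate of R = 40 Mbit/s.
\frac{k}{n} = \frac{188}{204} \approx 0.922, \qquad
\text{net bit rate} \;\le\; R \cdot \frac{k}{n}
  = 40\ \text{Mbit/s} \times \frac{188}{204} \approx 36.9\ \text{Mbit/s}.
```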


Fountain Code
In coding theory, fountain codes (also known as rateless erasure codes) are a class of erasure codes with the property that a potentially limitless sequence of encoding symbols can be generated from a given set of source symbols such that the original source symbols can ideally be recovered from any subset of the encoding symbols of size equal to or only slightly larger than the number of source symbols. The term ''fountain'' or ''rateless'' refers to the fact that these codes do not exhibit a fixed code rate. A fountain code is optimal if the original ''k'' source symbols can be recovered from any ''k'' successfully received encoding symbols (i.e., excluding those that were erased). Fountain codes are known that have efficient encoding and decoding algorithms and that allow the recovery of the original ''k'' source symbols from any ''k’'' of the encoding symbols with high probability, where ''k’'' is just slightly larger than ''k''. LT codes were the first practical realizat ...


Luby Transform Code
In computer science, Luby transform codes (LT codes) are the first class of practical fountain codes that are near-optimal erasure correcting codes. They were invented by Michael Luby in 1998 and published in 2002. Like some other fountain codes, LT codes depend on sparse bipartite graphs to trade reception overhead for encoding and decoding speed. The distinguishing characteristic of LT codes is in employing a particularly simple algorithm based on the exclusive or operation (\oplus) to encode and decode the message. (The ''exclusive or'' (XOR) operation, symbolized by ⊕, has the very useful property that ''A'' ⊕ ''A'' = 0, where ''A'' is an arbitrary string of bits.) LT codes are ''rateless'' because the encoding algorithm can in principle produce an infinite number of message packets (i.e., the percentage of packets that must be received to decode the message can be arbitrarily small). They are ''erasure correcting codes'' because they can be used to t ...
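
A minimal sketch of the XOR-based encoding step (illustrative only: the degree is drawn uniformly here, whereas a real LT encoder uses the robust soliton distribution, and the peeling decoder is omitted):

```python
import random

def lt_encode_symbol(source_symbols, rng=random.Random()):
    """Produce one LT-coded symbol: choose a degree d, pick d distinct
    source symbols at random, and XOR them together. The receiver needs
    the neighbour list (or the seed that generated it) as well as the value."""
    n = len(source_symbols)
    d = rng.randint(1, n)                  # toy degree distribution (uniform)
    neighbours = rng.sample(range(n), d)   # which source symbols are mixed in
    value = 0
    for i in neighbours:
        value ^= source_symbols[i]
    return neighbours, value

# The code is rateless: the encoder can keep producing symbols indefinitely.
source = [0b1001, 0b0110, 0b1111, 0b0011]
stream = [lt_encode_symbol(source) for _ in range(10)]
```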



Exclusive Or
Exclusive or, exclusive disjunction, exclusive alternation, logical non-equivalence, or logical inequality is a logical operator whose negation is the logical biconditional. With two inputs, XOR is true if and only if the inputs differ (one is true, one is false). With multiple inputs, XOR is true if and only if the number of true inputs is odd. It gains the name "exclusive or" because the meaning of "or" is ambiguous when both operands are true. XOR ''excludes'' that case. Some informal ways of describing XOR are "one or the other but not both", "either one or the other", and "A or B, but not A and B". It is symbolized by the prefix operator J and by the infix operators XOR, EOR, EXOR, \dot\vee, \overline\vee, \underline\vee, \oplus, \nleftrightarrow, and \not\equiv.

Definition

The truth table of A \nleftrightarrow B shows that it outputs true whenever the inputs differ:

  A      B      A XOR B
  false  false  false
  false  true   true
  true   false  true
  true   true   false

Equivalences, elimination, and introduction

Exclusive disjunction essentially ...


Triangular Matrix
In mathematics, a triangular matrix is a special kind of square matrix. A square matrix is called ''lower triangular'' if all the entries ''above'' the main diagonal are zero. Similarly, a square matrix is called ''upper triangular'' if all the entries ''below'' the main diagonal are zero. Because matrix equations with triangular matrices are easier to solve, they are very important in numerical analysis. By the LU decomposition algorithm, an invertible matrix may be written as the product of a lower triangular matrix ''L'' and an upper triangular matrix ''U'' if and only if all its leading principal minors are non-zero.

Description

A matrix of the form

L = \begin{bmatrix}
\ell_{1,1} &            &        &              & 0          \\
\ell_{2,1} & \ell_{2,2} &        &              &            \\
\ell_{3,1} & \ell_{3,2} & \ddots &              &            \\
\vdots     & \vdots     & \ddots & \ddots       &            \\
\ell_{n,1} & \ell_{n,2} & \ldots & \ell_{n,n-1} & \ell_{n,n}
\end{bmatrix}

is called a lower trian ...
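
Because such a system is triangular, ''Lx = b'' can be solved directly by forward substitution without any elimination; a minimal sketch (assuming real coefficients and a nonzero diagonal):

```python
def forward_substitution(L, b):
    """Solve L x = b for a lower triangular L with nonzero diagonal entries.
    Row i involves only x[0..i], so the solution is built up in O(n^2) steps."""
    n = len(b)
    x = [0.0] * n
    for i in range(n):
        s = b[i]
        for j in range(i):
            s -= L[i][j] * x[j]      # subtract the already-known part of row i
        x[i] = s / L[i][i]           # the diagonal entry must be nonzero
    return x

# Example: L = [[2,0,0],[3,1,0],[1,-1,4]], b = [2,5,3]  ->  x = [1.0, 2.0, 1.0]
print(forward_substitution([[2, 0, 0], [3, 1, 0], [1, -1, 4]], [2, 5, 3]))
```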



Gaussian Elimination
In mathematics, Gaussian elimination, also known as row reduction, is an algorithm for solving systems of linear equations. It consists of a sequence of row-wise operations performed on the corresponding matrix of coefficients. This method can also be used to compute the rank of a matrix, the determinant of a square matrix, and the inverse of an invertible matrix. The method is named after Carl Friedrich Gauss (1777–1855). To perform row reduction on a matrix, one uses a sequence of elementary row operations to modify the matrix until the lower left-hand corner of the matrix is filled with zeros, as much as possible. There are three types of elementary row operations:
* Swapping two rows,
* Multiplying a row by a nonzero number,
* Adding a multiple of one row to another row.
Using these operations, a matrix can always be transformed into an upper triangular matrix (possibly bordered by rows or columns of zeros), and in fact one that is in row echelon form. Once all of the ...
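
A minimal sketch of row reduction to row echelon form with partial pivoting (an illustration with assumed names, not a library-grade implementation):

```python
def row_echelon(matrix):
    """Return a row echelon form of a matrix (list of rows of floats),
    using row swaps and the addition of multiples of the pivot row."""
    A = [row[:] for row in matrix]                    # work on a copy
    rows, cols = len(A), len(A[0])
    pivot_row = 0
    for col in range(cols):
        if pivot_row >= rows:
            break
        # Partial pivoting: choose the row with the largest entry in this column.
        best = max(range(pivot_row, rows), key=lambda r: abs(A[r][col]))
        if abs(A[best][col]) < 1e-12:
            continue                                  # no pivot in this column
        A[pivot_row], A[best] = A[best], A[pivot_row] # swap rows
        for r in range(pivot_row + 1, rows):
            factor = A[r][col] / A[pivot_row][col]
            for c in range(col, cols):
                A[r][c] -= factor * A[pivot_row][c]   # zero out the entry below the pivot
        pivot_row += 1
    return A

# Example: reduce a 3x3 coefficient matrix to upper triangular form.
print(row_echelon([[2.0, 1.0, -1.0], [-3.0, -1.0, 2.0], [-2.0, 1.0, 2.0]]))
```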