List Decoding
In coding theory, list decoding is an alternative to unique decoding of error-correcting codes for large error rates. The notion was proposed by Elias in the 1950s. The main idea behind list decoding is that the decoding algorithm, instead of outputting a single possible message, outputs a list of possibilities, one of which is correct. This allows for handling a greater number of errors than unique decoding does. The unique decoding model in coding theory, which is constrained to output a single valid codeword from the received word, cannot tolerate a larger fraction of errors. This resulted in a gap between the error-correction performance for stochastic noise models (proposed by Shannon) and the adversarial noise model (considered by Richard Hamming). Since the mid-1990s, significant algorithmic progress by the coding theory community has bridged this gap. Much of this progress is based on a relaxed error-correction model called list decoding, wherein the decoder out ...
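As a rough illustration (not part of the article), the following Python sketch list-decodes a toy binary code by brute force: it returns every codeword within a given Hamming radius of the received word, and the output becomes a genuine list once the radius exceeds the unique-decoding bound. The code and received words are arbitrary choices for the example.

    def hamming(u, v):
        """Number of positions in which two words differ."""
        return sum(a != b for a, b in zip(u, v))

    def list_decode(code, received, radius):
        """Return every codeword within the given Hamming radius of 'received'."""
        return [c for c in code if hamming(c, received) <= radius]

    # Toy binary code with minimum distance 4, so unique decoding
    # handles floor((4-1)/2) = 1 error.
    code = ["000000", "111100", "001111", "110011"]

    print(list_decode(code, "100000", 1))  # ['000000']: the unique-decoding regime
    print(list_decode(code, "110000", 2))  # ['000000', '111100', '110011']: a list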


Coding Theory
Coding theory is the study of the properties of codes and their respective fitness for specific applications. Codes are used for data compression, cryptography, error detection and correction, data transmission and data storage. Codes are studied by various scientific disciplines—such as information theory, electrical engineering, mathematics, linguistics, and computer science—for the purpose of designing efficient and reliable data transmission methods. This typically involves the removal of redundancy and the correction or detection of errors in the transmitted data. There are four types of coding: data compression (or ''source coding''), error control (or ''channel coding''), cryptographic coding, and line coding. Data compression attempts to remove unwanted redundancy from the data produced by a source in order to transmit it more efficiently. For example, ZIP data compression makes data files smaller, for purposes such as reducing Internet traffic. Data compression and er ...
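A tiny Python illustration (my own, not from the article) of two of these coding types: source coding strips redundancy from the data, while channel coding adds controlled redundancy so that errors can be detected.

    import zlib

    message = b"abcabcabcabcabcabcabcabcabcabc"
    compressed = zlib.compress(message)      # data compression (source coding)
    print(len(message), len(compressed))     # the repetitive source shrinks

    def with_parity(bits):
        """Error control: append a parity bit so any single bit flip is detectable."""
        return bits + [sum(bits) % 2]

    word = with_parity([1, 0, 1, 1])
    word[2] ^= 1                             # simulate one transmission error
    print(sum(word) % 2 != 0)                # True: the error is detected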


Venkatesan Guruswami
Venkatesan Guruswami (born 1976) is a senior scientist at the Simons Institute for the Theory of Computing and Professor of EECS and Mathematics at the University of California, Berkeley. He did his high schooling at Padma Seshadri Bala Bhavan in Chennai, India. He completed his undergraduate degree in Computer Science at IIT Madras and his doctorate at the Massachusetts Institute of Technology under the supervision of Madhu Sudan in 2001. After receiving his PhD, he spent a year at UC Berkeley as a Miller Fellow, and then was a member of the faculty at the University of Washington from 2002 to 2009. His primary area of research is computer science, in particular error-correcting codes. During 2007–2008, he visited the Institute for Advanced Study as a member of the School of Mathematics. He also visited SCS at Carnegie Mellon University during 2008–09 as visiting faculty. From July 2009 through December 2020 he was a faculty member in the Computer Science Department in the ...


Luca Trevisan
Luca Trevisan (born 21 July 1971) is an Italian professor of computer science at Bocconi University in Milan. His research area is theoretical computer science, focusing on randomness, cryptography, probabilistically checkable proofs, approximation, property testing, spectral graph theory, and sublinear algorithms. He also runs a blog, ''in theory'', about theoretical computer science. Education and career Trevisan received his PhD from La Sapienza, Rome, under the supervision of Pierluigi Crescenzi. After postdoctoral studies at the Massachusetts Institute of Technology and DIMACS, he held an assistant professor position at Columbia University before moving to the University of California, Berkeley and then, in 2010, to Stanford. In 2014 he returned to Berkeley, and in 2019 he moved to the Department of Decision Sciences at Bocconi University. Recognition Trevisan won the Danny Lewin Best Student Paper Award at the 1997 Symposium on Theory of Computing, the Oberwolfach Prize in 2 ...




Pseudorandom Generator
In theoretical computer science and cryptography, a pseudorandom generator (PRG) for a class of statistical tests is a deterministic procedure that maps a random seed to a longer pseudorandom string such that no statistical test in the class can distinguish between the output of the generator and the uniform distribution. The random seed itself is typically a short binary string drawn from the uniform distribution. Many different classes of statistical tests have been considered in the literature, among them the class of all Boolean circuits of a given size. It is not known whether good pseudorandom generators for this class exist, but it is known that their existence is in a certain sense equivalent to (unproven) circuit lower bounds in computational complexity theory. Hence the construction of pseudorandom generators for the class of Boolean circuits of a given size rests on currently unproven hardness assumptions. Definition Let \mathcal A = \{A : \{0,1\}^n \to \{0,1\}\} be a class of functions. The ...
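A toy Python sketch of the definition (all choices here are illustrative, and G is deliberately a bad generator): G stretches 2-bit seeds to 3 bits, and a statistical test A achieves a large distinguishing advantage against it, computed by exhaustive enumeration.

    from itertools import product

    def G(seed):                      # deterministic map {0,1}^2 -> {0,1}^3
        a, b = seed
        return (a, b, a ^ b)          # third bit is the XOR of the first two

    def A(x):                         # a statistical test: is x[2] = x[0] xor x[1]?
        return 1 if x[2] == (x[0] ^ x[1]) else 0

    p_G = sum(A(G(s)) for s in product([0, 1], repeat=2)) / 4
    p_U = sum(A(x) for x in product([0, 1], repeat=3)) / 8
    print(abs(p_G - p_U))             # 0.5: this test distinguishes, so G fails it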


Extractor (mathematics)
An (N,M,D,K,\epsilon)-extractor is a bipartite graph with N nodes on the left and M nodes on the right such that each node on the left has D neighbors (on the right), with the added property that for any subset A of the left vertices of size at least K, the distribution on right vertices obtained by choosing a random node in A and then following a random edge to get a node x on the right side is \epsilon-close to the uniform distribution in terms of total variation distance. A disperser is a related graph. An equivalent way to view an extractor is as a bivariate function E : [N] \times [D] \rightarrow [M] in the natural way. With this view it turns out that the extractor property is equivalent to: for any source of randomness X that gives n bits with min-entropy \log K, the distribution E(X,U_D) is \epsilon-close to U_M, where U_T denotes the uniform distribution on [T]. Extractors are interesting when they can be ...
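A brute-force check of this property on a toy graph (all parameters are my own illustrative choices, not from the article): for every flat source A of size K, compute the total variation distance of the induced right-vertex distribution from uniform.

    from itertools import combinations

    N, M, D = 4, 2, 2
    # neighbor[i] lists the D right-neighbors of left vertex i (an arbitrary toy graph)
    neighbor = {0: [0, 1], 1: [0, 1], 2: [1, 0], 3: [1, 1]}

    def tv_from_uniform(A):
        """TV distance between the induced right distribution and uniform on [M]."""
        counts = [0] * M
        for i in A:
            for j in neighbor[i]:
                counts[j] += 1
        total = len(A) * D
        return sum(abs(c / total - 1 / M) for c in counts) / 2

    K = 2
    worst = max(tv_from_uniform(A) for A in combinations(range(N), K))
    print(worst)  # epsilon must be at least this value for this toy graph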


Permanent (mathematics)
In linear algebra, the permanent of a square matrix is a function of the matrix similar to the determinant. The permanent, as well as the determinant, is a polynomial in the entries of the matrix. Both are special cases of a more general function of a matrix called the immanant. Definition The permanent of an n×n matrix A = (a_{i,j}) is defined as \operatorname{perm}(A)=\sum_{\sigma\in S_n}\prod_{i=1}^n a_{i,\sigma(i)}. The sum here extends over all elements σ of the symmetric group ''S''''n''; i.e. over all permutations of the numbers 1, 2, ..., ''n''. For example, \operatorname{perm}\begin{pmatrix}a&b \\ c&d\end{pmatrix}=ad+bc, and \operatorname{perm}\begin{pmatrix}a&b&c \\ d&e&f \\ g&h&i\end{pmatrix}=aei + bfg + cdh + ceg + bdi + afh. The definition of the permanent of ''A'' differs from that of the determinant of ''A'' in that the signatures of the permutations are not taken into account. The permanent of a matrix A is denoted per ''A'', perm ''A'', or Per ''A'', sometimes with parentheses around the argument. Minc uses Per(''A'') for the permanent of rectangular m ...
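A direct Python implementation of the definition above: a sum over all permutations, with no signs attached (unlike the determinant).

    from itertools import permutations
    from math import prod

    def permanent(A):
        """perm(A) = sum over permutations sigma of prod_i A[i][sigma(i)]."""
        n = len(A)
        return sum(prod(A[i][s[i]] for i in range(n))
                   for s in permutations(range(n)))

    print(permanent([[1, 2], [3, 4]]))   # 1*4 + 2*3 = 10 (the determinant is -2)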


One-way Function
In computer science, a one-way function is a function that is easy to compute on every input, but hard to invert given the image of a random input. Here, "easy" and "hard" are to be understood in the sense of computational complexity theory, specifically the theory of polynomial time problems. Not being one-to-one is not considered sufficient for a function to be called one-way (see Theoretical definition, below). The existence of such one-way functions is still an open conjecture. Their existence would prove that the complexity classes P and NP are not equal, thus resolving the foremost unsolved question of theoretical computer science (Oded Goldreich (2001), ''Foundations of Cryptography: Volume 1, Basic Tools'', Cambridge University Press). The converse is not known to be true, i.e. the existence of a proof that P ≠ NP would not directly imply the existence of one-way functions. In applied contexts, the terms "easy" and "hard" ...
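As a concrete candidate (an unproven hardness assumption, not an established one-way function), modular exponentiation is easy to compute forward, while inverting it, the discrete logarithm, has no known polynomial-time algorithm. A Python sketch with toy parameters:

    p, g = 2**127 - 1, 3           # a Mersenne prime and a small base (toy choices)
    x = 123456789                  # the "random input"
    y = pow(g, x, p)               # easy direction: fast modular exponentiation

    def invert_by_search(y):
        """Inversion by brute force: exponential time in the bit length."""
        e, acc = 0, 1              # acc tracks g**e mod p
        while acc != y:
            acc = acc * g % p
            e += 1
        return e

    print(invert_by_search(pow(g, 20, p)))  # 20; feasible only for tiny exponents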




Hard-core Predicate
In cryptography, a hard-core predicate of a one-way function ''f'' is a predicate ''b'' (i.e., a function whose output is a single bit) which is easy to compute (as a function of ''x'') but is hard to compute given ''f(x)''. In formal terms, there is no probabilistic polynomial-time (PPT) algorithm that computes ''b(x)'' from ''f(x)'' with probability significantly greater than one half over random choice of ''x'' (Goldwasser, S. and Bellare, M., "Lecture Notes on Cryptography", summer course on cryptography, MIT, 1996–2001). In other words, if ''x'' is drawn uniformly at random, then given ''f(x)'', any PPT adversary can only distinguish the hard-core bit ''b(x)'' and a uniformly random bit with negligible advantage over the length of ''x''. A hard-core function can be defined similarly. That is, if ''x'' is chosen uniformly at random, then given ''f(x)'', any PPT algorithm can only distinguish the hard-core function value ''h(x)'' and uniformly random bits of length ''|h(x)|'' ...
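The classic example is the Goldreich–Levin predicate: for any one-way function f, the bit b(x, r) = ⟨x, r⟩ mod 2 is hard-core for the function f'(x, r) = (f(x), r). A minimal Python sketch of the predicate itself, with f left abstract and the inputs chosen arbitrarily for illustration:

    def gl_bit(x_bits, r_bits):
        """Goldreich-Levin predicate: inner product of x and r over GF(2)."""
        return sum(xb & rb for xb, rb in zip(x_bits, r_bits)) % 2

    x = [1, 0, 1, 1]
    r = [1, 1, 0, 1]
    print(gl_bit(x, r))   # (1*1 + 0*1 + 1*0 + 1*1) mod 2 = 0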


Cryptography
Cryptography, or cryptology (from Ancient Greek ''kryptós'', "hidden, secret"; and ''graphein'', "to write", or ''-logia'', "study", respectively), is the practice and study of techniques for secure communication in the presence of adversarial behavior. More generally, cryptography is about constructing and analyzing protocols that prevent third parties or the public from reading private messages. Modern cryptography exists at the intersection of the disciplines of mathematics, computer science, information security, electrical engineering, digital signal processing, physics, and others. Core concepts related to information security (data confidentiality, data integrity, authentication, and non-repudiation) are also central to cryptography. Practical applications of cryptography include electronic commerce, chip-based payment cards, digital currencies, computer passwords, and military communications. Cryptography prior to the modern age was effectively synonymo ...


Analysis Of Algorithms
In computer science, the analysis of algorithms is the process of finding the computational complexity of algorithms—the amount of time, storage, or other resources needed to execute them. Usually, this involves determining a function that relates the size of an algorithm's input to the number of steps it takes (its time complexity) or the number of storage locations it uses (its space complexity). An algorithm is said to be efficient when this function's values are small, or grow slowly compared to a growth in the size of the input. Different inputs of the same size may cause the algorithm to have different behavior, so best, worst and average case descriptions might all be of practical interest. When not otherwise specified, the function describing the performance of an algorithm is usually an upper bound, determined from the worst case inputs to the algorithm. The term "analysis of algorithms" was coined by Donald Knuth. Algorithm analysis is an important part of a br ...
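A small experiment (illustrative, not from the article) that counts the comparisons linear search performs on its worst-case input, i.e. a target that is absent; the count grows linearly with the input size n, so the worst-case time complexity is O(n).

    def linear_search(xs, target):
        """Return the number of comparisons made while scanning for target."""
        comparisons = 0
        for x in xs:
            comparisons += 1
            if x == target:
                break
        return comparisons

    for n in (10, 100, 1000):
        print(n, linear_search(list(range(n)), -1))   # worst case: n comparisons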


Folded Reed–Solomon Code
In coding theory, folded Reed–Solomon codes are obtained from Reed–Solomon codes by bundling groups of m consecutive codeword symbols into single symbols over a larger alphabet. Folded Reed–Solomon codes are also a special case of Parvaresh–Vardy codes. Using optimal parameters one can decode with a rate of ''R'', and achieve a decoding radius of 1 − ''R''. The term "folded Reed–Solomon codes" was coined in a paper by V.Y. Krachkovsky, with an algorithm that presented Reed–Solomon codes with many random "phased burst" errors. The list-decoding algorithm for folded RS codes corrects beyond the 1-\sqrt{R} bound for Reed–Solomon codes achieved by the Guruswami–Sudan algorithm for such phased burst errors. History One of the ongoing challenges in Coding Theory is to have error correcting codes achieve an optimal trade-off between (Coding) Rate and Error-Correction Radius. Though this may not be possible to achieve practically (due to ...
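A sketch of the folding operation alone (the parameters and the stand-in codeword are illustrative): a codeword of length n over F_q becomes a word of length n/m over the larger alphabet F_q^m by bundling m consecutive symbols.

    def fold(codeword, m):
        """Bundle m consecutive symbols into one symbol over the larger alphabet."""
        assert len(codeword) % m == 0, "fold parameter m must divide the length"
        return [tuple(codeword[i:i + m]) for i in range(0, len(codeword), m)]

    rs_codeword = [5, 1, 4, 1, 3, 9, 2, 6]   # stand-in symbols, not a real RS codeword
    print(fold(rs_codeword, 2))              # [(5, 1), (4, 1), (3, 9), (2, 6)]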