Privacy Amplification
The leftover hash lemma is a lemma in cryptography first stated by Russell Impagliazzo, Leonid Levin, and Michael Luby. Imagine that you have a secret key X that has n uniform random bits, and you would like to use this secret key to encrypt a message. Unfortunately, you were a bit careless with the key, and know that an adversary was able to learn the values of some t < n bits of that key, but you do not know which t bits. Can you still use your key, or do you have to throw it away and choose a new key? The leftover hash lemma tells us that we can produce a key of about n - t bits, over which the adversary has almost no knowledge. Since the adversary knows all but n - t bits, this is almost optimal. More precisely, the leftover hash lemma tells us that we can extract a length asymptotic to H_\infty(X) (the min-entropy of X) bits from a random variable X that are almost uniformly distributed. In other words, an adversary who has some partial knowledge about X will have almost no knowledge about the extracted value. This is why the technique is also called privacy amplification.
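To make the idea concrete, here is a minimal Python sketch of privacy amplification using a 2-universal (Carter-Wegman style) hash family; the key length, the leakage estimate, and the security margin are illustrative assumptions, not values prescribed by the lemma itself.

```python
import secrets

# A sketch of privacy amplification via a 2-universal hash family, here the
# classic h_{a,b}(x) = ((a*x + b) mod P) mod 2**out_bits construction.
# All parameter choices below are illustrative assumptions.

P = (1 << 127) - 1  # a Mersenne prime larger than any 96-bit key

def universal_hash(a: int, b: int, x: int, out_bits: int) -> int:
    """One member of the Carter-Wegman family, truncated to out_bits bits."""
    return ((a * x + b) % P) % (1 << out_bits)

# Alice holds a 96-bit key; suppose an eavesdropper learned about 32 bits,
# so roughly 64 bits of min-entropy remain.
key = secrets.randbits(96)
remaining_entropy = 64
security_loss = 16                  # extract fewer bits to get near-uniformity
out_bits = remaining_entropy - security_loss

# The hash seed (a, b) is chosen at random and may even be published;
# the leftover hash lemma says the hashed key is close to uniform anyway.
a, b = secrets.randbelow(P - 1) + 1, secrets.randbelow(P)
new_key = universal_hash(a, b, key, out_bits)
print(f"amplified {out_bits}-bit key: {new_key:0{out_bits // 4}x}")
```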


Lemma (mathematics)
In mathematics, informal logic and argument mapping, a lemma (plural lemmas or lemmata) is a generally minor, proven proposition which is used as a stepping stone to a larger result. For that reason, it is also known as a "helping theorem" or an "auxiliary theorem". In many cases, a lemma derives its importance from the theorem it aims to prove; however, a lemma can also turn out to be more important than originally thought. The word "lemma" derives from the Ancient Greek λῆμμα ("anything which is received", such as a gift, profit, or a bribe).

Comparison with theorem
There is no formal distinction between a lemma and a theorem, only one of intention (see Theorem terminology). However, a lemma can be considered a minor result whose sole purpose is to help prove a more substantial theorem – a step in the direction of proof.

Well-known lemmas
A good stepping stone can lead to many others. Some powerful results in mathematics are known as lemmas, first named for their originally min ...


Quantum Key Distribution
Quantum key distribution (QKD) is a secure communication method which implements a cryptographic protocol involving components of quantum mechanics. It enables two parties to produce a shared random secret key known only to them, which can then be used to encrypt and decrypt messages. It is often incorrectly called quantum cryptography, as it is the best-known example of a quantum cryptographic task. An important and unique property of quantum key distribution is the ability of the two communicating users to detect the presence of any third party trying to gain knowledge of the key. This results from a fundamental aspect of quantum mechanics: the process of measuring a quantum system in general disturbs the system. A third party trying to eavesdrop on the key must in some way measure it, thus introducing detectable anomalies. By using quantum superpositions or quantum entanglement and transmitting information in quantum states, a communication system can be implemented that detects eavesdropping.
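The eavesdropping-detection property can be illustrated with a toy classical simulation of an intercept-resend attack in the spirit of the BB84 protocol; the model below only mimics basis-mismatched measurements with coin flips and is not a simulation of actual quantum mechanics.

```python
import random

# Toy intercept-resend simulation in the spirit of BB84. A "qubit" is
# modeled as a (preparation basis, bit) pair; measuring in the wrong
# basis yields a uniformly random bit.

def measure(state, basis):
    prep_basis, bit = state
    return bit if prep_basis == basis else random.randint(0, 1)

def run(n=10_000, eve_listens=False):
    errors = sifted = 0
    for _ in range(n):
        alice_basis, bit = random.randint(0, 1), random.randint(0, 1)
        state = (alice_basis, bit)
        if eve_listens:  # Eve measures in a random basis and resends
            eve_basis = random.randint(0, 1)
            state = (eve_basis, measure(state, eve_basis))
        bob_basis = random.randint(0, 1)
        result = measure(state, bob_basis)
        if bob_basis == alice_basis:   # keep only matching-basis rounds
            sifted += 1
            errors += (result != bit)
    return errors / sifted

print(f"error rate without Eve: {run():.3f}")                  # ~0.000
print(f"error rate with Eve:    {run(eve_listens=True):.3f}")  # ~0.250
```

The elevated error rate on the sifted key is exactly the "detectable anomaly" the text describes: Alice and Bob sacrifice a random subset of their bits, compare them publicly, and abort if the error rate is too high.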


Information-theoretic Security
A cryptosystem is considered to have information-theoretic security (also called unconditional security) if the system is secure against adversaries with unlimited computing resources and time. In contrast, a system which depends on the computational cost of cryptanalysis to be secure (and thus can be broken by an attack with unlimited computation) is called computationally, or conditionally, secure.

Overview
An encryption protocol with information-theoretic security is impossible to break even with infinite computational power. Protocols proven to be information-theoretically secure are resistant to future developments in computing. The concept of information-theoretically secure communication was introduced in 1949 by American mathematician Claude Shannon, one of the founders of classical information theory, who used it to prove the one-time pad system was secure. Information-theoretically secure cryptosystems have been used for the most sensitive governmental communications ...
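The one-time pad mentioned above is simple enough to sketch directly; this minimal example assumes Python's secrets module as the source of key material, and, as always, perfect secrecy requires the key to be uniform, kept secret, and never reused.

```python
import secrets

# Minimal one-time pad sketch: XOR with a truly random key as long as
# the message. Decryption is the same operation, since XOR is its own
# inverse.

def otp(data: bytes, key: bytes) -> bytes:
    assert len(key) == len(data), "key must be as long as the message"
    return bytes(d ^ k for d, k in zip(data, key))

message = b"attack at dawn"
key = secrets.token_bytes(len(message))  # uniform, single-use key
ciphertext = otp(message, key)
assert otp(ciphertext, key) == message
```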


Rényi Entropy
In information theory, the Rényi entropy is a quantity that generalizes various notions of entropy, including Hartley entropy, Shannon entropy, collision entropy, and min-entropy. The Rényi entropy is named after Alfréd Rényi, who looked for the most general way to quantify information while preserving additivity for independent events. In the context of fractal dimension estimation, the Rényi entropy forms the basis of the concept of generalized dimensions. The Rényi entropy is important in ecology and statistics as an index of diversity. It is also important in quantum information, where it can be used as a measure of entanglement. In the Heisenberg XY spin chain model, the Rényi entropy as a function of α can be calculated explicitly because it is an automorphic function with respect to a particular subgroup of the modular group. In theoretical computer science, the min-entropy is used in the context of randomness extractors.

Definition
The Rényi entro ...
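A short sketch of the definition, under the convention of base-2 logarithms: the Rényi entropy of order a of a finite distribution p is H_a(p) = log2(sum_i p_i^a) / (1 - a), with the Hartley, Shannon, and min-entropy cases recovered as limits; the example distribution is illustrative.

```python
import math

# Rényi entropy H_a(p) = log2(sum_i p_i**a) / (1 - a) for a >= 0, a != 1,
# for a finite distribution, with the named special cases as limits.

def renyi_entropy(p, a):
    p = [x for x in p if x > 0]
    if a == 0:                      # Hartley (max-)entropy
        return math.log2(len(p))
    if a == 1:                      # Shannon entropy, the a -> 1 limit
        return -sum(x * math.log2(x) for x in p)
    if a == math.inf:               # min-entropy, the a -> infinity limit
        return -math.log2(max(p))
    return math.log2(sum(x ** a for x in p)) / (1 - a)

p = [0.5, 0.25, 0.125, 0.125]
for a in (0, 1, 2, math.inf):       # note: non-increasing in a
    print(f"H_{a}(p) = {renyi_entropy(p, a):.4f}")
```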




Min-entropy
The min-entropy, in information theory, is the smallest of the Rényi family of entropies, corresponding to the most conservative way of measuring the unpredictability of a set of outcomes, as the negative logarithm of the probability of the ''most likely'' outcome. The various Rényi entropies are all equal for a uniform distribution, but measure the unpredictability of a nonuniform distribution in different ways. The min-entropy is never greater than the ordinary or Shannon entropy (which measures the average unpredictability of the outcomes) and that in turn is never greater than the Hartley or max-entropy, defined as the logarithm of the ''number'' of outcomes with nonzero probability. As with the classical Shannon entropy and its quantum generalization, the von Neumann entropy, one can define a conditional version of min-entropy. The conditional quantum min-entropy is a one-shot, or conservative, analog of conditional quantum entropy. To interpret a conditional information ...
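The "most likely outcome" reading lends itself to a tiny sketch: min-entropy measures how hard a single best guess is, so H_min(p) = -log2(max_x p(x)); the biased distribution below is purely illustrative.

```python
import math

# Min-entropy as guessing difficulty: an adversary's best single guess
# succeeds with probability max_x p(x), so H_min(p) = -log2(max_x p(x)).

def min_entropy(p):
    return -math.log2(max(p))

uniform = [1 / 8] * 8
leaky = [0.50] + [0.50 / 7] * 7   # one outcome is guessed half the time

print(min_entropy(uniform))  # 3.0 bits: 8 outcomes, all equally hard
print(min_entropy(leaky))    # 1.0 bit: well below its Shannon entropy (~2.4)
```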


Universal Hashing
In mathematics and computing, universal hashing (in a randomized algorithm or data structure) refers to selecting a hash function at random from a family of hash functions with a certain mathematical property (see definition below). This guarantees a low number of collisions in expectation, even if the data is chosen by an adversary. Many universal families are known (for hashing integers, vectors, strings), and their evaluation is often very efficient. Universal hashing has numerous uses in computer science, for example in implementations of hash tables, randomized algorithms, and cryptography.

Introduction
Assume we want to map keys from some universe U into m bins (labelled [m] = \{0, \dots, m-1\}). The algorithm will have to handle some data set S \subseteq U of |S| = n keys, which is not known in advance. Usually, the goal of hashing is to obtain a low number of collisions (keys from S that land in the same bin). A deterministic hash function cannot offer any guarantee in an adversarial sett ...
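As an illustration, here is a sketch of the textbook Carter-Wegman family h_{a,b}(x) = ((a*x + b) mod p) mod m, with an empirical check of the pairwise collision bound; the prime, bin count, and test keys are arbitrary choices, not values from the text.

```python
import random

# The textbook universal family h_{a,b}(x) = ((a*x + b) % p) % m with p
# prime and larger than every key; drawing (a, b) at random bounds the
# collision probability of any fixed pair of distinct keys by about 1/m.

p, m = 2_147_483_647, 100          # a Mersenne prime and the number of bins

def draw_hash():
    a = random.randrange(1, p)
    b = random.randrange(0, p)
    return lambda x: ((a * x + b) % p) % m

# Empirically estimate Pr[h(x) == h(y)] for one fixed pair of keys.
x, y = 12345, 67890
trials = 100_000
collisions = sum(1 for _ in range(trials) if (h := draw_hash())(x) == h(y))
print(f"collision rate: {collisions / trials:.4f}  (bound ~ 1/m = {1 / m})")
```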


Statistical Distance
In statistics, probability theory, and information theory, a statistical distance quantifies the distance between two statistical objects, which can be two random variables, two probability distributions or samples, or an individual sample point and a population or a wider sample of points. A distance between populations can be interpreted as measuring the distance between two probability distributions, and hence such distances are essentially measures of distances between probability measures. Where statistical distance measures relate to the differences between random variables, these may have statistical dependence (Dodge, Y. 2003, entry for distance), and hence these distances are not directly related to measures of distances between probability measures. Again, a measure of distance between random variables may relate to the extent of dependence between them, rather than to their individual values. Statistical distance measures are not typically m ...
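For distributions on a common finite outcome space, the total variation distance (the statistical distance used to state the leftover hash lemma) is easy to sketch; the two example distributions are made up for illustration.

```python
# Total variation distance between two finite distributions on the same
# outcome space: d(P, Q) = (1/2) * sum_x |P(x) - Q(x)|. It equals the
# maximum advantage any test has in distinguishing P from Q.

def total_variation(p, q):
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

uniform = [0.25, 0.25, 0.25, 0.25]
skewed = [0.40, 0.30, 0.20, 0.10]
print(total_variation(uniform, skewed))  # 0.2
```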


Shannon Entropy
Shannon may refer to:

People
* Shannon (given name)
* Shannon (surname)
* Shannon (American singer), stage name of singer Shannon Brenda Greene (born 1958)
* Shannon (South Korean singer), British-South Korean singer and actress Shannon Arrum Williams (born 1998)
* Shannon, intermittent stage name of English singer-songwriter Marty Wilde (born 1939)
* Claude Shannon (1916–2001), American mathematician, electrical engineer, and cryptographer known as a "father of information theory"

Places
Australia
* Shannon, Tasmania, a locality
* Hundred of Shannon, a cadastral unit in South Australia
* Shannon, a former name for the area named Calomba, South Australia since 1916
* Shannon River (Western Australia)

Canada
* Shannon, New Brunswick, a community
* Shannon, Quebec, a city
* Shannon Bay, former name of Darrell Bay, British Columbia
* Shannon Falls, a waterfall in British Columbia

Ireland
* River Shannon, the longest river in Ireland
** Shannon Cave, a subterranean section o ...


Hash Function
A hash function is any function that can be used to map data of arbitrary size to fixed-size values. The values returned by a hash function are called ''hash values'', ''hash codes'', ''digests'', or simply ''hashes''. The values are usually used to index a fixed-size table called a ''hash table''. Use of a hash function to index a hash table is called ''hashing'' or ''scatter storage addressing''. Hash functions and their associated hash tables are used in data storage and retrieval applications to access data in a small and nearly constant time per retrieval. They require an amount of storage space only fractionally greater than the total space required for the data or records themselves. Hashing is a computationally and storage space-efficient form of data access that avoids the non-constant access time of ordered and unordered lists and structured trees, and the often exponential storage requirements of direct access of state spaces of large or variable-length keys. Use of ...
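A small sketch of the idea: a non-cryptographic hash (FNV-1a is used here as a well-known example, not one named by the text) maps byte strings of any length to a fixed-size value, which is then reduced to an index into a fixed-size table.

```python
# FNV-1a (64-bit), a simple non-cryptographic hash, used here to map
# arbitrary-size data to a fixed-size value and then to a table index.

FNV_OFFSET, FNV_PRIME = 0xcbf29ce484222325, 0x100000001b3

def fnv1a(data: bytes) -> int:
    h = FNV_OFFSET
    for byte in data:
        h = ((h ^ byte) * FNV_PRIME) & 0xFFFFFFFFFFFFFFFF  # keep 64 bits
    return h

table_size = 1024

def bucket(key: str) -> int:
    return fnv1a(key.encode()) % table_size   # hash value -> table index

print(bucket("alice"), bucket("bob"))
```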




K-independent Hashing
In computer science, a family of hash functions is said to be ''k''-independent, ''k''-wise independent or ''k''-universal if selecting a function at random from the family guarantees that the hash codes of any designated ''k'' keys are independent random variables (see precise mathematical definitions below). Such families allow good average case performance in randomized algorithms or data structures, even if the input data is chosen by an adversary. The trade-offs between the degree of independence and the efficiency of evaluating the hash function are well studied, and many ''k''-independent families have been proposed.

Background
The goal of hashing is usually to map keys from some large domain (universe) U into a smaller range, such as m bins (labelled [m] = \{0, \dots, m-1\}). In the analysis of randomized algorithms and data structures, it is often desirable for the hash codes of various keys to "behave randomly". For instance, if the hash code of each key were an independent random choi ...
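The classic construction of a k-independent family evaluates a random polynomial of degree k-1 over a prime field; the sketch below follows that construction, with the caveat that the final mod-m reduction makes the bin distribution only approximately uniform.

```python
import random

# k-independent family via a random degree-(k-1) polynomial over Z_p:
# h(x) = (c0 + c1*x + ... + c_{k-1}*x**(k-1)) mod p, reduced to m bins
# and evaluated with Horner's rule.

p = 2_147_483_647  # prime field size; must exceed the key universe

def draw_k_independent(k, m):
    coeffs = [random.randrange(p) for _ in range(k)]
    def h(x):
        acc = 0
        for c in reversed(coeffs):   # Horner: ((c_{k-1}*x + c_{k-2})*x + ...)
            acc = (acc * x + c) % p
        return acc % m
    return h

h = draw_k_independent(k=4, m=100)   # any 4 hash codes behave independently
print([h(x) for x in (1, 2, 3, 42)])
```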


Randomness Extractor
A randomness extractor, often simply called an "extractor", is a function which, when applied to output from a weakly random entropy source together with a short, uniformly random seed, generates a highly random output that appears independent from the source and uniformly distributed. Examples of weakly random sources include radioactive decay or thermal noise; the only restriction on possible sources is that there is no way they can be fully controlled, calculated or predicted, and that a lower bound on their entropy rate can be established. For a given source, a randomness extractor can even be considered to be a true random number generator (TRNG); but there is no single extractor that has been proven to produce truly random output from any type of weakly random source. Sometimes the term "bias" is used to denote a weakly random sou ...
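A deterministic special case that needs no seed is the classic von Neumann extractor, which debiases a source of independent but biased bits; it is sketched here as an illustration, noting that the seeded extractors discussed in the text are strictly more general.

```python
import random

# Von Neumann extractor: read independent biased bits in pairs, output
# the first bit of each unequal pair ('10' -> 1, '01' -> 0), and discard
# equal pairs. The two surviving patterns are equally likely, so the
# output is unbiased.

def biased_bits(n, p_one=0.8):
    return [int(random.random() < p_one) for _ in range(n)]

def von_neumann(bits):
    return [a for a, b in zip(bits[::2], bits[1::2]) if a != b]

raw = biased_bits(100_000)
clean = von_neumann(raw)
print(f"input mean:  {sum(raw) / len(raw):.3f}")      # ~0.800 (biased)
print(f"output mean: {sum(clean) / len(clean):.3f}")  # ~0.500 (unbiased)
```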


Random Variable
A random variable (also called random quantity, aleatory variable, or stochastic variable) is a mathematical formalization of a quantity or object which depends on random events. It is a mapping or a function from possible outcomes (e.g., the possible upper sides of a flipped coin such as heads H and tails T) in a sample space (e.g., the set \{H, T\}) to a measurable space, often the real numbers (e.g., \{-1, 1\}, in which 1 corresponds to H and -1 corresponds to T). Informally, randomness typically represents some fundamental element of chance, such as in the roll of a die; it may also represent uncertainty, such as measurement error. However, the interpretation of probability is philosophically complicated, and even in specific cases is not always straightforward. The purely mathematical analysis of random variables is independent of such interpretational difficulties, and can be based upon a rigorous axiomatic setup. In the formal mathematical language of measure theory, a random var ...
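The coin-flip mapping from the text translates directly into a tiny sketch; the dictionary below simply tabulates the function X with X(H) = 1 and X(T) = -1.

```python
import random

# The coin-flip random variable from the text: a function from the
# sample space {H, T} to the reals, with X(H) = 1 and X(T) = -1.

X = {"H": 1, "T": -1}
outcome = random.choice(["H", "T"])   # a random outcome in the sample space
print(outcome, "->", X[outcome])      # the value the random variable takes
```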