The leftover hash lemma is a lemma in cryptography first stated by Russell Impagliazzo, Leonid Levin, and Michael Luby.

Imagine that you have a secret key X that has n uniform random bits, and you would like to use this secret key to encrypt a message. Unfortunately, you were a bit careless with the key and know that an adversary was able to learn the values of some t < n bits of that key, but you do not know which t bits. Can you still use your key, or do you have to throw it away and choose a new key? The leftover hash lemma tells us that we can produce a key of about n − t bits over which the adversary has almost no knowledge. Since the adversary knows all but n − t bits, this is almost optimal.
More precisely, the leftover hash lemma tells us that we can extract a length asymptotic to H_\infty(X) (the min-entropy of X) bits from a random variable X that are almost uniformly distributed. In other words, an adversary who has some partial knowledge about X will have almost no knowledge about the extracted value. That is why this is also called privacy amplification (see the privacy amplification section of the article Quantum key distribution). Randomness extractors achieve the same result, but (normally) use less randomness.

Let X be a random variable over \mathcal{X} and let m > 0. Let h\colon \mathcal{S} \times \mathcal{X} \rightarrow \{0, 1\}^m be a 2-universal hash function. If
:m \leq H_\infty(X) - 2 \log\left(\frac{1}{\varepsilon}\right)
then for S uniform over \mathcal{S} and independent of X, we have:
:\delta\left[(h(S, X), S), (U, S)\right] \leq \varepsilon,
where U is uniform over \{0, 1\}^m and independent of S.
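For illustration (with arbitrarily chosen numbers, not part of the lemma): suppose X is a 256-bit key of which an adversary may have learned at most 96 bits, so that H_\infty(X) \geq 160 from the adversary's point of view. Taking logarithms base 2 and choosing \varepsilon = 2^{-40}, the condition of the lemma allows
:m \leq 160 - 2 \log_2\left(\tfrac{1}{\varepsilon}\right) = 160 - 80 = 80,
so about 80 bits can be extracted whose joint distribution with the seed is within statistical distance 2^{-40} of uniform.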
In the statement above, H_\infty(X) = -\log \max_x \Pr[X = x] is the min-entropy of X, which measures the amount of randomness X has. The min-entropy is always less than or equal to the Shannon entropy. Note that \max_x \Pr[X = x] is the probability of correctly guessing X. (The best guess is to guess the most probable value.) Therefore, the min-entropy measures how difficult it is to guess X.

The quantity
:0 \le \delta(X, Y) = \frac{1}{2} \sum_v \left| \Pr[X = v] - \Pr[Y = v] \right| \le 1
is the statistical distance between X and Y.
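The following Python sketch is a minimal illustration rather than part of the lemma itself. It assumes the well-known 2-universal family of GF(2)-linear hashes, h(S, X) = S·X with the seed S a uniformly random m × n binary matrix, and the names n, t, m and eps are chosen here to mirror the informal discussion above: it enumerates the adversary's conditional distribution of the key and measures how far the m extracted bits are from uniform.

import secrets
from collections import Counter

def random_seed(m, n):
    # Seed S for the hash: a uniformly random m x n binary matrix over GF(2),
    # stored as one n-bit integer per row.
    return [secrets.randbits(n) for _ in range(m)]

def hash_gf2(seed, x):
    # 2-universal hash h(S, X) = S*X over GF(2): output bit i is the parity
    # of the bitwise AND of row i with x (an inner product mod 2).
    out = 0
    for row in seed:
        out = (out << 1) | (bin(row & x).count("1") & 1)
    return out

def distance_from_uniform(counts, m):
    # Statistical distance between the empirical distribution of the m
    # extracted bits and the uniform distribution on {0,1}^m.
    total = sum(counts.values())
    uniform = 1.0 / 2 ** m
    return 0.5 * sum(abs(counts.get(v, 0) / total - uniform) for v in range(2 ** m))

# Toy parameters (hypothetical): a 12-bit key X whose low t = 4 bits the
# adversary has learned, so H_inf(X) = n - t = 8 from the adversary's view.
n, t = 12, 4
eps = 0.25
m = (n - t) - 2 * 2   # m <= H_inf(X) - 2*log2(1/eps) = 8 - 4 = 4

leaked_low_bits = secrets.randbits(t)
seed = random_seed(m, n)

# The adversary's view of X: uniform over the 2^(n-t) keys consistent with
# the leaked bits. Hash every candidate key with the public seed S.
counts = Counter(
    hash_gf2(seed, (free << t) | leaked_low_bits)
    for free in range(2 ** (n - t))
)

# The lemma bounds the distance on average over the seed S; for most seeds
# the m extracted bits are close to uniform.
print("statistical distance from uniform:", distance_from_uniform(counts, m))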


See also

* Universal hashing
* Min-entropy
* Rényi entropy
* Information-theoretic security


References

* C. H. Bennett, G. Brassard, and J. M. Robert. ''Privacy amplification by public discussion''. SIAM Journal on Computing, 17(2):210–229, 1988.
* C. Bennett, G. Brassard, C. Crépeau, and U. Maurer. ''Generalized privacy amplification''. IEEE Transactions on Information Theory, 41, 1995.
* J. Håstad, R. Impagliazzo, L. A. Levin, and M. Luby. ''A Pseudorandom Generator from any One-way Function''. SIAM Journal on Computing, 28(4):1364–1396, 1999.