In theoretical computer science, the term isolation lemma (or isolating lemma) refers to randomized algorithms that reduce the number of solutions to a problem to one, should a solution exist. This is achieved by constructing random constraints such that, with non-negligible probability, exactly one solution satisfies these additional constraints if the solution space is not empty. Isolation lemmas have important applications in computer science, such as the Valiant–Vazirani theorem and Toda's theorem in computational complexity theory.

The first isolation lemma was introduced by Valiant and Vazirani (1986), albeit not under that name. Their isolation lemma chooses a random number of random hyperplanes, and has the property that, with non-negligible probability, the intersection of any fixed non-empty solution space with the chosen hyperplanes contains exactly one element. This suffices to show the Valiant–Vazirani theorem: there exists a randomized polynomial-time reduction from the satisfiability problem for Boolean formulas to the problem of detecting whether a Boolean formula has a unique solution. Mulmuley, Vazirani, and Vazirani (1987) introduced an isolation lemma of a slightly different kind: here every coordinate of the solution space is assigned a random weight in a certain range of integers, and the property is that, with non-negligible probability, there is exactly one element in the solution space that has minimum weight. This can be used to obtain a randomized parallel algorithm for the maximum matching problem.

Stronger isolation lemmas have been introduced in the literature to fit different needs in various settings. For example, one later isolation lemma offers guarantees similar to those of Mulmuley et al. while using fewer random bits, and, in the context of the exponential time hypothesis, an isolation lemma for k-CNF formulas has been proved. Noam Ta-Shma (2015, ''A simple proof of the Isolation Lemma'', ECCC) gives an isolation lemma with slightly stronger parameters, which yields non-trivial results even when the size of the weight domain is smaller than the number of variables.


The isolation lemma of Mulmuley, Vazirani, and Vazirani

:Lemma. Let n and N be positive integers, and let \mathcal F be an arbitrary nonempty family of subsets of the universe \{1,\dots,n\}. Suppose each element x \in \{1,\dots,n\} in the universe receives an integer weight w(x), each of which is chosen independently and uniformly at random from \{1,\dots,N\}. The weight of a set ''S'' in \mathcal F is defined as

::w(S) = \sum_{x \in S} w(x)\,.

:Then, with probability at least 1 - n/N, there is a ''unique'' set in \mathcal F that has the minimum weight among all sets of \mathcal F.

It is remarkable that the lemma assumes nothing about the nature of the family \mathcal F: for instance, \mathcal F may include ''all'' 2^n - 1 nonempty subsets. Since the weight of each set in \mathcal F is between 1 and nN, on average there will be (2^n - 1)/(nN) sets of each possible weight. Still, with high probability, there is a unique set that has minimum weight.
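The bound in the lemma can be checked empirically. The sketch below (Python; the parameter choices n = 6 and N = 30, and the use of ''all'' nonempty subsets as the family, are illustrative assumptions) draws random weights many times and estimates how often the minimum-weight set is unique; the observed frequency should be at least 1 - n/N = 0.8.

<syntaxhighlight lang="python">
import random
from itertools import combinations

def unique_min_frequency(n=6, N=30, trials=2000, seed=0):
    """Estimate Pr[the minimum-weight set in F is unique], where F is the
    family of all nonempty subsets of {1,...,n} and every element gets an
    independent uniform weight in {1,...,N}."""
    rng = random.Random(seed)
    family = [frozenset(c) for r in range(1, n + 1)
              for c in combinations(range(1, n + 1), r)]
    successes = 0
    for _ in range(trials):
        w = {x: rng.randint(1, N) for x in range(1, n + 1)}
        set_weights = [sum(w[x] for x in S) for S in family]
        successes += set_weights.count(min(set_weights)) == 1
    return successes / trials

if __name__ == "__main__":
    freq = unique_min_frequency()
    print(f"empirical Pr[unique minimum] = {freq:.3f}, lemma bound = {1 - 6 / 30:.2f}")
</syntaxhighlight>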


Mulmuley, Vazirani, and Vazirani's proof

Suppose we have fixed the weights of all elements except an element ''x''. Then ''x'' has a ''threshold'' weight ''α'', such that if the weight ''w''(''x'') of ''x'' is greater than ''α'', then ''x'' is not contained in any minimum-weight subset, and if w(x) \le \alpha, then it is contained in some sets of minimum weight. Further, observe that if w(x) < \alpha, then ''every'' minimum-weight subset must contain ''x'' (since, when we decrease ''w''(''x'') from ''α'', sets that do not contain ''x'' do not decrease in weight, while those that contain ''x'' do). Thus, ambiguity about whether a minimum-weight subset contains ''x'' or not can happen only when the weight of ''x'' is exactly equal to its threshold; in this case we call ''x'' "singular". Now, as the threshold of ''x'' was defined only in terms of the weights of the ''other'' elements, it is independent of ''w''(''x''), and therefore, as ''w''(''x'') is chosen uniformly from \{1,\dots,N\},

:\Pr[x \text{ is singular}] = \Pr[w(x) = \alpha] \le 1/N,

and, by a union bound over the n elements, the probability that ''some'' ''x'' is singular is at most ''n/N''. As there is a unique minimum-weight subset iff no element is singular, the lemma follows.

Remark: The lemma holds with \le (rather than =) because it is possible that some ''x'' has no threshold value (i.e., ''x'' will not be in any minimum-weight subset even if ''w''(''x'') takes the minimum possible value, 1).
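To make the threshold behaviour concrete, the following sketch (Python; the family, the fixed weights, and the parameters are illustrative assumptions, not taken from the original paper) fixes the weights of every element except one, scans all candidate values of ''w''(''x''), and reports whether ''x'' lies in every, some, or no minimum-weight set. The output shows exactly the step pattern used in the proof: below the threshold ''x'' is in every minimum-weight set, above it in none, and only at the threshold itself is membership ambiguous.

<syntaxhighlight lang="python">
from itertools import combinations

def min_weight_sets(family, w):
    """Return all sets of the family attaining the minimum weight under w."""
    set_weights = {S: sum(w[y] for y in S) for S in family}
    m = min(set_weights.values())
    return [S for S, wt in set_weights.items() if wt == m]

# Illustrative instance: universe {1,...,5}, family of all 3-element subsets, N = 10.
n, N = 5, 10
family = [frozenset(c) for c in combinations(range(1, n + 1), 3)]
w = {2: 3, 3: 9, 4: 4, 5: 6}   # fixed weights of the other elements
x = 1                          # the element whose weight we vary

for value in range(1, N + 1):
    w[x] = value
    mins = min_weight_sets(family, w)
    if all(x in S for S in mins):
        status = "every"
    elif any(x in S for S in mins):
        status = "some but not all (x is singular)"
    else:
        status = "no"
    print(f"w(x) = {value:2d}: x is in {status} minimum-weight set(s)")
</syntaxhighlight>

With these fixed weights the threshold is α = 6: for w(x) ≤ 5 every minimum-weight set contains ''x'', for w(x) = 6 the minimum is tied between a set containing ''x'' and one avoiding it, and for w(x) ≥ 7 no minimum-weight set contains ''x''.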


Joel Spencer's proof

This is a restatement of the above proof, due to Joel Spencer (1995).

For any element ''x'' in the universe, define

:\alpha(x) = \min_{S \in \mathcal F,\, x \notin S} w(S) \;-\; \min_{S \in \mathcal F,\, x \in S} w(S \setminus \{x\}).

Observe that \alpha(x) depends only on the weights of elements other than ''x'', and not on ''w''(''x'') itself. So whatever the value of \alpha(x), as ''w''(''x'') is chosen uniformly from \{1,\dots,N\}, the probability that it is equal to \alpha(x) is at most 1/''N''. Thus the probability that w(x) = \alpha(x) for ''some'' ''x'' is at most ''n/N''.

Now if there are two sets ''A'' and ''B'' in \mathcal F with minimum weight, then, taking any ''x'' in A \setminus B, we have

:\begin{align} \alpha(x) &= \min_{S \in \mathcal F,\, x \notin S} w(S) - \min_{S \in \mathcal F,\, x \in S} w(S \setminus \{x\}) \\ &= w(B) - (w(A) - w(x)) \\ &= w(x), \end{align}

and as we have seen, this event happens with probability at most ''n/N''.
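As a sanity check on this argument, the sketch below (Python; the instance, with all nonempty subsets of a 5-element universe and a deliberately small ''N'' so that ties occur, is an assumption made for illustration) computes \alpha(x) directly from the definition and verifies the implication used above: whenever two different sets attain the minimum weight, every element ''x'' of A \setminus B satisfies w(x) = \alpha(x).

<syntaxhighlight lang="python">
import random
from itertools import combinations

def alpha(x, family, w):
    """Spencer's alpha(x): (minimum weight over sets avoiding x) minus
    (minimum weight of S without x, over sets containing x)."""
    avoid = [sum(w[y] for y in S) for S in family if x not in S]
    contain = [sum(w[y] for y in S) - w[x] for S in family if x in S]
    return min(avoid) - min(contain)

n, N, trials = 5, 8, 5000          # small N so that ties actually occur
universe = list(range(1, n + 1))
family = [frozenset(c) for r in range(1, n + 1)
          for c in combinations(universe, r)]
rng = random.Random(1)

ties = 0
for _ in range(trials):
    w = {x: rng.randint(1, N) for x in universe}
    m = min(sum(w[y] for y in S) for S in family)
    mins = [S for S in family if sum(w[y] for y in S) == m]
    if len(mins) >= 2:                      # the minimum weight is not unique
        ties += 1
        A, B = mins[0], mins[1]
        for x in A - B:                     # any element of A that is not in B
            assert w[x] == alpha(x, family, w)
print(f"{ties} of {trials} trials had a tie; in each, w(x) = alpha(x) held")
</syntaxhighlight>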


Examples/applications

* The original application was to minimum-weight (or maximum-weight) perfect matchings in a graph. Each edge is assigned a random weight in \{1,\dots,2m\}, where ''m'' is the number of edges, and \mathcal F is the set of perfect matchings, so that with probability at least 1/2 there exists a unique minimum-weight perfect matching. When each indeterminate x_{ij} in the Tutte matrix of the graph is replaced with 2^{w_{ij}}, where w_{ij} is the random weight of the edge \{i,j\}, we can show that the determinant of the matrix is nonzero, and further use this to find the matching (a numerical sketch of this substitution follows the list).
* More generally, the paper also observed that any search problem of the form "Given a set system (S, \mathcal F), find a set in \mathcal F" could be reduced to a decision problem of the form "Is there a set in \mathcal F with total weight at most ''k''?". For instance, it showed how to solve the following problem posed by Papadimitriou and Yannakakis, for which (as of the time the paper was written) no deterministic polynomial-time algorithm was known: given a graph and a subset of the edges marked as "red", find a perfect matching with exactly ''k'' red edges.
* The Valiant–Vazirani theorem, concerning unique solutions to NP-complete problems, has a simpler proof using the isolation lemma. This is proved by giving a randomized reduction from CLIQUE to UNIQUE-CLIQUE.
* The proof of the Valiant–Vazirani theorem has also been reused in a search-to-decision reduction for average-case complexity.
* Avi Wigderson used the isolation lemma in 1994 to give a randomized reduction from NL to UL, and thereby prove that NL/poly ⊆ ⊕L/poly. Reinhardt and Allender later used the isolation lemma again to prove that NL/poly = UL/poly.
* The book by Hemaspaandra and Ogihara has a chapter on the isolation technique, including generalizations.
* The isolation lemma has been proposed as the basis of a scheme for digital watermarking.
* There is ongoing work on derandomizing the isolation lemma in specific cases and on using it for identity testing.
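The matching application mentioned in the first item can be illustrated numerically. The sketch below (Python; the example graph K_4, the seed, and the exact-determinant routine are assumptions made for illustration) draws edge weights from \{1,\dots,2m\}, substitutes \pm 2^{w_{ij}} into the Tutte matrix, and, whenever the minimum-weight perfect matching happens to be unique with weight W, checks that the determinant is nonzero and that its largest power-of-two divisor is exactly 2^{2W}, which is the property Mulmuley, Vazirani, and Vazirani exploit to recover the matching.

<syntaxhighlight lang="python">
import random
from itertools import permutations

def det_exact(M):
    """Exact integer determinant via the Leibniz formula (fine for tiny matrices)."""
    size = len(M)
    total = 0
    for perm in permutations(range(size)):
        inversions = sum(1 for i in range(size) for j in range(i + 1, size)
                         if perm[i] > perm[j])
        sign = -1 if inversions % 2 else 1
        prod = 1
        for i in range(size):
            prod *= M[i][perm[i]]
        total += sign * prod
    return total

# Assumed example graph: K4 on vertices 0..3, with its three perfect matchings.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
matchings = [{(0, 1), (2, 3)}, {(0, 2), (1, 3)}, {(0, 3), (1, 2)}]
m = len(edges)

rng = random.Random(3)
w = {e: rng.randint(1, 2 * m) for e in edges}        # weights from {1,...,2m}

matching_weights = [sum(w[e] for e in M) for M in matchings]
W = min(matching_weights)
if matching_weights.count(W) == 1:                   # isolation succeeded
    # Tutte matrix with the substitution x_ij -> 2^{w_ij} (skew-symmetric).
    A = [[0] * 4 for _ in range(4)]
    for (i, j) in edges:
        A[i][j] = 2 ** w[(i, j)]
        A[j][i] = -A[i][j]
    d = det_exact(A)
    assert d != 0 and d % 2 ** (2 * W) == 0 and (d // 2 ** (2 * W)) % 2 == 1
    print(f"unique minimum matching of weight {W}: det = 2^{2 * W} * (odd), as claimed")
else:
    print("minimum-weight matching not unique for this draw; try another seed")
</syntaxhighlight>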




External links


* Favorite Theorems: Unique Witnesses, by Lance Fortnow
* The Isolation Lemma and Beyond, by Richard J. Lipton