Entropy compression

In mathematics and theoretical computer science, entropy compression is an information-theoretic method for proving that a random process terminates, originally used by Robin Moser to prove an algorithmic version of the Lovász local lemma.


Description

To use this method, one proves that the history of the given process can be recorded in an efficient way, such that the state of the process at any past time can be recovered from the current state and this record, and such that the amount of additional information recorded at each step of the process is (on average) less than the amount of new information randomly generated at each step. The resulting growing discrepancy in total information content can never exceed the fixed amount of information in the current state, from which it follows that the process must eventually terminate. This principle can be formalized and made rigorous using Kolmogorov complexity.
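As a schematic illustration (not part of the original sources), suppose each step of the process consumes at least b fresh random bits, the record grows by at most b' < b bits per step, and the current state together with the fixed bookkeeping fits in C bits. The counting argument can then be written out as follows:

    % Schematic bound under the assumptions stated above.
    % With high probability, T*b random bits cannot be described by fewer than
    % T*b - O(1) bits, yet the state plus the record describe them in at most
    % C + T*b' bits, so
    \[
        T b \;\le\; C + T b' + O(1)
        \qquad\Longrightarrow\qquad
        T \;\le\; \frac{C + O(1)}{b - b'} .
    \]
    % Hence the number of steps T is bounded and the process terminates.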


Example

An example given by both Fortnow and Tao concerns the Boolean satisfiability problem for Boolean formulas in conjunctive normal form with uniform clause size. These problems can be parameterized by two numbers (k,t), where k is the number of variables per clause and t is the maximum number of different clauses in which any variable can appear. If the variables are assigned true or false values uniformly at random, then the event that a clause is unsatisfied happens with probability 2^{-k}, and each such event is independent of all but r=k(t-1) other events. It follows from the Lovász local lemma that, if t is small enough to make r<2^k/e (where e is the base of the natural logarithm), then a solution always exists. The following algorithm can be shown, using entropy compression, to find such a solution for inputs that obey a constraint on r of a similar form:

* Choose a random truth assignment.
* While there exists an unsatisfied clause C, call a recursive subroutine fix with C as its argument. This subroutine chooses a new random truth assignment for the variables in C, and then recursively calls the same subroutine on all unsatisfied clauses (possibly including C itself) that share a variable with C, until none are left.

This algorithm cannot terminate unless the input formula is satisfiable, so a proof that it terminates is also a proof that a solution exists. Each iteration of the outer loop reduces the number of unsatisfied clauses (it causes C to become satisfied without making any other clause become unsatisfied), so the key question is whether the fix subroutine terminates or whether it can get into an infinite recursion.
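The following Python sketch of this procedure is an illustration only: the clause encoding (DIMACS-style signed integers), the function names, and the data structures are choices made here, not part of the sources, and no attempt is made at efficiency (deep inputs may exceed Python's recursion limit).

    import random

    def is_satisfied(clause, assignment):
        # A clause is a list of nonzero ints: +v for the variable x_v, -v for its
        # negation. It is satisfied when at least one literal evaluates to true.
        return any(assignment[abs(lit)] == (lit > 0) for lit in clause)

    def fix(clause, clauses, assignment):
        # Resample the variables of the unsatisfied clause with fresh random bits
        # (k bits per call, one per variable in the clause).
        for lit in clause:
            assignment[abs(lit)] = random.random() < 0.5
        # Recursively repair clauses sharing a variable with this one (possibly
        # this clause itself) until none of them is unsatisfied.
        vars_here = {abs(lit) for lit in clause}
        while True:
            broken = next((c for c in clauses
                           if vars_here & {abs(lit) for lit in c}
                           and not is_satisfied(c, assignment)), None)
            if broken is None:
                return
            fix(broken, clauses, assignment)

    def solve(clauses, num_vars):
        # Outer loop: start from a uniformly random assignment, then repair
        # some unsatisfied clause until every clause is satisfied.
        assignment = {v: random.random() < 0.5 for v in range(1, num_vars + 1)}
        while True:
            unsat = [c for c in clauses if not is_satisfied(c, assignment)]
            if not unsat:
                return assignment
            fix(unsat[0], clauses, assignment)

A call such as solve([[1, 2, 3], [-1, 2, 3], [1, -2, 3]], 3) returns a satisfying assignment for that small formula, although the termination guarantee discussed below only applies when r is small relative to 2^k.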
To answer this question, consider on the one hand the number of random bits generated in each iteration of the fix subroutine (k bits per clause) and, on the other hand, the number of bits needed to record the history of this algorithm in such a way that any past state can be recovered. To record this history, we may store the current truth assignment (n bits, where n is the number of variables), the sequence of arguments to the outermost calls to the fix subroutine (at most m\log_2 m bits, where m is the number of clauses in the input), and then a sequence of records that either indicate that a recursive call to fix returned, or indicate which of the r+1 clauses sharing a variable with the current clause was passed as an argument to a further recursive call. There are r+2 possible outcomes per record, so the number of bits needed to store a record is \log_2(r+2), using a compact encoding method such as arithmetic coding that allows storage with a fractional number of bits per record.

This information is enough to recover the entire sequence of calls to fix, including the identity of the clause given as an argument to each call: progress forward through the call sequence, using the stored outermost call arguments and the stored records to infer the argument of each call in turn. Once the sequence of recursive calls and their arguments has been recovered, the truth assignment at each stage of the process can also be recovered from the same information. To do so, start from the final truth assignment and progress backwards through the sequence of randomly reassigned clauses, using the fact that each clause was unsatisfied immediately before its reassignment to uniquely determine the values of all of its variables prior to that reassignment.

Thus, after f calls to fix, the algorithm will have generated fk random bits, but its entire history (including those generated bits) can be recovered from a record that uses only m\log_2 m+n+f\log_2(r+2) bits. This is only possible when the number of stored bits is at least as large as the number of randomly generated bits. It follows that, when r is small enough to make \log_2(r+2)<k, that is, when r<2^k-2, the fix subroutine can only perform O(m\log_2 m+n) recursive calls over the course of the whole algorithm.
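One way to write out this final step of the counting, with additive constants suppressed, is the following:

    % After f calls to fix, the f*k consumed random bits must be recoverable
    % from the stored history described above, so (ignoring additive constants)
    \[
        f k \;\le\; m\log_2 m + n + f\log_2(r+2),
    \]
    % and whenever \log_2(r+2) < k this rearranges to the claimed bound
    \[
        f \;\le\; \frac{m\log_2 m + n}{k - \log_2(r+2)} \;=\; O(m\log_2 m + n).
    \]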


History

The name "entropy compression" was given to this method in a blog posting by
Terence Tao Terence Chi-Shen Tao (; born 17 July 1975) is an Australian-American mathematician, Fields medalist, and professor of mathematics at the University of California, Los Angeles (UCLA), where he holds the James and Carol Collins Chair in the Co ...
. and has since been used for it by other researchers.. Moser's original version of the
algorithmic Lovász local lemma Algorithmic may refer to: *Algorithm, step-by-step instructions for a calculation **Algorithmic art, art made by an algorithm **Algorithmic composition, music made by an algorithm **Algorithmic trading, trading decisions made by an algorithm **Algo ...
, using this method, achieved weaker bounds than the original
Lovász local lemma In probability theory, if a large number of events are all independent of one another and each has probability less than 1, then there is a positive (possibly small) probability that none of the events will occur. The Lovász local lemma allows a s ...
, which was originally formulated as an
existence theorem In mathematics, an existence theorem is a theorem which asserts the existence of a certain object. It might be a statement which begins with the phrase " there exist(s)", or it might be a universal statement whose last quantifier is existential ...
without a constructive method for finding the object whose existence it proves. Later, Moser and Gábor Tardos used the same method to prove a version of the algorithmic Lovász local lemma that matches the bounds of the original lemma.. Since the discovery of the entropy compression method, it has also been used to achieve stronger bounds for some problems than would be given by the Lovász local lemma. For example, for the problem of acyclic
edge coloring In graph theory, a proper edge coloring of a graph is an assignment of "colors" to the edges of the graph so that no two incident edges have the same color. For example, the figure to the right shows an edge coloring of a graph by the colors red ...
of graphs with maximum degree \Delta, it was first shown using the local lemma that there always exists a coloring with 64\Delta colors, and later using a stronger version of the local lemma this was improved to 9.62\Delta. However, a more direct argument using entropy compression shows that there exists a coloring using only 4\Delta-4 colors, and moreover this coloring can be found in randomized polynomial time.

