Exponential time hypothesis
In computational complexity theory, the exponential time hypothesis is an unproven computational hardness assumption that was formulated by Impagliazzo and Paturi (1999). It states that satisfiability of 3-CNF Boolean formulas cannot be solved in subexponential time, 2^{o(n)}. More precisely, the usual form of the hypothesis asserts the existence of a number s_3>0 such that all algorithms that correctly solve this problem require time at least 2^{s_3 n}. The exponential time hypothesis, if true, would imply that P ≠ NP, but it is a stronger statement. It implies that many computational problems are equivalent in complexity, in the sense that if one of them has a subexponential time algorithm then they all do, and that many known algorithms for these problems have optimal or near-optimal time complexity.


Definition

The k-SAT problem is a version of the Boolean satisfiability problem in which the input to the problem is a Boolean expression in conjunctive normal form (that is, an ''and'' of ''ors'' of variables and their negations) with at most k variables per clause. The goal is to determine whether this expression can be made to be true by some assignment of Boolean values to its variables. 2-SAT has a linear time algorithm, but all known algorithms for larger k take exponential time, with the base of the exponential function depending on k. For instance, the WalkSAT probabilistic algorithm can solve k-SAT in average time \left(2-\frac{2}{k}\right)^n n^{O(1)}, where n is the number of variables in the given k-SAT instance.
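
To make the random walk concrete, here is a minimal Python sketch of a WalkSAT-style search, under some illustrative assumptions: clauses are lists of signed integers in the DIMACS convention (a literal v>0 means variable v is true, v<0 means it is false), and the flip budget and restart count are arbitrary demonstration values, not the parameters used in the formal \left(2-\frac{2}{k}\right)^n analysis.

<syntaxhighlight lang="python">
import random

def walksat(clauses, n, max_flips=10000, restarts=100):
    """Randomized local search for k-SAT (WalkSAT-style sketch)."""
    def satisfied(clause, assign):
        # A clause is satisfied if some literal agrees with the assignment.
        return any((lit > 0) == assign[abs(lit)] for lit in clause)

    for _ in range(restarts):
        # Each restart begins from a uniformly random assignment.
        assign = {v: random.random() < 0.5 for v in range(1, n + 1)}
        for _ in range(max_flips):
            unsat = [c for c in clauses if not satisfied(c, assign)]
            if not unsat:
                return assign  # satisfying assignment found
            # Pick an unsatisfied clause and flip one of its variables.
            lit = random.choice(random.choice(unsat))
            assign[abs(lit)] = not assign[abs(lit)]
    return None  # gave up; one-sided error, instance may be unsatisfiable

# Example: (x1 or x2 or not x3) and (not x1 or x3) and (x2 or x3)
print(walksat([[1, 2, -3], [-1, 3], [2, 3]], n=3))
</syntaxhighlight>

Because any returned assignment is verified against every clause first, the algorithm errs only by reporting "none found" on a satisfiable instance.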
For each integer k\ge 2, define s_k to be the smallest number \delta such that k-SAT can be solved in time 2^{\delta n}. This minimum might not exist, if a sequence of better and better algorithms have correspondingly smaller exponential growth in their time bounds; in that case, define s_k to be the infimum of the real numbers \delta for which k-SAT can be solved in time 2^{\delta n}. Because problems with larger k cannot be easier, these numbers are ordered as s_2\le s_3\le s_4\le\cdots, and because of WalkSAT they are at most s_k\le \log_2 \left(2-\frac{2}{k}\right)<1. The exponential time hypothesis is the conjecture that they are all nonzero, or equivalently, that the smallest of them, s_3, is nonzero.

Some sources define the exponential time hypothesis to be the slightly weaker statement that 3-SAT cannot be solved in time 2^{o(n)}. If there existed an algorithm to solve 3-SAT in time 2^{o(n)}, then s_3 would equal zero. However, it is consistent with current knowledge that there could be a sequence of 3-SAT algorithms, each with running time O(2^{\delta_i n}) for a sequence of numbers \delta_i tending towards zero, but where the descriptions of these algorithms are so quickly growing that a single algorithm could not automatically select and run the most appropriate one. If this were to be the case, then s_3 would equal zero even though there would be no single algorithm running in time 2^{o(n)}. A related variant of the exponential time hypothesis is the non-uniform exponential time hypothesis, which posits that there is no family of algorithms (one for each length of the input, in the spirit of advice) that can solve 3-SAT in time 2^{o(n)}.

Because the numbers s_3,s_4,\dots form a monotonic sequence that is bounded above by one, they must converge to a limit s_\infty=\lim_{k\to\infty} s_k. The strong exponential time hypothesis (SETH) is the conjecture that s_\infty=1.
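
Collecting the definitions above into displayed form, using only the notation already introduced:

s_k \;=\; \inf\bigl\{\,\delta\ge 0 : k\text{-SAT can be solved in time } 2^{\delta n}\,\bigr\}, \qquad s_2\le s_3\le s_4\le\cdots<1,

\text{ETH: } s_3>0, \qquad \text{SETH: } s_\infty=\lim_{k\to\infty}s_k=1.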


Implications


Satisfiability

It is not possible for s_k to equal s_\infty for any finite k: as Impagliazzo, Paturi, and Zane showed, there exists a constant \alpha such that s_k\le s_\infty(1-\alpha/k). Therefore, if the exponential time hypothesis is true, there must be infinitely many values of k for which s_k differs from s_\infty.

An important tool in this area is the sparsification lemma of Impagliazzo, Paturi, and Zane, which shows that, for any \varepsilon>0, any k-CNF formula can be replaced by O(2^{\varepsilon n}) simpler k-CNF formulas in which each variable appears only a constant number of times, and therefore in which the number of clauses is linear. The sparsification lemma is proven by repeatedly finding large sets of clauses that have a nonempty common intersection in a given formula, and replacing the formula by two simpler formulas, one of which has each of these clauses replaced by their common intersection and the other of which has the intersection removed from each clause. By applying the sparsification lemma and then using new variables to split the clauses, one may then obtain a set of O(2^{\varepsilon n}) 3-CNF formulas, each with a linear number of variables, such that the original k-CNF formula is satisfiable if and only if at least one of these 3-CNF formulas is satisfiable. Therefore, if 3-SAT could be solved in subexponential time, one could use this reduction to solve k-SAT in subexponential time as well. Equivalently, if s_k>0 for any k\ge 3, then s_3>0 as well, and the exponential time hypothesis would be true.

The limiting value s_\infty of the sequence of numbers s_k is at most equal to s_{\mathrm{CNF}}, where s_{\mathrm{CNF}} is the infimum of the numbers \delta such that satisfiability of conjunctive normal form formulas without clause length limits can be solved in time 2^{\delta n}. Therefore, if the strong exponential time hypothesis is true, then there would be no algorithm for general CNF satisfiability that is significantly faster than a brute-force search over all possible truth assignments. However, if the strong exponential time hypothesis fails, it would still be possible for s_{\mathrm{CNF}} to equal one.
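
As a concrete baseline, the brute-force search mentioned above can be written in a few lines of Python (same illustrative clause encoding as the earlier sketch); it inspects all 2^n truth assignments, which is the running time that the strong exponential time hypothesis asserts cannot be substantially improved for general CNF formulas.

<syntaxhighlight lang="python">
def brute_force_sat(clauses, n):
    """Decide CNF satisfiability by trying all 2**n assignments."""
    for mask in range(1 << n):
        # Bit v-1 of mask is the truth value of variable v.
        if all(any((lit > 0) == bool(mask >> (abs(lit) - 1) & 1)
                   for lit in clause)
               for clause in clauses):
            return True
    return False

print(brute_force_sat([[1, -2], [2]], n=2))   # True: x1 = x2 = true works
print(brute_force_sat([[1], [-1]], n=1))      # False: contradictory clauses
</syntaxhighlight>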


Other search problems

The exponential time hypothesis implies that many other problems in the complexity class SNP do not have algorithms whose running time is faster than c^n for some constant c>1. These problems include graph k-colorability, finding Hamiltonian cycles, maximum cliques, maximum independent sets, and vertex cover on n-vertex graphs. Conversely, if any of these problems had a subexponential algorithm, then the exponential time hypothesis could be shown to be false. If cliques or independent sets of logarithmic size could be found in polynomial time, the exponential time hypothesis would be false. Therefore, even though finding cliques or independent sets of such small size is unlikely to be NP-complete, the exponential time hypothesis implies that these problems are non-polynomial. More generally, the exponential time hypothesis implies that it is not possible to find cliques or independent sets of size k in time n^{o(k)}. The exponential time hypothesis also implies that it is not possible to solve the k-SUM problem (given n real numbers, find k of them that add to zero) in time n^{o(k)}, as illustrated by the sketch below. The strong exponential time hypothesis implies that it is not possible to find k-vertex dominating sets more quickly than in time n^{k-\varepsilon}, for any \varepsilon>0.
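
For illustration, the obvious exhaustive algorithm for k-SUM in Python tries all \binom{n}{k}=n^{O(k)} subsets; the exponential time hypothesis says that no algorithm improves the exponent to o(k), while meet-in-the-middle lowers it to roughly \lceil k/2\rceil, consistent with that bound. The input format here is an assumption of the sketch.

<syntaxhighlight lang="python">
from itertools import combinations

def k_sum(numbers, k, target=0):
    """Return k of the given numbers summing to target, or None.

    Exhaustive search over all C(n, k) subsets, i.e. time n^{O(k)}.
    """
    for combo in combinations(numbers, k):
        if sum(combo) == target:
            return combo
    return None

print(k_sum([8, -3, 5, 1, -6], 3))  # (5, 1, -6) sums to zero
</syntaxhighlight>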
The exponential time hypothesis also implies that the weighted feedback arc set problem on tournaments does not have a parametrized algorithm with running time O(2^{o(\sqrt{\mathrm{OPT}})}n^{O(1)}). It does, however, have a parameterized algorithm with running time O(2^{O(\sqrt{\mathrm{OPT}}\log \mathrm{OPT})}n^{O(1)}).

The strong exponential time hypothesis leads to tight bounds on the parameterized complexity of several graph problems on graphs of bounded treewidth. In particular, if the strong exponential time hypothesis is true, then the optimal time bound for finding independent sets on graphs of treewidth w is 2^w n^{O(1)}, the optimal time for the dominating set problem is 3^w n^{O(1)}, the optimum time for maximum cut is 2^w n^{O(1)}, and the optimum time for k-coloring is k^w n^{O(1)}. Equivalently, any improvement on these running times would falsify the strong exponential time hypothesis. The exponential time hypothesis also implies that any fixed-parameter tractable algorithm for edge clique cover must have double exponential dependence on the parameter.
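
These treewidth bounds concern dynamic programming over a tree decomposition, which keeps a table with one entry per way a bag's vertices can interact with a partial solution (2^w entries for independent set, 3^w for dominating set, and so on). As a minimal illustration of that DP shape, here is the classic two-states-per-vertex program for maximum independent set on a tree, the treewidth-one case; the adjacency-list input is an assumed format, and this sketch is not the general tree-decomposition algorithm.

<syntaxhighlight lang="python">
import sys

def max_independent_set_tree(adj, root=0):
    """Maximum independent set size on a tree, via a two-state DP.

    adj: adjacency lists of a tree on vertices 0..n-1. Each vertex
    contributes two states (excluded, included), the treewidth-one
    analogue of the 2^w states per bag in the general algorithm.
    """
    sys.setrecursionlimit(10**6)

    def dp(v, parent):
        excluded, included = 0, 1
        for u in adj[v]:
            if u == parent:
                continue
            ex_u, in_u = dp(u, v)
            excluded += max(ex_u, in_u)  # child is free to be in or out
            included += ex_u             # neighbors of an included vertex stay out
        return excluded, included

    return max(dp(root, -1))

# Path 0-1-2-3: the maximum independent sets {0, 2} and {1, 3} have size 2.
print(max_independent_set_tree([[1], [0, 2], [1, 3], [2]]))
</syntaxhighlight>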


Communication complexity

In the three-party set disjointness problem in communication complexity, three subsets of the integers in some range [m] are specified, and three communicating parties each know two of the three subsets. The goal is for the parties to transmit as few bits to each other as possible on a shared communications channel, in order for one of the parties to be able to determine whether the intersection of the three sets is empty or nonempty. A trivial communications protocol would be for one of the three parties to transmit a bitvector describing the intersection of the two sets known to that party, after which either of the two remaining parties can determine the emptiness of the intersection. However, if there exists a protocol that solves the problem with a sublinear amount of communication and a subexponential amount of computation, it could be transformed into an algorithm for solving k-SAT in time O(2^{cn}) for any fixed c<1, violating the strong exponential time hypothesis. Therefore, the strong exponential time hypothesis implies either that the trivial protocol for three-party set disjointness is optimal, or that any better protocol requires an exponential amount of computation.
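
A small Python simulation of the trivial protocol makes the m-bit cost explicit; the representation (universe range(m), sets as Python sets) is an assumption for illustration.

<syntaxhighlight lang="python">
def trivial_disjointness(A, B, C, m):
    """Trivial three-party protocol for set disjointness over range(m).

    The party who knows A and B broadcasts the m-bit vector of A & B;
    any party who knows C then decides whether A & B & C is empty.
    Total communication: m bits.
    """
    message = [(i in A) and (i in B) for i in range(m)]  # the m-bit broadcast
    # A receiver who knows C checks the broadcast bits on its own set:
    return not any(message[i] for i in C)

print(trivial_disjointness({1, 2, 5}, {2, 5, 7}, {0, 5}, m=8))  # False: 5 is in all three
print(trivial_disjointness({1, 2}, {2, 3}, {0, 4}, m=5))        # True: empty intersection
</syntaxhighlight>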


Structural complexity

If the exponential time hypothesis is true, then 3-SAT would not have a polynomial time algorithm, and therefore it would follow that P ≠ NP. More strongly, in this case, 3-SAT could not even have a quasi-polynomial time algorithm, so NP could not be a subset of QP. However, if the exponential time hypothesis fails, it would have no implication for the P versus NP problem. A padding argument proves the existence of NP-complete problems for which the best known running times have the form O(2^{\sqrt n}); if the best possible running time for 3-SAT were of this form, then P would be unequal to NP (because 3-SAT is NP-complete and this time bound is not polynomial) but the exponential time hypothesis would be false.
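
A sketch of the padding argument: pad each 3-SAT instance on n variables out to size N=n^2 by appending dummy symbols; the padded language is still NP-complete. If (hypothetically) time 2^{\Theta(n)} were optimal for 3-SAT, then the optimal time for the padded problem, measured in its own input size N, would be

2^{\Theta(n)} \;=\; 2^{\Theta(\sqrt N)} \;=\; 2^{o(N)},

which is superpolynomial in N (so the padded problem still witnesses P ≠ NP) yet subexponential, which is how running times of the form O(2^{\sqrt n}) arise.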
In parameterized complexity theory, because the exponential time hypothesis implies that there does not exist a fixed-parameter-tractable algorithm for maximum clique, it also implies that W[1] ≠ FPT. It is an important open problem in this area whether this implication can be reversed: does W[1] ≠ FPT imply the exponential time hypothesis? There is a hierarchy of parameterized complexity classes called the M-hierarchy that interleaves the W-hierarchy in the sense that, for all i, M[i] \subseteq W[i] \subseteq M[i+1]; for instance, the problem of finding a vertex cover of size k\log n in an n-vertex graph with parameter k is complete for M[1]. The exponential time hypothesis is equivalent to the statement that M[1] ≠ FPT, and the question of whether W[i] \subseteq M[i] for i>1 is also open.

It is also possible to prove implications in the other direction, from the failure of a variation of the strong exponential time hypothesis to separations of complexity classes. As Williams shows, if there exists an algorithm A that solves Boolean circuit satisfiability in time 2^n/f(n) for some superpolynomially growing function f, then NEXPTIME is not a subset of P/poly. Williams shows that, if algorithm A exists, and a family of circuits simulating NEXPTIME in P/poly also existed, then algorithm A could be composed with the circuits to simulate NEXPTIME problems nondeterministically in a smaller amount of time, violating the time hierarchy theorem. Therefore, the existence of algorithm A proves the nonexistence of the family of circuits and the separation of these two complexity classes.


See also

* Savitch's theorem, showing that a similar exponential gap cannot hold for space complexity

