SNP (complexity)
In computational complexity theory, SNP (from Strict NP) is a complexity class containing a limited subset of NP, based on its logical characterization in terms of graph-theoretical properties. It forms the basis for the definition of the class MaxSNP of optimization problems. It is defined as the class of problems that are properties of relational structures (such as graphs) expressible by a second-order logic formula of the following form:

\exists S_1 \dots \exists S_\ell \, \forall v_1 \dots \forall v_m \, \phi(R_1,\dots,R_k,S_1,\dots,S_\ell,v_1,\dots,v_m)

where R_1,\dots,R_k are relations of the structure (such as the adjacency relation, for a graph), S_1,\dots,S_\ell are unknown relations (sets of tuples of vertices), and \phi is a quantifier-free formula: any Boolean combination of the relations. That is, only existential second-order quantification (over relations) is allowed, and only universal first-order quantification (over vertices) is allowed. If existential quantific ...
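
As a small illustration of this template (not part of the source excerpt), 2-colorability of a graph with adjacency relation E is expressible in SNP: guess one vertex set S and check every pair of vertices with a quantifier-free condition.

% Illustrative SNP formula for 2-colorability; E is the adjacency relation of
% the input graph and S is the guessed color class (its complement is the
% second color). One existential second-order quantifier, two universal
% first-order quantifiers, quantifier-free matrix.
\exists S \, \forall u \, \forall v \, \bigl( \neg E(u,v) \,\lor\, \neg\bigl( S(u) \leftrightarrow S(v) \bigr) \bigr)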



Computational Complexity Theory
In theoretical computer science and mathematics, computational complexity theory focuses on classifying computational problems according to their resource usage, and explores the relationships between these classifications. A computational problem is a task solved by a computer. A computational problem is solvable by mechanical application of mathematical steps, such as an algorithm. A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition by introducing mathematical models of computation to study these problems and quantifying their computational complexity, i.e., the amount of resources needed to solve them, such as time and storage. Other measures of complexity are also used, such as the amount of communication (used in communication complexity), the number of logic gates in a circuit (used in circuit complexity) and the number of processors (used in parallel computing). O ...



Boolean Satisfiability Problem
In logic and computer science, the Boolean satisfiability problem (sometimes called the propositional satisfiability problem and abbreviated SATISFIABILITY, SAT or B-SAT) asks whether there exists an interpretation that satisfies a given Boolean formula. In other words, it asks whether the formula's variables can be consistently replaced by the values TRUE or FALSE to make the formula evaluate to TRUE. If this is the case, the formula is called ''satisfiable'', else ''unsatisfiable''. For example, the formula "''a'' AND NOT ''b''" is satisfiable because one can find the values ''a'' = TRUE and ''b'' = FALSE, which make (''a'' AND NOT ''b'') = TRUE. In contrast, "''a'' AND NOT ''a''" is unsatisfiable. SAT is the first problem that was proven to be NP-complete; this is the Cook–Levin theorem. This means that all problems in the complexity class NP, which includes a wide range of natu ...
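
As a minimal sketch of the problem statement (illustrative only; the function names and the encoding of formulas as Python callables are assumptions, and practical solvers use DPLL/CDCL rather than enumeration), a brute-force satisfiability check can simply try every assignment:

# Brute-force SAT sketch: try every TRUE/FALSE assignment.
from itertools import product

def is_satisfiable(formula, variables):
    """Return a satisfying assignment of `variables` for `formula`, or None.

    `formula` is assumed to be a callable mapping a dict {variable: bool}
    to a bool.
    """
    for values in product([True, False], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if formula(assignment):
            return assignment
    return None

# "a AND NOT b" is satisfiable (a = TRUE, b = FALSE); "a AND NOT a" is not.
print(is_satisfiable(lambda v: v["a"] and not v["b"], ["a", "b"]))  # {'a': True, 'b': False}
print(is_satisfiable(lambda v: v["a"] and not v["a"], ["a"]))       # None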


PCP Theorem
In computational complexity theory, the PCP theorem (also known as the PCP characterization theorem) states that every decision problem in the NP complexity class has probabilistically checkable proofs (proofs that can be checked by a randomized algorithm) of constant query complexity and logarithmic randomness complexity (uses a logarithmic number of random bits). The PCP theorem says that for some universal constant K, for every n, any mathematical proof for a statement of length n can be rewritten as a different proof of length \operatorname{poly}(n) that is formally verifiable with 99% accuracy by a randomized algorithm that inspects only K letters of that proof. The PCP theorem is the cornerstone of the theory of computational hardness of approximation, which investigates the inherent difficulty in designing efficient approximation algorithms for various optimization problems. It has been described by Ingo Wegener as "the most important result in complexity theory since C ...
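
The verifier's access pattern can be sketched abstractly (a hedged illustration, not the actual PCP construction: the proof encoding and the local predicate `local_test` supplied by the construction are assumed, and K = 3 is an arbitrary choice):

# Schematic PCP-style verifier: reads only K symbols of the proof per run,
# using O(log n) random bits per run to choose which positions to read.
import random

K = 3  # constant query complexity (value chosen only for illustration)

def verify(proof, local_test, runs=100):
    """Spot-check `proof` by sampling K positions per run.

    `local_test` is a hypothetical predicate from the PCP construction: it
    accepts or rejects based on the sampled positions and their symbols.
    """
    n = len(proof)
    for _ in range(runs):
        positions = [random.randrange(n) for _ in range(K)]
        if not local_test(positions, [proof[i] for i in positions]):
            return False  # reject: a local inconsistency was detected
    return True  # accept with high confidence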


Approximation-preserving Reduction
In computability theory and computational complexity theory, especially the study of approximation algorithms, an approximation-preserving reduction is an algorithm for transforming one optimization problem into another problem, such that the distance of solutions from optimal is preserved to some degree. Approximation-preserving reductions are a subset of more general reductions in complexity theory; the difference is that approximation-preserving reductions usually make statements on approximation problems or optimization problems, as opposed to decision problems. Intuitively, problem A is reducible to problem B via an approximation-preserving reduction if, given an instance of problem A and a (possibly approximate) solver for problem B, one can convert the instance of problem A into an instance of problem B, apply the solver for problem B, and recover a solution for problem A that also has some guarantee of approximation. Definition Unlike reductions on decision problems, a ...
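
The overall shape of such a reduction can be sketched as follows (illustrative only; `instance_map` and `solution_map` are hypothetical placeholders for the problem-specific maps, and the approximation guarantee carried by `solution_map` depends on the particular reduction):

# Shape of an approximation-preserving reduction from problem A to problem B.
def solve_A_via_B(instance_A, solver_B, instance_map, solution_map):
    """Solve an A-instance using an (approximate) solver for B.

    instance_map: transforms an A-instance into a B-instance.
    solution_map: converts a B-solution back into an A-solution while
                  preserving (some function of) its approximation guarantee.
    """
    instance_B = instance_map(instance_A)
    solution_B = solver_B(instance_B)
    return solution_map(instance_A, solution_B)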


PTAS Reduction
In computational complexity theory, a PTAS reduction is an approximation-preserving reduction that is often used to perform reductions between solutions to optimization problems. It preserves the property that a problem has a polynomial time approximation scheme (PTAS) and is used to define completeness for certain classes of optimization problems such as APX. Notationally, if there is a PTAS reduction from a problem A to a problem B, we write A \leq_{\text{PTAS}} B. With ordinary polynomial-time many-one reductions, if we can describe a reduction from a problem A to a problem B, then any polynomial-time solution for B can be composed with that reduction to obtain a polynomial-time solution for the problem A. Similarly, our goal in defining PTAS reductions is that, given a PTAS reduction from an optimization problem A to a problem B, a PTAS for B can be composed with the reduction to obtain a PTAS for the problem A (sketched below). Definition Formally, we define a PTAS reduction from A to B using ...
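
The composition mentioned above can be sketched as follows (a hedged outline, not the formal definition from the truncated source; f, g and alpha stand for the usual components of a PTAS reduction, and ptas_B for a PTAS for problem B):

# Composing a PTAS reduction (f, g, alpha) from A to B with a PTAS for B.
def ptas_A(instance_A, eps, f, g, alpha, ptas_B):
    """Return a solution for the A-instance within a factor (1 + eps) of optimal."""
    instance_B = f(instance_A)                    # translate the instance
    solution_B = ptas_B(instance_B, alpha(eps))   # solve B within (1 + alpha(eps))
    return g(instance_A, solution_B, eps)         # map back, within (1 + eps)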




Approximation Algorithm
In computer science and operations research, approximation algorithms are efficient algorithms that find approximate solutions to optimization problems (in particular NP-hard problems) with provable guarantees on the distance of the returned solution to the optimal one. Approximation algorithms naturally arise in the field of theoretical computer science as a consequence of the widely believed P ≠ NP conjecture. Under this conjecture, a wide class of optimization problems cannot be solved exactly in polynomial time. The field of approximation algorithms, therefore, tries to understand how closely it is possible to approximate optimal solutions to such problems in polynomial time. In an overwhelming majority of the cases, the guarantee of such algorithms is a multiplicative one expressed as an approximation ratio or approximation factor i.e., the optimal solution is always guaranteed to be within a (predetermined) multiplicative factor of the returned solution. However, there a ...
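
A classic textbook example of such a guarantee (not taken from the excerpt) is the maximal-matching 2-approximation for minimum vertex cover; the function name and edge-list representation are illustrative:

# 2-approximation for minimum vertex cover: pick an uncovered edge, add both
# endpoints. The picked edges are pairwise disjoint, and any optimal cover
# must contain at least one endpoint of each, so |cover| <= 2 * OPT.
def vertex_cover_2approx(edges):
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

# Path 1-2-3-4: an optimal cover is {2, 3}; the output has at most 4 vertices.
print(vertex_cover_2approx([(1, 2), (2, 3), (3, 4)]))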


Complete (complexity)
In computational complexity theory, a computational problem is complete for a complexity class if it is, in a technical sense, among the "hardest" (or "most expressive") problems in the complexity class. More formally, a problem ''p'' is called hard for a complexity class ''C'' under a given type of reduction if there exists a reduction (of the given type) from any problem in ''C'' to ''p''. If a problem is both hard for the class and a member of the class, it is complete for that class (for that type of reduction). A problem that is complete for a class ''C'' is said to be C-complete, and the class of all problems complete for ''C'' is denoted C-complete. The first complete class to be defined and the most well known is NP-complete, a class that contains many difficult-to-solve problems that arise in practice. Similarly, a problem hard for a class ''C'' is called C-hard, e.g. NP-hard. Normally, it is assumed that the reduction in question does not have higher computational com ...


MAX-3SAT
MAX-3SAT is a problem in the computational complexity subfield of computer science. It generalises the Boolean satisfiability problem (SAT), which is a decision problem considered in complexity theory. It is defined as: ''Given a 3-CNF formula Φ (i.e. with at most 3 literals per clause), find an assignment that satisfies the largest number of clauses.'' MAX-3SAT is a canonical complete problem for the complexity class MAXSNP (shown complete in Papadimitriou pg. 314). Approximability The decision version of MAX-3SAT is NP-complete. Therefore, a polynomial-time exact solution can only be achieved if P = NP. An approximation within a factor of 2 can be achieved with this simple algorithm, however:
* Output the solution in which most clauses are satisfied, when either all variables = TRUE or all variables = FALSE.
* Every clause is satisfied by one of the two solutions; therefore one solution satisfies at least half of the clauses.
The Karloff-Zwick algorithm runs in polynomial ...
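
The two-assignment procedure above can be written out directly (illustrative sketch; the clause representation, positive integers for variables and negative integers for negated variables, is an assumption):

# Factor-2 approximation for MAX-SAT: return the better of the all-TRUE and
# all-FALSE assignments. Every nonempty clause is satisfied by at least one
# of the two, so the better one satisfies at least half of the clauses.
def count_satisfied(clauses, value):
    """Count clauses satisfied when every variable is set to `value`."""
    return sum(any((lit > 0) == value for lit in clause) for clause in clauses)

def half_approx_max_sat(clauses):
    all_true = count_satisfied(clauses, True)
    all_false = count_satisfied(clauses, False)
    return (True, all_true) if all_true >= all_false else (False, all_false)

# Clauses (x1 v x2 v ~x3), (~x1 v ~x2), (x3): either assignment satisfies 2 of 3.
print(half_approx_max_sat([[1, 2, -3], [-1, -2], [3]]))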


L-reduction
In computer science, particularly the study of approximation algorithms, an L-reduction ("''linear reduction''") is a transformation of optimization problems which linearly preserves approximability features; it is one type of approximation-preserving reduction. L-reductions in studies of approximability of optimization problems play a similar role to that of polynomial reductions in the studies of computational complexity of decision problems. The term ''L reduction'' is sometimes used to refer to log-space reductions, by analogy with the complexity class L, but this is a different concept. Definition Let A and B be optimization problems and c_A and c_B their ...
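
Since the excerpt breaks off where the definition begins, the usual pair of conditions is restated below as a hedged summary rather than a continuation of the source text: an L-reduction from A to B consists of polynomial-time computable maps f (on instances) and g (on solutions) together with constants \alpha, \beta > 0 such that, for every instance x of A and every feasible solution y of f(x),

% The two usual L-reduction conditions, with c_A, c_B the cost functions and
% OPT_A, OPT_B the optimal values of problems A and B.
\mathrm{OPT}_B\bigl(f(x)\bigr) \le \alpha \cdot \mathrm{OPT}_A(x)
\bigl|\, \mathrm{OPT}_A(x) - c_A\bigl(g(x,y)\bigr) \,\bigr| \le \beta \cdot \bigl|\, \mathrm{OPT}_B\bigl(f(x)\bigr) - c_B(y) \,\bigr|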




Conjunctive Normal Form
In Boolean algebra, a formula is in conjunctive normal form (CNF) or clausal normal form if it is a conjunction of one or more clauses, where a clause is a disjunction of literals; otherwise put, it is a product of sums or an AND of ORs. In automated theorem proving, the notion "''clausal normal form''" is often used in a narrower sense, meaning a particular representation of a CNF formula as a set of sets of literals. Definition A logical formula is considered to be in CNF if it is a conjunction of one or more disjunctions of one or more literals. As in disjunctive normal form (DNF), the only propositional operators in CNF are or (\vee), and (\land), and not (\neg). The ''not'' operator can only be used as part of a literal, which means that it can only precede a propositional variable. The following is a context-free grammar for CNF:

''CNF'' \to ''Disjunct'' \mid ''Disjunct'' \land ''CNF''
''Disjunct'' \to ''Literal'' \mid ''Literal'' ...
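
In code, the "AND of ORs" reading is direct (an illustrative sketch; the representation of a clause as a list of (variable, is_positive) pairs is an assumption):

# Evaluate a CNF formula: it is TRUE iff every clause contains a true literal.
def evaluate_cnf(clauses, assignment):
    return all(
        any(assignment[var] == positive for var, positive in clause)
        for clause in clauses
    )

# (a OR NOT b) AND (b OR c)
cnf = [[("a", True), ("b", False)], [("b", True), ("c", True)]]
print(evaluate_cnf(cnf, {"a": True, "b": True, "c": False}))   # True
print(evaluate_cnf(cnf, {"a": False, "b": True, "c": False}))  # False (first clause fails)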



Graph Coloring
In graph theory, graph coloring is a methodic assignment of labels traditionally called "colors" to elements of a graph. The assignment is subject to certain constraints, such as that no two adjacent elements have the same color. Graph coloring is a special case of graph labeling. In its simplest form, it is a way of coloring the vertices of a graph such that no two adjacent vertices are of the same color; this is called a vertex coloring. Similarly, an ''edge coloring'' assigns a color to each edge so that no two adjacent edges are of the same color, and a face coloring of a planar graph assigns a color to each face (or region) so that no two faces that share a boundary have the same color. Vertex coloring is often used to introduce graph coloring problems, since other coloring problems can be transformed into a vertex coloring instance. For example, an edge coloring of a graph is just ...
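
A simple greedy procedure gives a proper vertex coloring, though generally not one with the fewest colors (illustrative sketch; the adjacency-dictionary representation is an assumption):

# Greedy vertex coloring: give each vertex the smallest color not already
# used by one of its colored neighbors.
def greedy_coloring(adjacency):
    colors = {}
    for v in adjacency:
        used = {colors[u] for u in adjacency[v] if u in colors}
        c = 0
        while c in used:
            c += 1
        colors[v] = c
    return colors

# 4-cycle 1-2-3-4-1: greedy finds a proper coloring with colors {0, 1}.
print(greedy_coloring({1: [2, 4], 2: [1, 3], 3: [2, 4], 4: [3, 1]}))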



Complexity Class
In computational complexity theory, a complexity class is a set of computational problems "of related resource-based complexity". The two most commonly analyzed resources are time and memory. In general, a complexity class is defined in terms of a type of computational problem, a model of computation, and a bounded resource like time or memory. In particular, most complexity classes consist of decision problems that are solvable with a Turing machine, and are differentiated by their time or space (memory) requirements. For instance, the class P is the set of decision problems solvable by a deterministic Turing machine in polynomial time. There are, however, many complexity classes defined in terms of other types of problems (e.g. counting problems and function problems) and using other models of computation (e.g. probabil ...