NP-hardness
In computational complexity theory, NP-hardness (non-deterministic polynomial-time hardness) is the defining property of a class of problems that are informally "at least as hard as the hardest problems in NP". A simple example of an NP-hard problem is the subset sum problem. A more precise specification is: a problem ''H'' is NP-hard when every problem ''L'' in NP can be reduced in polynomial time to ''H''; that is, assuming a solution for ''H'' takes 1 unit of time, a solution for ''H'' can be used to solve ''L'' in polynomial time. As a consequence, finding a polynomial-time algorithm to solve any NP-hard problem would give polynomial-time algorithms for all the problems in NP. As it is suspected that P ≠ NP, it is unlikely that such an algorithm exists. It is suspected that there are no polynomial-time algorithms for NP-hard problems, but that has not been proven. Moreover, the class P, in which all problems can be solved in polynomial time, is contained in the class NP. ...
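
To make the contrast concrete, here is a minimal Python sketch (the function names and the tiny instance are illustrative assumptions, not taken from the excerpt): checking a proposed zero-sum subset takes polynomial time, while the obvious exhaustive search examines up to 2^n − 1 subsets.

```python
from itertools import combinations

def verify(numbers, indices):
    """Polynomial-time certificate check: do the chosen positions sum to zero?"""
    return len(indices) > 0 and sum(numbers[i] for i in indices) == 0

def brute_force(numbers):
    """Exponential-time search over all 2^n - 1 nonempty index subsets."""
    n = len(numbers)
    for r in range(1, n + 1):
        for indices in combinations(range(n), r):
            if verify(numbers, indices):
                return indices
    return None

print(brute_force([-7, 3, 4, 5]))  # (0, 1, 2): -7 + 3 + 4 == 0
```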



Travelling Salesman Problem
The travelling salesman problem (also called the travelling salesperson problem or TSP) asks the following question: "Given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city exactly once and returns to the origin city?" It is an NP-hard problem in combinatorial optimization, important in theoretical computer science and operations research. The travelling purchaser problem and the vehicle routing problem are both generalizations of TSP. In the theory of computational complexity, the decision version of the TSP (where, given a length ''L'', the task is to decide whether the graph has a tour of length at most ''L'') belongs to the class of NP-complete problems. Thus, it is possible that the worst-case running time for any algorithm for the TSP increases superpolynomially (but no more than exponentially) with the number of cities. The problem was first formulated in 1930 and is one of the most intensively studied ...
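
A minimal sketch of the decision version described above, assuming a hypothetical symmetric distance matrix; the exhaustive search over all orderings illustrates the superpolynomial blow-up rather than any practical method.

```python
from itertools import permutations

def tour_length(dist, tour):
    """Total length of a closed tour that returns to the starting city."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def has_tour_at_most(dist, L):
    """Decision version of TSP: is there a tour of length <= L?
    Exhaustive search over (n-1)! orderings -- superpolynomial in the number of cities."""
    n = len(dist)
    for perm in permutations(range(1, n)):        # fix city 0 as the start
        if tour_length(dist, (0,) + perm) <= L:
            return True
    return False

# Hypothetical 4-city symmetric distance matrix.
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
print(has_tour_at_most(dist, 21))  # True: tour 0-1-3-2-0 has length 2 + 4 + 3 + 9 = 18
```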




NP-intermediate
In computational complexity, problems that are in the complexity class NP but are neither in the class P nor NP-complete are called NP-intermediate, and the class of such problems is called NPI. Ladner's theorem, shown in 1975 by Richard E. Ladner, is a result asserting that, if P ≠ NP, then NPI is not empty; that is, NP contains problems that are neither in P nor NP-complete. Since it is also true that if NPI problems exist, then P ≠ NP, it follows that P = NP if and only if NPI is empty. Under the assumption that P ≠ NP, Ladner explicitly constructs a problem in NPI, although this problem is artificial and otherwise uninteresting. It is an open question whether any "natural" problem has the same property: Schaefer's dichotomy theorem provides conditions under which classes of constrained Boolean satisfiability problems cannot be in NPI. Some problems that are considered good candidates for being NP-intermediate are the graph isomorphism problem, factoring, and computing ...
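
To see why factoring belongs to NP at all (and is therefore a plausible NPI candidate rather than a problem known to lie outside P), a purported factor can be checked quickly. The sketch below assumes the usual decision formulation ("does ''n'' have a nontrivial divisor of size at most ''k''?") and uses illustrative names.

```python
def verify_factor_certificate(n, k, d):
    """Check that d witnesses a yes-answer to 'does n have a nontrivial
    divisor of size at most k?'  Trial division by a single candidate is
    polynomial in the number of digits of n."""
    return 1 < d <= k and n % d == 0

print(verify_factor_certificate(91, 10, 7))   # True: 7 divides 91 and 7 <= 10
print(verify_factor_certificate(91, 5, 13))   # False: 13 exceeds the bound k
```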


Approximation Algorithm
In computer science and operations research, approximation algorithms are efficient algorithms that find approximate solutions to optimization problems (in particular NP-hard problems) with provable guarantees on the distance of the returned solution to the optimal one. Approximation algorithms naturally arise in the field of theoretical computer science as a consequence of the widely believed P ≠ NP conjecture. Under this conjecture, a wide class of optimization problems cannot be solved exactly in polynomial time. The field of approximation algorithms, therefore, tries to understand how closely it is possible to approximate optimal solutions to such problems in polynomial time. In an overwhelming majority of the cases, the guarantee of such algorithms is a multiplicative one expressed as an approximation ratio or approximation factor, i.e., the optimal solution is always guaranteed to be within a (predetermined) multiplicative factor of the returned solution. However, there are ...
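
As a concrete instance of such a multiplicative guarantee, here is the classic maximal-matching heuristic for vertex cover, a standard textbook example that is not mentioned in the excerpt itself; the returned cover is provably at most twice the optimum.

```python
def vertex_cover_2approx(edges):
    """Maximal-matching heuristic: repeatedly pick an uncovered edge and add
    both endpoints.  The chosen edges form a matching, and any cover must
    contain at least one endpoint of each matched edge, so the result is at
    most twice the size of an optimal cover (approximation factor 2)."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

# Hypothetical small graph: a path on 4 vertices plus one extra edge.
edges = [(0, 1), (1, 2), (2, 3), (1, 3)]
print(vertex_cover_2approx(edges))   # {0, 1, 2, 3}; an optimal cover is {1, 2}, size 2
```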


Halting Problem
In computability theory, the halting problem is the problem of determining, from a description of an arbitrary computer program and an input, whether the program will finish running, or continue to run forever. Alan Turing proved in 1936 that a general algorithm to solve the halting problem for all possible program–input pairs cannot exist. For any program ''f'' that might determine whether programs halt, a "pathological" program ''g'', called with some input, can pass its own source and its input to ''f'' and then specifically do the opposite of what ''f'' predicts ''g'' will do. No ''f'' can exist that handles this case. A key part of the proof is a mathematical definition of a computer and program, which is known as a Turing machine; the halting problem is ''undecidable'' over Turing machines. It is one of the first cases of decision problems proven to be unsolvable. This proof is significant to practical computing efforts, defining a class of applications which no programming invention ...
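
The "pathological" program from the excerpt can be sketched directly. Here `halts` stands for the hypothetical oracle whose existence the argument rules out, so this is a description of the contradiction, not working decision logic.

```python
# Hypothetical oracle that the proof shows cannot exist: it would return True
# exactly when running `program_source` on `program_input` eventually halts.
def halts(program_source, program_input):
    raise NotImplementedError("no total, correct halting decider can exist")

def g(own_source, some_input):
    """The 'pathological' program of the diagonal argument: ask the supposed
    oracle about itself, then do the opposite of whatever it predicts."""
    if halts(own_source, some_input):
        while True:          # predicted to halt -> loop forever instead
            pass
    else:
        return               # predicted to loop forever -> halt immediately
```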



Configuration Management
Configuration management (CM) is a process for establishing and maintaining consistency of a product's performance, functional, and physical attributes with its requirements, design, and operational information throughout its life. The CM process is widely used by military engineering organizations to manage changes throughout the system lifecycle of complex systems, such as weapon systems, military vehicles, and information systems. Outside the military, the CM process is also used with IT service management as defined by ITIL, and with other domain models in the civil engineering and other industrial engineering segments such as roads, bridges, canals, dams, and buildings. CM applied over the life cycle of a system provides visibility and control of its performance, functional, and physical attributes. CM verifies that a system performs as intended, and is identified and documented in sufficient detail to support its projected life cycle. The CM process facilitates ...


Approximate Computing
Approximate computing is an emerging paradigm for energy-efficient and/or high-performance design. It includes a plethora of computation techniques that return a possibly inaccurate result rather than a guaranteed accurate result, and that can be used for applications where an approximate result is sufficient for their purpose. One example of such a situation is a search engine, where no exact answer may exist for a certain search query and hence many answers may be acceptable. Similarly, occasional dropping of some frames in a video application can go undetected due to perceptual limitations of humans. Approximate computing is based on the observation that in many scenarios, although performing exact computation requires a large amount of resources, allowing bounded approximation can provide disproportionate gains in performance and energy, while still achieving acceptable result accuracy. For example, in the ''k''-means clustering algorithm, allowing only a 5% loss in classification accuracy ...
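
One widely used technique in this area is loop perforation, which is not named in the excerpt: skip a fixed fraction of loop iterations and accept a bounded error in the result. A minimal sketch with synthetic data:

```python
def mean_exact(values):
    """Reference computation: process every element."""
    return sum(values) / len(values)

def mean_perforated(values, skip=3):
    """Loop perforation: process only every `skip`-th element, doing roughly
    1/skip of the work while accepting a small error in the estimate."""
    sample = values[::skip]
    return sum(sample) / len(sample)

data = [float(i % 100) for i in range(1_000_000)]
print(mean_exact(data), mean_perforated(data))   # nearly identical means, ~3x less work
```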




NP-equivalent
In computational complexity theory, the complexity class NP-equivalent is the set of function problems that are both NP-easy and NP-hard. NP-equivalent is the analogue of NP-complete for function problems. For example, the problem FIND-SUBSET-SUM is in NP-equivalent. Given a set of integers, FIND-SUBSET-SUM is the problem of finding some nonempty subset of the integers that adds up to zero (or returning the empty set if there is no such subset). This function problem is similar to the decision problem SUBSET-SUM. Given a set of integers, SUBSET-SUM is the problem of deciding whether there exists a nonempty subset summing to zero. SUBSET-SUM is NP-complete. To show that FIND-SUBSET-SUM is NP-equivalent, we must show that it is both NP-hard and NP-easy. Clearly it is NP-hard. If we had a black box that solved FIND-SUBSET-SUM in unit time, then it would be easy to solve SUBSET-SUM: simply ask the black box to find the subset that sums to zero, then check whether it returned a nonempty set. ...
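
The reduction sketched in the excerpt is short enough to write down. In this sketch, `find_subset_sum` stands in for the assumed black box; it is implemented here by brute force purely so the example runs.

```python
from itertools import combinations

def find_subset_sum(numbers):
    """Hypothetical black box for FIND-SUBSET-SUM: returns some nonempty
    zero-sum subset, or the empty tuple if none exists.
    (Brute-force stand-in, only so the sketch is executable.)"""
    for r in range(1, len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == 0:
                return subset
    return ()

def subset_sum_decision(numbers):
    """SUBSET-SUM reduces to FIND-SUBSET-SUM: one oracle call, then one check."""
    return len(find_subset_sum(numbers)) > 0

print(subset_sum_decision([-7, 3, 4, 5]))   # True
print(subset_sum_decision([1, 2, 3]))       # False
```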


NP-easy
In complexity theory, the complexity class NP-easy is the set of function problems that are solvable in polynomial time by a deterministic Turing machine with an oracle for some decision problem in NP. In other words, a problem X is NP-easy if and only if there exists some problem Y in NP such that X is polynomial-time Turing reducible to Y. This means that given an oracle for Y, there exists an algorithm that solves X in polynomial time (possibly by repeatedly using that oracle). NP-easy is another name for FP^NP (see the function problem article) or for FΔ2^P (see the polynomial hierarchy article). An example of an NP-easy problem is the problem of sorting a list of strings. The decision problem "is string A greater than string B?" is in NP. There are algorithms such as quicksort that can sort the list using only a polynomial number of calls to the comparison routine, plus a polynomial amount of additional work. Therefore, sorting is NP-easy. ...
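
A small sketch of the sorting argument, treating the string comparison as the oracle and counting how many times it is consulted; the names and word list are illustrative.

```python
import functools

calls = 0

def oracle_greater(a: str, b: str) -> bool:
    """Stand-in for the NP decision problem 'is string a greater than string b?'."""
    global calls
    calls += 1
    return a > b

def sort_with_oracle(strings):
    """Sorts using only polynomially many oracle queries plus polynomial extra work."""
    return sorted(strings, key=functools.cmp_to_key(
        lambda a, b: 1 if oracle_greater(a, b) else (-1 if oracle_greater(b, a) else 0)))

words = ["pear", "apple", "kiwi", "banana"]
print(sort_with_oracle(words), "oracle calls:", calls)
```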



Computational Complexity Theory
In theoretical computer science and mathematics, computational complexity theory focuses on classifying computational problems according to their resource usage, and relating these classes to each other. A computational problem is a task solved by a computer; it is solvable by mechanical application of mathematical steps, such as an algorithm. A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition by introducing mathematical models of computation to study these problems and quantifying their computational complexity, i.e., the amount of resources needed to solve them, such as time and storage. Other measures of complexity are also used, such as the amount of communication (used in communication complexity), the number of gates in a circuit (used in circuit complexity) and the number of processors (used in parallel computing). One of the roles of computational complexity theory ...


PSPACE-complete
In computational complexity theory, a decision problem is PSPACE-complete if it can be solved using an amount of memory that is polynomial in the input length (polynomial space) and if every other problem that can be solved in polynomial space can be transformed to it in polynomial time. The problems that are PSPACE-complete can be thought of as the hardest problems in PSPACE, the class of decision problems solvable in polynomial space, because a solution to any one such problem could easily be used to solve any other problem in PSPACE. Problems known to be PSPACE-complete include determining properties of regular expressions and context-sensitive grammars, determining the truth of quantified Boolean formulas, step-by-step changes between solutions of combinatorial optimization problems, and many puzzles and games. A problem is defined to be PSPACE-complete if it can be solved using a polynomial amount of memory (it belongs to PSPACE) and every problem in PSPACE can be transformed to it in polynomial time. ...


PSPACE
In computational complexity theory, PSPACE is the set of all decision problems that can be solved by a Turing machine using a polynomial amount of space. Formally, if we denote by SPACE(''t''(''n'')) the set of all problems that can be solved by Turing machines using ''O''(''t''(''n'')) space for some function ''t'' of the input size ''n'', then we can define PSPACE as (Arora & Barak 2009, p. 81)
:\mathsf{PSPACE} = \bigcup_{k \in \mathbb{N}} \mathsf{SPACE}(n^k).
PSPACE is a strict superset of the set of context-sensitive languages. It turns out that allowing the Turing machine to be nondeterministic does not add any extra power. Because of Savitch's theorem (Arora & Barak 2009, p. 85), NPSPACE is equivalent to PSPACE, essentially because a deterministic Turing machine can simulate a non-deterministic Turing machine without needing much more space (even though it may use much more time) (Arora & Barak 2009, p. 86). Also, the complements of all problems in PSPACE are also in PSPACE, meaning that ...
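
For reference, the quantitative statement behind this equivalence is Savitch's theorem (a standard result, stated here from memory rather than quoted from the excerpt): a nondeterministic machine using space ''s''(''n'') ≥ log ''n'' can be simulated deterministically with at most a quadratic blow-up in space,
:\mathsf{NSPACE}(s(n)) \subseteq \mathsf{SPACE}(s(n)^2), \qquad \text{hence} \qquad \mathsf{NPSPACE} = \bigcup_{k} \mathsf{NSPACE}(n^k) \subseteq \bigcup_{k} \mathsf{SPACE}(n^{2k}) = \mathsf{PSPACE}.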




True Quantified Boolean Formula
In computational complexity theory, the language TQBF is a formal language consisting of the true quantified Boolean formulas. A (fully) quantified Boolean formula is a formula in quantified propositional logic where every variable is quantified (or bound), using either existential or universal quantifiers, at the beginning of the sentence. Such a formula is equivalent to either true or false (since there are no free variables). If such a formula evaluates to true, then that formula is in the language TQBF. It is also known as QSAT (Quantified SAT). In computational complexity theory, the quantified Boolean formula problem (QBF) is a generalization of the Boolean satisfiability problem in which both existential quantifiers and universal quantifiers can be applied to each variable. Put another way, it asks whether a quantified sentential form over a set of Boolean variables is true or false. For example, the following is an instance of QBF:
:\forall x\ \exists y\ \exists ...
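
A compact sketch of how such formulas can be evaluated with recursion depth equal to the number of variables, which is the polynomial-space strategy behind TQBF's role in PSPACE. The data layout and the small instance are illustrative assumptions (the excerpt's own example is truncated above).

```python
def eval_qbf(prefix, matrix, assignment=None):
    """Evaluate a fully quantified Boolean formula.
    `prefix` -- list of ('forall' | 'exists', variable-name) pairs
    `matrix` -- function mapping a {variable: bool} assignment to a bool
    Recursion depth equals the number of variables, so the evaluator uses
    polynomial space even though it may take exponential time."""
    assignment = dict(assignment or {})
    if not prefix:
        return matrix(assignment)
    quant, var = prefix[0]
    branches = (eval_qbf(prefix[1:], matrix, {**assignment, var: val})
                for val in (False, True))
    return all(branches) if quant == 'forall' else any(branches)

# Illustrative instance: forall x . exists y . exists z . (x or z) and y
prefix = [('forall', 'x'), ('exists', 'y'), ('exists', 'z')]
matrix = lambda a: (a['x'] or a['z']) and a['y']
print(eval_qbf(prefix, matrix))   # True: pick y = True, and z = True when x = False
```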