In computational complexity theory, a decision problem is P-complete (complete for the complexity class P) if it is in P and every problem in P can be reduced to it by an appropriate reduction. The notion of P-complete decision problems is useful in the analysis of:
* which problems are difficult to parallelize effectively,
* which problems are difficult to solve in limited space,
specifically when notions of reducibility stronger than polynomial-time reducibility are considered.

The specific type of reduction used varies and may affect the exact set of problems. Generically, reductions stronger than polynomial-time reductions are used, since all languages in P (except the empty language and the language of all strings) are P-complete under polynomial-time reductions. If we use NC reductions, that is, reductions which can operate in polylogarithmic time on a parallel computer with a polynomial number of processors, then all P-complete problems lie outside NC and so cannot be effectively parallelized, under the unproven assumption that NC ≠ P. If we use the stronger log-space reduction, this remains true, but additionally we learn that all P-complete problems lie outside L under the weaker unproven assumption that L ≠ P. In this latter case the set of P-complete problems may be smaller.
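Stated formally (a sketch, fixing log-space many-one reductions as the notion of reduction; the NC case is analogous), a language A is P-complete if

A \in \mathsf{P} \quad\text{and}\quad \forall B \in \mathsf{P} :\ B \le_{\mathrm{m}}^{\log} A,

where B \le_{\mathrm{m}}^{\log} A means there is a function f, computable in logarithmic space, such that x \in B \iff f(x) \in A for every input x.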


Motivation

The class P, typically taken to consist of all the "tractable" problems for a sequential computer, contains the class NC, which consists of those problems which can be efficiently solved on a parallel computer. This is because parallel computers can be simulated on a sequential machine. It is not known whether NC = P. In other words, it is not known whether there are any tractable problems that are inherently sequential. Just as it is widely suspected that P does not equal NP, so it is widely suspected that NC does not equal P.

Similarly, the class L contains all problems that can be solved by a sequential computer in logarithmic space. Such machines run in polynomial time because they have only a polynomial number of configurations. It is suspected that L ≠ P; that is, that some problems that can be solved in polynomial time also require more than logarithmic space.

Just as NP-complete problems are used to analyze the P = NP question, the P-complete problems, viewed as the "probably not parallelizable" or "probably inherently sequential" problems, serve in a similar manner to study the NC = P question. Finding an efficient way to parallelize the solution to some P-complete problem would show that NC = P. P-complete problems can also be thought of as the "problems requiring superlogarithmic space": a log-space solution to a P-complete problem (using the definition based on log-space reductions) would imply L = P. The logic behind this is analogous to the logic that a polynomial-time solution to an NP-complete problem would prove P = NP: if we have an NC reduction from any problem in P to a problem A, and an NC solution for A, then NC = P. Similarly, if we have a log-space reduction from any problem in P to a problem A, and a log-space solution for A, then L = P.
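The step worth making explicit is that the reductions compose with the algorithm for A: for any B \in \mathsf{P},

B \le_{\mathrm{NC}} A \ \text{and}\ A \in \mathsf{NC} \ \implies\ B \in \mathsf{NC},

since an NC reduction followed by an NC algorithm is again an NC computation. Hence \mathsf{P} \subseteq \mathsf{NC}, which together with the known containment \mathsf{NC} \subseteq \mathsf{P} gives \mathsf{NC} = \mathsf{P}. The log-space case is identical, using the fact that the composition of two log-space computations is itself computable in logarithmic space.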


P-complete problems

The most basic P-complete problem under log-space many-one reductions is the following: given a Turing machine M, an input x for that machine, and a number ''T'' (written in unary), encoded together as \langle M, x, T \rangle, does M halt on x within the first ''T'' steps? To reduce any language L in P to this problem, map an input x to \langle M_L, x, p(|x|) \rangle, where M_L is a Turing machine deciding L in polynomial time and p is its polynomial-time bound; this map is computable in logarithmic space. The machine M_L halts on x within p(|x|) steps if and only if x is in L. Clearly, if we can parallelize a general simulation of a sequential computer (i.e., the Turing-machine simulation of a Turing machine), then we will be able to parallelize any program that runs on that computer. If this problem is in NC, then so is every other problem in P.
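The sequential algorithm for this problem is simply direct simulation, which is what places it in P once ''T'' is unary. A minimal sketch in Python; the dictionary encoding of M's transition table is a hypothetical convention chosen for illustration, not a fixed standard:

 # Decide the generic P-complete problem <M, x, T> by direct simulation.
 # delta encodes M's transition table (hypothetical convention):
 # (state, symbol) -> (new_state, new_symbol, move), with move in {-1, +1}.
 def halts_within(delta, start_state, x, T):
     tape = dict(enumerate(x))              # sparse tape; blank cells = ' '
     state, head = start_state, 0
     for _ in range(T):                     # simulate at most T steps
         key = (state, tape.get(head, ' '))
         if key not in delta:               # no transition: M has halted
             return True
         state, symbol, move = delta[key]
         tape[head] = symbol
         head += move
     return False                           # M is still running after T steps

 # Because T is written in unary, the input <M, x, T> has length at least T,
 # so the loop above runs in time polynomial in the input length.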
If the number of steps is written in binary instead, the problem is EXPTIME-complete.

This problem illustrates a common trick in the theory of P-completeness. We aren't really interested in whether a problem can be solved quickly on a parallel machine. We're just interested in whether a parallel machine solves it ''much more'' quickly than a sequential machine. Therefore, we have to reword the problem so that the sequential version is in P. That is why this problem required ''T'' to be written in unary. If a number ''T'' is written as a binary number (a string of ''n'' ones and zeros, where ''n'' = log ''T''), then the obvious sequential algorithm can take time 2^''n''. On the other hand, if ''T'' is written as a unary number (a string of ''n'' ones, where ''n'' = ''T''), then it only takes time ''n''. By writing ''T'' in unary rather than binary, we have reduced the obvious sequential algorithm from exponential time to linear time. That puts the sequential problem in P. Then it is in NC if and only if it is parallelizable.

Many other problems have been proved to be P-complete, and therefore are widely believed to be inherently sequential. These include the following problems, which are P-complete under at least log-space reductions, either as given or in a decision-problem form:
* Circuit Value Problem (CVP) – Given a circuit, the inputs to the circuit, and one gate in the circuit, calculate the output of that gate.
* Restricted Case of CVP – Like CVP, except each gate has two inputs and two outputs (F and Not F), every other layer is just AND gates, the rest are OR gates (or, equivalently, all gates are NAND gates, or all gates are NOR gates), and the inputs of a gate come from the immediately preceding layer.
* Linear programming – Maximize a linear function subject to linear inequality constraints.
* Lexicographically First Depth-First Search Ordering – Given a graph with fixed ordered adjacency lists and nodes ''u'' and ''v'', is vertex ''u'' visited before vertex ''v'' in a depth-first search induced by the order of the adjacency lists?
* Context-Free Grammar Membership – Given a context-free grammar and a string, can that string be generated by that grammar?
* Horn-satisfiability – Given a set of Horn clauses, is there a variable assignment which satisfies them? This is P's version of the Boolean satisfiability problem (see the sketch after this list).
* Game of Life – Given an initial configuration of Conway's Game of Life, a particular cell, and a time ''T'' (in unary), is that cell alive after ''T'' steps?
* LZW (1978 paradigm) Data Compression – Given strings ''s'' and ''t'', will compressing ''s'' with an LZ78 method add ''t'' to the dictionary? (Note that for LZ77 compression such as gzip, this is much easier, as the problem reduces to "Is ''t'' in ''s''?".)
* Type inference for partial types – Given an untyped term from the lambda calculus, determine whether this term has a partial type.
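Horn-satisfiability is in P because forced assignments can be found by unit propagation. The sketch below, in Python, uses an illustrative clause encoding (a pair of a body set and a head, with head None for purely negative clauses); this encoding is an assumption of the example, not a standard interface:

 # Horn-SAT by unit propagation. A Horn clause has at most one positive
 # literal; encode it (illustrative convention) as (body, head), meaning
 # "body implies head", where head is a variable or None (no positive literal).
 def horn_sat(clauses):
     true_vars = set()
     changed = True
     while changed:                         # repeatedly apply forced implications
         changed = False
         for body, head in clauses:
             if head is not None and head not in true_vars and body <= true_vars:
                 true_vars.add(head)        # head is forced true
                 changed = True
     # satisfiable iff no purely negative clause has its entire body forced true
     return all(not body <= true_vars for body, head in clauses if head is None)

 # Example: x, x -> y, and not(x and y) together are unsatisfiable.
 print(horn_sat([(set(), 'x'), ({'x'}, 'y'), ({'x', 'y'}, None)]))   # False

The chain of forced assignments discovered by the loop is exactly the kind of sequential dependency that P-completeness suggests cannot be collapsed into polylogarithmic parallel time.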
Most of the languages above are P-complete under even stronger notions of reduction, such as uniform AC^0 many-one reductions, DLOGTIME reductions, or polylogarithmic projections.

In order to prove that a given problem in P is P-complete, one typically tries to reduce a known P-complete problem to the given one. In 1999, Jin-Yi Cai and D. Sivakumar, building on work by Ogihara, showed that if there exists a sparse language that is P-complete, then L = P.

P-complete problems may be solvable with different time complexities. For instance, the Circuit Value Problem can be solved in linear time by a topological sort. Of course, because reductions to a P-complete problem may themselves have different time complexities, this fact does not imply that all the problems in P can also be solved in linear time.
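As a concrete illustration of that linear-time bound, here is a minimal Python sketch. The gate encoding, and the convention that gates are listed with every gate's inputs preceding it (i.e., already in topological order), are assumptions of this example:

 # Circuit Value Problem in linear time: evaluate gates in topological order.
 # Encoding (illustrative): a list of gates, each ('INPUT', bit), ('NOT', i),
 # ('AND', i, j) or ('OR', i, j), where i and j index earlier gates.
 def circuit_value(gates):
     value = []                             # value[k] = output of gate k
     for op, *args in gates:
         if op == 'INPUT':
             value.append(bool(args[0]))
         elif op == 'NOT':
             value.append(not value[args[0]])
         elif op == 'AND':
             value.append(value[args[0]] and value[args[1]])
         elif op == 'OR':
             value.append(value[args[0]] or value[args[1]])
     return value[-1]                       # output of the designated (last) gate

 # (x AND y) OR (NOT x) with x = 1, y = 0:
 print(circuit_value([('INPUT', 1), ('INPUT', 0),
                      ('AND', 0, 1), ('NOT', 0), ('OR', 2, 3)]))     # False

Each gate is visited once and does constant work, so evaluation is linear in the circuit size; the open question is not the sequential cost but whether deep circuits can be evaluated in polylogarithmic parallel time.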


Problems not known to be P-complete

Some NP problems are not known to be either NP-complete or in P. These problems (e.g. factoring, graph isomorphism, parity games) are suspected to be difficult. Similarly, there are problems in P that are not known to be either P-complete or in NC, but are thought to be difficult to parallelize. Examples include the decision-problem forms of finding the greatest common divisor of two numbers, determining what answer the extended Euclidean algorithm would return when given two numbers, and computing the maximum weight matching of a graph with large integer weights.
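For concreteness, the extended Euclidean algorithm mentioned above looks as follows in Python; the obstacle to parallelization is visible in the loop, where every iteration depends on the previous one:

 # Extended Euclidean algorithm: returns (g, s, t) with g = gcd(a, b)
 # and s*a + t*b = g.
 def extended_gcd(a, b):
     old_r, r = a, b
     old_s, s = 1, 0
     old_t, t = 0, 1
     while r != 0:
         q = old_r // r                     # each quotient depends on the
         old_r, r = r, old_r - q * r        # remainders of the previous step
         old_s, s = s, old_s - q * s
         old_t, t = t, old_t - q * t
     return old_r, old_s, old_t

 print(extended_gcd(240, 46))               # (2, -9, 47): -9*240 + 47*46 = 2

On ''n''-bit inputs the loop can run for a number of dependent iterations linear in ''n'', and no way is known to shortcut this chain to polylogarithmic parallel time; equally, none of these problems is known to be P-complete.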


References

* Greenlaw, Raymond, James Hoover, and Walter Ruzzo. 1995. ''Limits to Parallel Computation: P-Completeness Theory''. — Develops the theory, then catalogs 96 P-complete problems.
* Satoru Miyano, Shuji Shiraishi, and Takayoshi Shoudai. ''A List of P-Complete Problems''. Kyushu University, RIFIS-TR-CS-17. December 1990.