Log-space Reduction
In computational complexity theory, a log-space reduction is a reduction computable by a deterministic Turing machine using logarithmic space. Conceptually, this means it can keep a constant number of pointers into the input, along with a logarithmic number of fixed-size integers (Arora & Barak 2009, p. 88). Such a machine may not have space to write down its own output, so the only requirement is that any given bit of the output be computable in log-space. Formally, this reduction is executed via a log-space transducer. Such a machine has polynomially many configurations, so log-space reductions are also polynomial-time reductions. However, log-space reductions are probably weaker than polynomial-time reductions; while any non-empty, non-full language in P is polynomial-time reducible to any other non-empty, non-full language in P, a log-space reduction from an NL-complete language to a language in L, both of which would be languages in P, would imply the ...
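The "any given bit of the output" requirement can be made concrete. Below is a minimal sketch (not a formal transducer) of the idea in Python: the reduction is exposed as a function returning bit i of the output on demand, so the output is never stored in full. The map f(x) = x followed by reversed(x) is a hypothetical example, chosen only to keep the pointer arithmetic visible.

```python
# Minimal sketch: a log-space reduction viewed as "bit i of the output,
# computed on demand". The only working state is a handful of integers
# (indices and lengths) of O(log n) bits; the input is read-only and the
# output is never materialized.

def output_bit(x: str, i: int) -> str:
    """Return bit i of f(x) = x + reversed(x) using O(log n) working state."""
    n = len(x)                  # a counter of O(log n) bits
    if i < n:
        return x[i]             # a pointer into the read-only input
    return x[2 * n - 1 - i]     # a mirrored pointer for the reversed half

def output_length(x: str) -> int:
    return 2 * len(x)

# Callers ask for output bits one at a time; the full string below is built
# only to check the sketch, not by the "machine" itself.
x = "0110"
assert "".join(output_bit(x, i) for i in range(output_length(x))) == x + x[::-1]
```

Composition follows the same pattern: to obtain bit i of g(f(x)), the machine for g recomputes each needed bit of f(x) through output_bit, which is why log-space reductions compose without ever writing down the intermediate string.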


Computational Complexity Theory
In theoretical computer science and mathematics, computational complexity theory focuses on classifying computational problems according to their resource usage, and relating these classes to each other. A computational problem is a task solved by a computer; it is solvable by mechanical application of mathematical steps, such as an algorithm. A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition by introducing mathematical models of computation to study these problems and quantifying their computational complexity, i.e., the amount of resources needed to solve them, such as time and storage. Other measures of complexity are also used, such as the amount of communication (used in communication complexity), the number of gates in a circuit (used in circuit complexity) and the number of processors (used in parallel computing). One of the roles of computational ...


L (complexity)
In computational complexity theory, L (also known as LSPACE or DLOGSPACE) is the complexity class containing decision problems that can be solved by a deterministic Turing machine using a logarithmic amount of writable memory space. Formally, the Turing machine has two tapes, one of which encodes the input and can only be read, whereas the other tape has logarithmic size but can be read as well as written. Logarithmic space is sufficient to hold a constant number of pointers into the input and a logarithmic number of boolean flags, and many basic logspace algorithms use the memory in this way.

Complete problems and logical characterization
Every non-trivial problem in L is complete under log-space reductions, so weaker reductions are required to identify meaningful notions of L-completeness, the most common being first-order reductions. A 2004 result by Omer Reingold shows that USTCON, the problem of whether there exists a path ...
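As an illustration of the memory-usage pattern described above (a constant number of input pointers plus logarithmically sized counters), here is a small Python sketch of a classic logspace algorithm: deciding balanced parentheses with a single counter. The example is illustrative and not drawn from the article.

```python
# Minimal sketch of the logspace memory pattern: deciding balanced
# parentheses (a language in L) with one O(log n)-bit counter plus an
# implicit read-only pointer that scans the input left to right.

def balanced(s: str) -> bool:
    depth = 0                   # the single logarithmic-size counter
    for ch in s:                # the read-only input head moving right
        if ch == '(':
            depth += 1
        elif ch == ')':
            depth -= 1
            if depth < 0:       # a ')' with no matching '(' so far
                return False
    return depth == 0           # every '(' must have been closed

assert balanced("(()())") and not balanced("())(")
```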


Complete (complexity)
In computational complexity theory, a computational problem is complete for a complexity class if it is, in a technical sense, among the "hardest" (or "most expressive") problems in the complexity class. More formally, a problem ''p'' is called hard for a complexity class ''C'' under a given type of reduction if there exists a reduction (of the given type) from any problem in ''C'' to ''p''. If a problem is both hard for the class and a member of the class, it is complete for that class (for that type of reduction). A problem that is complete for a class ''C'' is said to be C-complete, and the class of all problems complete for ''C'' is denoted C-complete. The first complete class to be defined and the most well known is NP-complete, a class that contains many difficult-to-solve problems that arise in practice. Similarly, a problem hard for a class ''C'' is called C-hard, e.g. NP-hard. Normally, it is assumed that the reduction in question does not have higher computational co ...


NC (complexity)
In computational complexity theory, the class NC (for "Nick's Class") is the set of decision problems decidable in polylogarithmic time on a parallel computer with a polynomial number of processors. In other words, a problem with input size ''n'' is in NC if there exist constants ''c'' and ''k'' such that it can be solved in time O(log^c n) using O(n^k) parallel processors. Stephen Cook coined the name "Nick's class" after Nick Pippenger, who had done extensive research on circuits with polylogarithmic depth and polynomial size (Arora & Barak 2009, p. 120). Just as the class P can be thought of as the tractable problems (Cobham's thesis), so NC can be thought of as the problems that can be efficiently solved on a parallel computer (Arora & Barak 2009, p. 118). NC is a subset of P because polylogarithmic parallel computations can be simulated by polynomial-time sequential ones. It is unknown whether NC = P, but most researchers suspect this to be false, meaning that there are probably some tractable pr ...
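To make the "polylogarithmic rounds, polynomially many processors" shape concrete, here is a sketch of a textbook NC-style algorithm, prefix sums in O(log n) rounds. Python offers no parallelism here; the sequential loop merely simulates rounds whose updates are all independent and could run simultaneously on a PRAM. The example is illustrative, not from the article.

```python
# Sketch of an NC-style computation: prefix sums in O(log n) rounds. All
# updates within a round are independent, so a PRAM with O(n) processors
# could apply them in one parallel step; the loop only simulates rounds.

def prefix_sums(a):
    n = len(a)
    out = list(a)
    d = 1
    while d < n:                          # O(log n) rounds
        nxt = out[:]                      # updates read the old round only
        for i in range(d, n):             # independent: one processor each
            nxt[i] = out[i] + out[i - d]
        out = nxt
        d *= 2
    return out

assert prefix_sums([1, 2, 3, 4]) == [1, 3, 6, 10]
```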


SL (complexity)
In computational complexity theory, SL (Symmetric Logspace or Sym-L) is the complexity class of problems log-space reducible to USTCON (''undirected s-t connectivity''), which is the problem of determining whether there exists a path between two vertices in an undirected graph, or equivalently, whether two vertices are in the same connected component. This problem is also called the undirected reachability problem. It does not matter whether many-one reducibility or Turing reducibility is used. Although SL was originally described in terms of symmetric Turing machines, that equivalent formulation is very complex, and the reducibility definition is what is used in practice. USTCON is a special case of STCON (''directed reachability''), the problem of determining whether a directed path between two vertices in a directed graph exists, which is complete for NL. Because USTCON is SL-complete, most advances that impact USTCON have also impacted SL. Thus they ...
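Below is a sketch of the USTCON decision problem itself, using breadth-first search. Note the hedge: BFS uses linear working space, whereas Reingold's algorithm solves USTCON in O(log n) space (establishing SL = L); that construction is far more involved, so this code only illustrates what is being decided, not the log-space method.

```python
# Sketch of the USTCON decision problem via breadth-first search. Caveat:
# BFS keeps a 'seen' set of linear size, so this illustrates *what* is
# decided, not the O(log n)-space method of Reingold's algorithm.

from collections import deque

def ustcon(adj: dict, s, t) -> bool:
    """adj maps each vertex to an iterable of its undirected neighbours."""
    seen, frontier = {s}, deque([s])
    while frontier:
        v = frontier.popleft()
        if v == t:
            return True
        for w in adj.get(v, ()):
            if w not in seen:
                seen.add(w)
                frontier.append(w)
    return False

g = {1: {2}, 2: {1, 3}, 3: {2}, 4: set()}
assert ustcon(g, 1, 3) and not ustcon(g, 1, 4)
```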


Turing Reduction
In computability theory, a Turing reduction from a decision problem ''A'' to a decision problem ''B'' is an oracle machine which decides problem ''A'' given an oracle for ''B'' (Rogers 1967, Soare 1987). It can be understood as an algorithm that could be used to solve ''A'' if it had available a subroutine for solving ''B''. The concept can be analogously applied to function problems. If a Turing reduction from ''A'' to ''B'' exists, then every algorithm for ''B'' can be used to produce an algorithm for ''A'', by inserting the algorithm for ''B'' at each place where the oracle machine computing ''A'' queries the oracle for ''B''. However, because the oracle machine may query the oracle a large number of times, the resulting algorithm may require more time asymptotically than either the algorithm for ''B'' or the oracle machine computing ''A''. A Turing reduction in which the oracle machine runs in polynomial time is known as a Cook reduction. The first formal definition of relative computability, then called relative reducibility, ...
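A minimal Python sketch of the oracle-machine view: the algorithm for A receives a black-box decider for B and may call it any number of times, post-processing the answers. Both examples and all names are hypothetical stand-ins.

```python
# Sketch of the oracle-machine view: an algorithm for A that calls a
# black-box decider for B and post-processes the answers.

from typing import Callable, Optional

def decide_complement(x, oracle_for_B: Callable[..., bool]) -> bool:
    # A single query with the answer inverted: a Turing reduction from the
    # complement of B to B. A many-one reduction cannot do this in general,
    # because it must preserve the yes/no answer.
    return not oracle_for_B(x)

def least_member(lo: int, hi: int,
                 oracle_for_B: Callable[[int], bool]) -> Optional[int]:
    # Adaptive, repeated queries are allowed; if the loop and the oracle
    # both run in polynomial time, this is a Cook reduction.
    for n in range(lo, hi + 1):
        if oracle_for_B(n):
            return n
    return None

is_even = lambda n: n % 2 == 0            # stand-in oracle for B
assert decide_complement(3, is_even)      # 3 lies in the complement of B
assert least_member(5, 9, is_even) == 6
```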


Many-one Reduction
In computability theory and computational complexity theory, a many-one reduction (also called a mapping reduction) is a reduction which converts instances of one decision problem L_1 into instances of a second decision problem L_2, such that the instance produced is in the language L_2 if and only if the initial instance was in the language L_1. Thus if we can decide whether L_2 instances are in the language L_2, we can decide whether L_1 instances are in the language L_1 by applying the reduction and solving L_2. Reductions can therefore be used to measure the relative computational difficulty of two problems. It is said that L_1 reduces to L_2 if, in layman's terms, L_2 is at least as hard to solve as L_1. That is to say, any algorithm that solves L_2 can also be used as part of an (otherwise relatively simple) program that solves L_1. Many-one reductions are a special case and stronger form of Turing reductions. With many-one red ...
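A many-one reduction can be sketched directly: transform the instance once, ask the decider for the target problem a single question, and return its answer unchanged. The classic example below reduces INDEPENDENT-SET to CLIQUE by complementing the graph (a set is independent in G exactly when it is a clique in G's complement); `clique_decider` is a hypothetical subroutine for the target problem.

```python
# Sketch of a many-one reduction: INDEPENDENT-SET reduces to CLIQUE by
# complementing the graph. One transformation, one query, and the answer
# is returned unchanged.

from itertools import combinations

def complement_instance(vertices, edges, k):
    """Map an INDEPENDENT-SET instance (G, k) to a CLIQUE instance."""
    all_pairs = {frozenset(p) for p in combinations(vertices, 2)}
    edge_set = {frozenset(e) for e in edges}
    return vertices, all_pairs - edge_set, k

def has_independent_set(vertices, edges, k, clique_decider) -> bool:
    return clique_decider(*complement_instance(vertices, edges, k))

def brute_force_clique(vertices, edges, k) -> bool:
    # Toy decider for the target problem; `edges` is a set of frozensets.
    return any(
        all(frozenset(p) in edges for p in combinations(group, 2))
        for group in combinations(vertices, k)
    )

# In the path 1-2-3, {1, 3} is independent, so the reduced query accepts.
assert has_independent_set([1, 2, 3], [(1, 2), (2, 3)], 2, brute_force_clique)
```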


NP-complete
In computational complexity theory, a problem is NP-complete when:
1. it is a problem for which the correctness of each solution can be verified quickly (namely, in polynomial time) and a brute-force search algorithm can find a solution by trying all possible solutions;
2. the problem can be used to simulate every other problem for which we can verify quickly that a solution is correct.
In this sense, NP-complete problems are the hardest of the problems to which solutions can be verified quickly. If we could find solutions of some NP-complete problem quickly, we could quickly find the solutions of every other problem to which a given solution can be easily verified. The name "NP-complete" is short for "nondeterministic polynomial-time complete". In this name, "nondeterministic" refers to nondeterministic Turing machines, a way of mathematically formalizing the idea of a brute-force search algorithm. Polynomial time refers to an amount of time that is considered "quick" for a de ...
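The "verified quickly" half of the definition can be shown with a small polynomial-time verifier for SAT, the archetypal NP-complete problem. The encoding below (clauses as lists of signed variable indices) is an illustrative convention assumed for the sketch, not a standard one.

```python
# Sketch of polynomial-time verification: a linear-time verifier for SAT.
# Assumed encoding: a CNF formula is a list of clauses, each clause a list
# of signed variable indices, +i for x_i and -i for NOT x_i; the
# certificate is a truth assignment.

def verify_sat(clauses, assignment: dict) -> bool:
    return all(                                   # every clause satisfied
        any((lit > 0) == assignment[abs(lit)]     # by at least one literal
            for lit in clause)
        for clause in clauses
    )

phi = [[1, -2], [2, 3]]                # (x1 OR NOT x2) AND (x2 OR x3)
assert verify_sat(phi, {1: True, 2: False, 3: True})
assert not verify_sat(phi, {1: False, 2: True, 3: False})
```

Finding a satisfying assignment, by contrast, may require trying all possibilities; only the check is fast.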


NL (complexity)
In computational complexity theory, NL (Nondeterministic Logarithmic-space) is the complexity class containing decision problems that can be solved by a nondeterministic Turing machine using a logarithmic amount of memory space. NL is a generalization of L, the class for logspace problems on a deterministic Turing machine. Since any deterministic Turing machine is also a nondeterministic Turing machine, we have that L is contained in NL. NL can be formally defined in terms of the computational resource nondeterministic space (or NSPACE) as NL = NSPACE(log ''n''). Important results in complexity theory allow us to relate this complexity class with other classes, telling us about the relative power of the resources involved. Results in the field of algorithms, on the other hand, tell us which problems can be solved with this resource. Like much of complexity theory, many important questions about NL are still open (see Unsolved problems in computer science). Occasionally NL ...
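Here is a sketch of why STCON (directed s-t reachability), the canonical NL-complete problem, lies in NL: a nondeterministic machine guesses a path one vertex at a time, storing only the current vertex and a step counter, which is O(log n) bits. Python cannot branch nondeterministically, so the simulation below explores every guess; only the per-branch state reflects the NL space bound. The code is illustrative.

```python
# Sketch of why directed s-t reachability (STCON) is in NL: guess the path
# one vertex at a time, remembering only the current vertex and a step
# counter, i.e. O(log n) bits. `any` explores every guess sequentially;
# the NL space bound applies to each branch in isolation.

def stcon(adj: dict, s, t, n: int, steps: int = 0) -> bool:
    if s == t:
        return True
    if steps >= n:    # a simple path never needs more than n steps
        return False
    # The nondeterministic "guess" of the next vertex on the path:
    return any(stcon(adj, v, t, n, steps + 1) for v in adj.get(s, ()))

g = {1: [2], 2: [3], 3: []}
assert stcon(g, 1, 3, n=3) and not stcon(g, 3, 1, n=3)
```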


Reduction (complexity)
In computability theory and computational complexity theory, a reduction is an algorithm for transforming one problem into another problem. A sufficiently efficient reduction from one problem to another may be used to show that the second problem is at least as difficult as the first. Intuitively, problem ''A'' is reducible to problem ''B'' if an algorithm for solving problem ''B'' efficiently (if it existed) could also be used as a subroutine to solve problem ''A'' efficiently. When this is true, solving ''A'' cannot be harder than solving ''B''. "Harder" means having a higher estimate of the required computational resources in a given context (e.g., higher time complexity, greater memory requirement, the need for expensive extra hardware such as processor cores for a parallel solution compared to a single-threaded one, etc.). The existence of a reduction from ''A'' to ''B'' can be written in the shorthand notation ''A'' ≤_m ''B'', usually with a subscript on the ≤ to indicate the t ...
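The subroutine intuition can be made concrete with a classic worked example: multiplication reduces to squaring via the identity a·b = ((a + b)² − a² − b²)/2, so any efficient squaring routine yields an efficient multiplication routine. A minimal sketch, with `square` standing in for the hypothetical subroutine for problem B:

```python
# Worked example of the subroutine intuition: multiplication reduces to
# squaring via a*b = ((a + b)**2 - a**2 - b**2) / 2. `square` stands in
# for the hypothetical subroutine solving problem B.

def multiply_via_squaring(a: int, b: int, square=lambda x: x * x) -> int:
    return (square(a + b) - square(a) - square(b)) // 2

assert multiply_via_squaring(6, 7) == 42
```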


P (complexity)
In computational complexity theory, P, also known as PTIME or DTIME(''n''^O(1)), is a fundamental complexity class. It contains all decision problems that can be solved by a deterministic Turing machine using a polynomial amount of computation time, or polynomial time. Cobham's thesis holds that P is the class of computational problems that are "efficiently solvable" or "tractable". This is inexact: in practice, some problems not known to be in P have practical solutions, and some that are in P do not, but this is a useful rule of thumb.

Definition
A language ''L'' is in P if and only if there exists a deterministic Turing machine ''M'', such that
* ''M'' runs for polynomial time on all inputs
* For all ''x'' in ''L'', ''M'' outputs 1
* For all ''x'' not in ''L'', ''M'' outputs 0
P can also be viewed as a uniform family of boolean circuits. A language ''L'' is in P if and only if there exists a polynomial-time uniform family of boolean circuits \{C_n\}, such that
* For all n \in \m ...
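Matching the definition above, here is a minimal sketch of a decider M for one simple language in P: binary palindromes, decidable in linear (hence polynomial) time. The 1/0 output convention follows the definition; the example language is illustrative.

```python
# Sketch matching the definition: a polynomial-time decider M for a simple
# member of P, the language of binary palindromes, using the 1/0 output
# convention from the definition above.

def M(x: str) -> int:
    """Decider for L = { w over {0,1} : w reads the same backwards }."""
    return 1 if x == x[::-1] else 0

assert M("0110") == 1 and M("011") == 0
```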


Polynomial-time Reduction
In computational complexity theory, a polynomial-time reduction is a method for solving one problem using another. One shows that if a hypothetical subroutine solving the second problem exists, then the first problem can be solved by transforming or reducing it to inputs for the second problem and calling the subroutine one or more times. If both the time required to transform the first problem into the second and the number of times the subroutine is called are polynomial, then the first problem is polynomial-time reducible to the second. A polynomial-time reduction proves that the first problem is no more difficult than the second one, because whenever an efficient algorithm exists for the second problem, one exists for the first problem as well. By contraposition, if no efficient algorithm exists for the first problem, none exists for the second either. Polynomial-time reductions are frequently used in complexity theory for defining both complexity classes and complete problems ...
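A small sketch of the shape described here: VERTEX-COVER is polynomial-time reducible to INDEPENDENT-SET, since C covers every edge of G exactly when V \ C is an independent set, so (G, k) maps to (G, |V| − k). The transformation is trivially polynomial and calls the subroutine once; `brute_force_independent_set` is only a toy stand-in for the hypothetical subroutine.

```python
# Sketch of a polynomial-time reduction: VERTEX-COVER reduces to
# INDEPENDENT-SET via (G, k) -> (G, |V| - k). The transformation is
# trivially polynomial and the subroutine is called once.

from itertools import combinations

def has_vertex_cover(vertices, edges, k, independent_set_decider) -> bool:
    return independent_set_decider(vertices, edges, len(vertices) - k)

def brute_force_independent_set(vertices, edges, k) -> bool:
    # Toy stand-in decider for the second problem.
    edge_set = {frozenset(e) for e in edges}
    return any(
        all(frozenset(p) not in edge_set for p in combinations(group, 2))
        for group in combinations(vertices, k)
    )

# In the path 1-2-3, vertex 2 alone covers both edges.
assert has_vertex_cover([1, 2, 3], [(1, 2), (2, 3)], 1,
                        brute_force_independent_set)
```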