PL (complexity)
PL, or probabilistic L, is the class of languages recognizable by a polynomial-time, logarithmic-space randomized machine that accepts with probability > 1/2 (this is called unbounded error). Equivalently, as shown below, PL is the class of languages recognized by an unbounded-time, unbounded-error, logspace randomized machine. An example of a PL-complete problem (under logspace reductions) is deciding whether the determinant of a matrix (with integral coefficients) is positive. Given a matrix M and a number n, testing whether |det M| > n is also PL-complete. By contrast, testing whether the matrix permanent is positive is PP-complete. PL^PL = PL, in the sense that for every f in PL, PL is unchanged if it is extended to allow x ↦ f(x, A) as a subroutine, where A is the input string. PL contains NL and BPL and is contained in NC^2.

Approximating determinant in PL
The determinant of an integral matrix can be reduced to finding the difference between the number of accepting and rejecting paths on a polynomially sized directed ...
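
To make the PL-complete decision problem mentioned above concrete, here is a small sketch (not from the article, and a sequential algorithm rather than a logspace randomized one) that decides whether an integer matrix has positive determinant. It uses exact fraction-free (Bareiss) elimination so no rounding can flip the sign; the PL-completeness statement concerns the problem itself, not this particular algorithm.

def det_sign(matrix):
    # Return the sign (-1, 0, +1) of the determinant of an integer matrix,
    # using fraction-free Bareiss elimination: all arithmetic stays in the integers.
    a = [row[:] for row in matrix]
    n = len(a)
    sign = 1
    prev = 1                                   # pivot from the previous step (Bareiss divisor)
    for k in range(n - 1):
        if a[k][k] == 0:                       # find a nonzero pivot in column k
            for r in range(k + 1, n):
                if a[r][k] != 0:
                    a[k], a[r] = a[r], a[k]    # a row swap flips the sign
                    sign = -sign
                    break
            else:
                return 0                       # the whole column is zero, so det = 0
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                # Bareiss update; the integer division below is exact by Sylvester's identity
                a[i][j] = (a[i][j] * a[k][k] - a[i][k] * a[k][j]) // prev
            a[i][k] = 0
        prev = a[k][k]
    det = sign * a[n - 1][n - 1]               # last entry holds the determinant of the permuted matrix
    return (det > 0) - (det < 0)

print(det_sign([[2, 3], [1, 4]]))              # determinant 5, prints 1
print(det_sign([[1, 2], [2, 4]]))              # determinant 0, prints 0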


L (complexity)
In computational complexity theory, L (also known as LSPACE or DLOGSPACE) is the complexity class containing decision problems that can be solved by a deterministic Turing machine using a logarithmic amount of writable memory space. Formally, the Turing machine has two tapes, one of which encodes the input and can only be read, whereas the other tape has logarithmic size but can be read as well as written. Logarithmic space is sufficient to hold a constant number of pointers into the input and a logarithmic number of boolean flags, and many basic logspace algorithms use the memory in this way (see the two-pointer sketch below).

Complete problems and logical characterization
Every non-trivial problem in L is complete under log-space reductions, so weaker reductions are required to identify meaningful notions of L-completeness, the most common being first-order reductions. A 2004 result by Omer Reingold shows that USTCON, the problem of whether there exists a path ...
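
As a minimal illustration of the "constant number of pointers" pattern (a sketch, not taken from the article), the following decider treats its input as read-only and keeps only two indices, i.e. O(log n) bits of working memory:

def is_palindrome(read_only_input: str) -> bool:
    # Two pointers into the read-only input; each index needs only log n bits.
    left, right = 0, len(read_only_input) - 1
    while left < right:
        if read_only_input[left] != read_only_input[right]:
            return False
        left += 1
        right -= 1
    return True

print(is_palindrome("racecar"))   # True
print(is_palindrome("logspace"))  # False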


Permanent (mathematics)
In linear algebra, the permanent of a square matrix is a function of the matrix similar to the determinant. The permanent, as well as the determinant, is a polynomial in the entries of the matrix. Both are special cases of a more general function of a matrix called the immanant.

Definition
The permanent of an n × n matrix A = (a_{i,j}) is defined as
\operatorname{perm}(A) = \sum_{\sigma \in S_n} \prod_{i=1}^{n} a_{i,\sigma(i)}.
The sum here extends over all elements σ of the symmetric group S_n, i.e. over all permutations of the numbers 1, 2, ..., n. For example,
\operatorname{perm}\begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad + bc,
and
\operatorname{perm}\begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix} = aei + bfg + cdh + ceg + bdi + afh.
The definition of the permanent of A differs from that of the determinant of A in that the signatures of the permutations are not taken into account. The permanent of a matrix A is denoted per A, perm A, or Per A, sometimes with parentheses around the argument. Minc uses Per(A) for the permanent of rectangular mat ...
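
A brief sketch (not from the article) of the defining sum: every permutation contributes an unsigned product of entries, which for a 2 × 2 matrix recovers ad + bc.

from itertools import permutations

def permanent(a):
    # Sum of a[0][p(0)] * ... * a[n-1][p(n-1)] over all permutations p, with no signs.
    n = len(a)
    total = 0
    for p in permutations(range(n)):
        term = 1
        for i in range(n):
            term *= a[i][p[i]]
        total += term
    return total

a, b, c, d = 1, 2, 3, 4
print(permanent([[a, b], [c, d]]), a * d + b * c)   # both print 10 (= ad + bc)
# contrast with the determinant ad - bc = -2, where the odd permutation picks up a minus sign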


PP (complexity)
In complexity theory, PP is the class of decision problems solvable by a probabilistic Turing machine in polynomial time, with an error probability of less than 1/2 for all instances. The abbreviation PP refers to probabilistic polynomial time. The complexity class was defined by Gill in 1977. If a decision problem is in PP, then there is an algorithm for it that is allowed to flip coins and make random decisions. It is guaranteed to run in polynomial time. If the answer is YES, the algorithm will answer YES with probability more than 1/2. If the answer is NO, the algorithm will answer YES with probability less than 1/2. In more practical terms, it is the class of problems that can be solved to any fixed degree of accuracy by running a randomized, polynomial-time algorithm a sufficient (but bounded) number of times. Turing machines that are polynomially bounded and probabilistic are characterized as PPT, which stands for probabilistic polynomial-time machines. This characterization ...
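
As a hedged illustration (MAJ-SAT is not mentioned in the excerpt above, but it is the canonical PP-complete problem), the textbook decider samples one uniformly random assignment and accepts iff it satisfies the formula, so the acceptance probability exceeds 1/2 exactly on YES instances; boundary instances with exactly half the assignments satisfying need the usual small bias adjustment, omitted here.

import random
from itertools import product

def satisfies(clauses, assignment):
    # CNF clause = list of ints; +i means x_i, -i means NOT x_i (1-indexed variables)
    return all(any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)
               for clause in clauses)

def one_pp_run(clauses, n_vars):
    # a single polynomial-time, coin-flipping run of the decider
    assignment = [random.random() < 0.5 for _ in range(n_vars)]
    return satisfies(clauses, assignment)

# Exact acceptance probability for the tiny formula (x1 OR x2):
clauses, n_vars = [[1, 2]], 2
hits = sum(satisfies(clauses, list(a)) for a in product([False, True], repeat=n_vars))
print(f"Pr[accept] = {hits}/{2 ** n_vars}")   # 3/4 > 1/2, so this is a YES instance of MAJ-SAT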


NL (complexity)
In computational complexity theory, NL (Nondeterministic Logarithmic-space) is the complexity class containing decision problems that can be solved by a nondeterministic Turing machine using a logarithmic amount of memory space. NL is a generalization of L, the class for logspace problems on a deterministic Turing machine. Since any deterministic Turing machine is also a nondeterministic Turing machine, we have that L is contained in NL. NL can be formally defined in terms of the computational resource nondeterministic space (or NSPACE) as NL = NSPACE(log n). Important results in complexity theory allow us to relate this complexity class with other classes, telling us about the relative power of the resources involved. Results in the field of algorithms, on the other hand, tell us which problems can be solved with this resource. Like much of complexity theory, many important questions about NL are still open (see Unsolved problems in computer science). Occasionally NL ...




BPL (complexity)
In computational complexity theory, BPL (Bounded-error Probabilistic Logarithmic-space), sometimes called BPLP (Bounded-error Probabilistic Logarithmic-space Polynomial-time), is the complexity class of problems solvable in logarithmic space and polynomial time with probabilistic Turing machines with two-sided error. It is named in analogy with BPP, which is similar but has no logarithmic space restriction.

Error model
The probabilistic Turing machines in the definition of BPL may only accept or reject incorrectly less than 1/3 of the time; this is called two-sided error. The constant 1/3 is arbitrary; any x with 0 ≤ x < 1/2 would suffice. This error can be made 2^{-p(x)} times smaller for any polynomial p(x) without using more than polynomial time or logarithmic space by running the algorithm repeatedly.
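
The repetition argument can be sketched as follows (an illustrative simulation, not from the article; noisy_decider stands in for an arbitrary two-sided-error machine). Majority voting over independent runs only needs a vote counter and a loop index, which keeps the extra bookkeeping within logarithmic space, and a Chernoff-style bound drives the majority's error down exponentially in the number of repetitions.

import random

def noisy_decider(true_answer: bool, error: float = 1/3) -> bool:
    # one run of a two-sided-error machine: wrong with probability `error`
    return true_answer if random.random() > error else (not true_answer)

def amplified(true_answer: bool, repetitions: int = 101) -> bool:
    # run the decider independently an odd number of times and take the majority vote
    votes = sum(noisy_decider(true_answer) for _ in range(repetitions))
    return votes > repetitions // 2

trials = 10_000
wrong = sum(not amplified(True) for _ in range(trials))
print(f"majority-vote error rate ~ {wrong / trials}")   # far below the per-run error of 1/3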


Related classes

Since two-sided error is more general than one-sided error, the one-sided-error class RL and its complement co-RL are contained in BPL.


NC (complexity)
In computational complexity theory, the class NC (for "Nick's Class") is the set of decision problems decidable in polylogarithmic time on a parallel computer with a polynomial number of processors. In other words, a problem with input size n is in NC if there exist constants c and k such that it can be solved in time O((log n)^c) using O(n^k) parallel processors. Stephen Cook coined the name "Nick's class" after Nick Pippenger, who had done extensive research on circuits with polylogarithmic depth and polynomial size (Arora & Barak (2009), p. 120). Just as the class P can be thought of as the tractable problems (Cobham's thesis), so NC can be thought of as the problems that can be efficiently solved on a parallel computer (Arora & Barak (2009), p. 118). NC is a subset of P because polylogarithmic parallel computations can be simulated by polynomial-time sequential ones. It is unknown whether NC = P, but most researchers suspect this to be false, meaning that there are probably some tractable pr ...
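
A toy sketch (not from the article) of the polylogarithmic-time idea: summing n numbers in O(log n) rounds of independent pairwise combinations. Each round is the work a batch of parallel processors would do simultaneously, and the round count plays the role of parallel time.

def parallel_sum(values):
    layer = list(values)
    rounds = 0
    while len(layer) > 1:
        # one parallel round: every pair is combined independently of the others
        layer = [layer[i] + layer[i + 1] if i + 1 < len(layer) else layer[i]
                 for i in range(0, len(layer), 2)]
        rounds += 1
    return layer[0], rounds

total, depth = parallel_sum(range(16))
print(total, depth)    # 120, 4  (n = 16 values summed in log2(16) = 4 rounds)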


Markov Chain
A Markov chain or Markov process is a stochastic model describing a sequence of possible events in which the probability of each event depends only on the state attained in the previous event. Informally, this may be thought of as, "What happens next depends only on the state of affairs now." A countably infinite sequence, in which the chain moves state at discrete time steps, gives a discrete-time Markov chain (DTMC). A continuous-time process is called a continuous-time Markov chain (CTMC). It is named after the Russian mathematician Andrey Markov. Markov chains have many applications as statistical models of real-world processes, such as studying cruise control systems in motor vehicles, queues or lines of customers arriving at an airport, currency exchange rates and animal population dynamics. Markov processes are the basis for general stochastic simulation methods known as Markov chain Monte Carlo, which are used for simulating sampling from complex probability dist ...
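
A small sketch (not from the article) of a discrete-time Markov chain, using a hypothetical two-state "weather" transition matrix: the next state is sampled from a distribution that depends only on the current state, never on the earlier history.

import random

# hypothetical 2-state chain: state 0 = "sunny", state 1 = "rainy"
transition = [
    [0.9, 0.1],   # from sunny: stay sunny with prob. 0.9, turn rainy with prob. 0.1
    [0.5, 0.5],   # from rainy: clear up with prob. 0.5, stay rainy with prob. 0.5
]

def step(state):
    # draw the next state from the row of the transition matrix for the current state
    return random.choices([0, 1], weights=transition[state])[0]

state, rainy_days = 0, 0
for _ in range(100_000):
    state = step(state)
    rainy_days += state
# the long-run fraction of rainy days approaches the stationary probability, 1/6 for this matrix
print(rainy_days / 100_000)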


Determinant
In mathematics, the determinant is a scalar value that is a function of the entries of a square matrix. It characterizes some properties of the matrix and the linear map represented by the matrix. In particular, the determinant is nonzero if and only if the matrix is invertible and the linear map represented by the matrix is an isomorphism. The determinant of a product of matrices is the product of their determinants (the preceding property is a corollary of this one). The determinant of a matrix A is denoted det(A), det A, or |A|. The determinant of a 2 × 2 matrix is
\begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad - bc,
and the determinant of a 3 × 3 matrix is
\begin{vmatrix} a & b & c \\ d & e & f \\ g & h & i \end{vmatrix} = aei + bfg + cdh - ceg - bdi - afh.
The determinant of an n × n matrix can be defined in several equivalent ways. The Leibniz formula expresses the determinant as a sum of signed products of matrix entries such that each summand is the product of n different entries, and the number of these summands is n!, the factorial of (t ...
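
To ground the Leibniz formula and the product rule quoted above, here is a short sketch (not from the article) that expands the determinant as the sum of n! signed permutation products and checks det(AB) = det(A) * det(B) on small integer matrices.

from itertools import permutations

def leibniz_det(a):
    # Leibniz formula: sum over all permutations of signed products of entries.
    n = len(a)
    total = 0
    for perm in permutations(range(n)):
        # signature = (-1) raised to the number of inversions of the permutation
        inversions = sum(perm[i] > perm[j] for i in range(n) for j in range(i + 1, n))
        term = (-1) ** inversions
        for row, col in enumerate(perm):
            term *= a[row][col]
        total += term
    return total

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

a = [[2, 0, 1], [1, 3, 0], [0, 1, 1]]
b = [[1, 1, 0], [0, 2, 1], [1, 0, 1]]
print(leibniz_det(a), leibniz_det(b), leibniz_det(matmul(a, b)))  # 7 3 21, i.e. det(AB) = det(A) * det(B)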