SC (complexity)
In computational complexity theory, SC (Steve's Class, named after Stephen Cook) is the complexity class of problems solvable by a deterministic Turing machine in polynomial time (class P) and polylogarithmic space (class PolyL), that is, O((log n)^k) space for some constant k. It may also be called DTISP(poly, polylog), where DTISP stands for "deterministic time and space". Note that the definition of SC differs from P ∩ PolyL: for the former, a single algorithm must run in both polynomial time and polylogarithmic space, while for the latter two separate algorithms suffice, one that runs in polynomial time and another that runs in polylogarithmic space. (It is unknown whether SC and P ∩ PolyL are equivalent.) DCFL, the strict subset of context-free languages recognized by deterministic pushdown automata, is contained in SC, as shown by Cook in 1979. It is open whether all context-free languages can be recognized in SC, although they are ...
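Writing the simultaneous time-space bound with explicit exponents, the definition above can be rendered as a union over the constants:

:\mathsf{SC} = \bigcup_{j,k \ge 1} \mathsf{DTISP}\left(n^{j},\, (\log n)^{k}\right)

The crucial point is that a single machine must meet both bounds at once; taking the two resource classes separately and intersecting them would instead yield P ∩ PolyL.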


Computational Complexity Theory
In theoretical computer science and mathematics, computational complexity theory focuses on classifying computational problems according to their resource usage, and relating these classes to each other. A computational problem is a task solved by a computer; it is solvable by mechanical application of mathematical steps, such as an algorithm. A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition by introducing mathematical models of computation to study these problems and quantifying their computational complexity, i.e., the amount of resources needed to solve them, such as time and storage. Other measures of complexity are also used, such as the amount of communication (used in communication complexity), the number of gates in a circuit (used in circuit complexity) and the number of processors (used in parallel computing). One of the roles of computational ...


St-connectivity
In computer science, st-connectivity or STCON is a decision problem asking, for vertices s and t in a directed graph, if t is reachable from s. Formally, the decision problem is given by PATH = {⟨D, s, t⟩ | D is a directed graph with a path from vertex s to vertex t}. Complexity On a sequential computer, st-connectivity can easily be solved in linear time by either depth-first search or breadth-first search. The interest in this problem in computational complexity concerns its complexity with respect to more limited forms of computation. For instance, the complexity class of problems that can be solved by a non-deterministic Turing machine using only a logarithmic amount of memory is called NL. The st-connectivity problem can be shown to be in NL, as a non-deterministic Turing machine can guess the next node of the path, while the only information which has to be stored is the total length of the path and which node is currently under consideration. The algorithm terminates if either the target node t is reached, or the length of the path so far exceeds n, the number of nodes in the graph ...
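For the sequential case mentioned above, breadth-first search settles reachability in time linear in the size of the graph. A minimal sketch in Python (the adjacency-map representation and the name st_connected are illustrative choices, not from the excerpt):

    from collections import deque

    def st_connected(adj, s, t):
        """Decide whether t is reachable from s in the directed graph adj,
        where adj maps each vertex to an iterable of its out-neighbours.
        Runs in O(V + E) time, but uses linear space, unlike the
        logspace-bounded machines discussed above."""
        seen = {s}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            if u == t:
                return True
            for v in adj.get(u, ()):
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        return False

    # Example: a path 0 -> 1 -> 2 exists, but nothing reaches 0 from 2.
    adj = {0: [1], 1: [2], 2: []}
    print(st_connected(adj, 0, 2))  # True
    print(st_connected(adj, 2, 0))  # False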


Noam Nisan
Noam Nisan (Hebrew: נעם ניסן; born June 20, 1961) is an Israeli computer scientist, a professor of computer science at the Hebrew University of Jerusalem. He is known for his research in computational complexity theory and algorithmic game theory. Biography Nisan did his undergraduate studies at the Hebrew University, graduating in 1984. He went to the University of California, Berkeley for graduate school, and received a Ph.D. in 1988 under the supervision of Richard Karp. After postdoctoral studies at the Massachusetts Institute of Technology he joined the Hebrew University faculty in 1990. (Curriculum vitae, retrieved 2012-03-01.)


Selected publications

Nisan is the author of ...



Probabilistic Turing Machines
In theoretical computer science, a probabilistic Turing machine is a non-deterministic Turing machine that chooses between the available transitions at each point according to some probability distribution. As a consequence, a probabilistic Turing machine can, unlike a deterministic Turing machine, have stochastic results: on a given input and instruction state, it may have different run times, or it may not halt at all; furthermore, it may accept an input in one execution and reject the same input in another execution. In the case of equal probabilities for the transitions, probabilistic Turing machines can be defined as deterministic Turing machines having an additional "write" instruction where the value of the write is uniformly distributed over the Turing machine's alphabet (generally, an equal likelihood of writing a "1" or a "0" on to the tape). Another common reformulation is simply a deterministic Turing machine with an added tape full of random bits ...
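The "random tape" reformulation is easy to mimic in ordinary code: a purely deterministic rule that consults uniformly random bits can accept an input on one run and reject it on another. A toy sketch (the procedure and its error rate are invented for illustration):

    import random

    def ptm_decide(x: int) -> bool:
        """Deterministic rule plus random bits: decide 'is x divisible by 3?'
        but consult two coin flips and corrupt the answer with probability
        1/4, so repeated runs on the same input may disagree."""
        answer = (x % 3 == 0)
        if random.getrandbits(1) and random.getrandbits(1):  # probability 1/4
            answer = not answer
        return answer

    # The same input can be accepted in one execution and rejected in another:
    print({ptm_decide(9) for _ in range(50)})  # typically prints {False, True}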




BPL (complexity)
In computational complexity theory, BPL (Bounded-error Probabilistic Logarithmic-space), sometimes called BPLP (Bounded-error Probabilistic Logarithmic-space Polynomial-time), is the complexity class of problems solvable in logarithmic space and polynomial time with probabilistic Turing machines with two-sided error. It is named in analogy with BPP, which is similar but has no logarithmic space restriction. Error model The probabilistic Turing machines in the definition of BPL may only accept or reject incorrectly less than 1/3 of the time; this is called two-sided error. The constant 1/3 is arbitrary; any x with 0 ≤ x < 1/2 would suffice. This error can be made 2^−p(x) times smaller for any polynomial p(x) without using more than polynomial time or logarithmic space by running the algorithm repeatedly.
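The repetition argument above is a majority vote over independent runs: by a Chernoff bound the error probability falls exponentially in the number of runs, while the extra space is only a counter. A sketch under assumed names (noisy_decide stands in for any two-sided-error procedure; neither name comes from the article):

    import random

    def noisy_decide(n: int) -> bool:
        """Toy two-sided-error procedure: answers 'is n even?' correctly
        with probability 2/3 and gives the wrong answer otherwise."""
        correct = (n % 2 == 0)
        return correct if random.random() < 2 / 3 else not correct

    def amplify(decide, x, runs: int) -> bool:
        """Error reduction by majority vote: run the two-sided-error
        procedure independently `runs` times and return the majority
        answer.  Only a counter of O(log runs) extra bits is needed."""
        accepts = sum(decide(x) for _ in range(runs))
        return 2 * accepts > runs

    # With 101 independent runs the majority answer is wrong only with
    # exponentially small probability.
    print(amplify(noisy_decide, 42, 101))  # True (with overwhelming probability)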


Related classes

Since two-sided error is more general than one-sided error, ...


RL (complexity)
Randomized Logarithmic-space (RL), sometimes called RLP (Randomized Logarithmic-space Polynomial-time), is the complexity class of computational complexity theory containing problems solvable in logarithmic space and polynomial time with probabilistic Turing machines with one-sided error. It is named in analogy with RP, which is similar but has no logarithmic space restriction. Definition The probabilistic Turing machines in the definition of RL never accept incorrectly but are allowed to reject incorrectly less than 1/3 of the time; this is called one-sided error. The constant 1/3 is arbitrary; any x with 0 < x < 1 would suffice. This error can be made 2^−p(x) times smaller for any polynomial p(x) without using more than polynomial time or logarithmic space by running the algorithm repeatedly.
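With one-sided error the amplification is even simpler than the majority vote used for BPL: a machine of this kind never accepts a "no" instance, so a single accepting run is conclusive, and it suffices to accept iff any repetition accepts. A sketch with invented stand-in names:

    import random

    def one_sided_decide(n: int) -> bool:
        """Toy one-sided-error procedure: never accepts odd n, and
        accepts even n with probability 2/3."""
        return n % 2 == 0 and random.random() < 2 / 3

    def amplify_one_sided(decide, x, runs: int) -> bool:
        """Accept iff some run accepts.  False rejections drop from 1/3
        to (1/3)**runs; false acceptances remain impossible."""
        return any(decide(x) for _ in range(runs))

    print(amplify_one_sided(one_sided_decide, 42, 60))  # True with prob. 1 - (1/3)**60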


Relation to other complexity classes

Sometimes the name RL is reserved for the class of problems solvable by logarithmic-space probabilistic ...


NL (complexity)
In computational complexity theory, NL (Nondeterministic Logarithmic-space) is the complexity class containing decision problems that can be solved by a nondeterministic Turing machine using a logarithmic amount of memory space. NL is a generalization of L, the class for logspace problems on a deterministic Turing machine. Since any deterministic Turing machine is also a nondeterministic Turing machine, we have that L is contained in NL. NL can be formally defined in terms of the computational resource nondeterministic space (or NSPACE) as NL = NSPACE(log ''n''). Important results in complexity theory allow us to relate this complexity class with other classes, telling us about the relative power of the resources involved. Results in the field of algorithms, on the other hand, tell us which problems can be solved with this resource. Like much of complexity theory, many important questions about NL are still open (see Unsolved problems in computer science). Occasionally NL ...


Savitch's Theorem
In computational complexity theory, Savitch's theorem, proved by Walter Savitch in 1970, gives a relationship between deterministic and non-deterministic space complexity. It states that for any function f \in \Omega(\log n),

:\mathsf{NSPACE}\left(f(n)\right) \subseteq \mathsf{DSPACE}\left(f(n)^2\right).

In other words, if a nondeterministic Turing machine can solve a problem using f(n) space, a deterministic Turing machine can solve the same problem in the square of that space bound. (Arora & Barak (2009), p. 86.) Although it seems that nondeterminism may produce exponential gains in time (as formalized in the unproven exponential time hypothesis), Savitch's theorem shows that it has a markedly more limited effect on space requirements ...
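The proof idea behind the theorem is a recursive midpoint search: a path of length at most 2^k from u to v exists iff some midpoint w is reachable from u and reaches v, each within 2^(k-1) steps. Transcribed directly into Python (a space-efficient but quasi-polynomial-time sketch; names are illustrative):

    import math

    def reachable(adj, u, v, k):
        """Is there a path from u to v of length at most 2**k in the
        directed graph adj?  The recursion depth is k, and each stack
        frame stores O(1) vertex names of O(log n) bits each, which is
        the source of the O(log^2 n) space bound in Savitch's theorem."""
        if k == 0:
            return u == v or v in adj[u]
        # Deterministically try every possible midpoint w.
        return any(reachable(adj, u, w, k - 1) and reachable(adj, w, v, k - 1)
                   for w in adj)

    # Example: the path 0 -> 1 -> 2 -> 3 has length 3 <= 2**2.
    adj = {0: {1}, 1: {2}, 2: {3}, 3: set()}
    k = math.ceil(math.log2(len(adj)))  # paths of length < n suffice
    print(reachable(adj, 0, 3, k))  # True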




Deterministic Pushdown Automaton
In automata theory, a deterministic pushdown automaton (DPDA or DPA) is a variation of the pushdown automaton. The class of deterministic pushdown automata accepts the deterministic context-free languages, a proper subset of context-free languages. Machine transitions are based on the current state and input symbol, and also the current topmost symbol of the stack. Symbols lower in the stack are not visible and have no immediate effect. Machine actions include pushing, popping, or replacing the stack top. A deterministic pushdown automaton has at most one legal transition for the same combination of input symbol, state, and top stack symbol. This is where it differs from the nondeterministic pushdown automaton. Formal definition A (not necessarily deterministic) PDA M can be defined as a 7-tuple:

:M = (Q, \Sigma, \Gamma, q_0, Z_0, A, \delta)

where
*Q is a finite set of states
*\Sigma is a finite set of input symbols
*\Gamma is a finite set of stack symbols
*q_0 \in Q is the start state
*Z_0 \in \Gamma is the initial stack symbol
*A \subseteq Q is the set of accepting states
*\delta is the transition function ...
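As a concrete illustration, the canonical deterministic context-free language {a^n b^n : n ≥ 1} can be recognized with one stack and no guessing. The transcription below is a sketch of the automaton's behaviour, not a formal 7-tuple:

    def accepts_anbn(w: str) -> bool:
        """Deterministic pushdown recognition of {a^n b^n : n >= 1}:
        push a marker per 'a', pop one per 'b', and accept iff the
        stack empties exactly when the input ends.  Every (state,
        input symbol, stack top) combination has at most one move."""
        stack = []
        state = "reading_a"
        for c in w:
            if state == "reading_a":
                if c == "a":
                    stack.append("A")    # push a marker
                elif c == "b" and stack:
                    stack.pop()          # first 'b': switch phase
                    state = "reading_b"
                else:
                    return False
            else:  # state == "reading_b"
                if c == "b" and stack:
                    stack.pop()
                else:
                    return False
        return state == "reading_b" and not stack

    print([w for w in ("ab", "aabb", "abb", "aab", "ba") if accepts_anbn(w)])
    # ['ab', 'aabb']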


Stephen Cook
Stephen Arthur Cook (born December 14, 1939) is an American-Canadian computer scientist and mathematician who has made significant contributions to the fields of complexity theory and proof complexity. He is a university professor at the University of Toronto, Department of Computer Science and Department of Mathematics. Biography Cook received his bachelor's degree in 1961 from the University of Michigan, and his master's degree and PhD from Harvard University, respectively in 1962 and 1966, from the Mathematics Department. He joined the University of California, Berkeley, mathematics department in 1966 as an assistant professor, and stayed there until 1970 when he was denied reappointment. In a speech celebrating the 30th anniversary of the Berkeley electrical engineering and computer sciences department, fellow Turing Award winner and Berkeley professor Richard Karp said that, "It is to our everlasting shame that we were unable to persuade the math department to give him tenure." ...


Context-free Language
In formal language theory, a context-free language (CFL) is a language generated by a context-free grammar (CFG). Context-free languages have many applications in programming languages; in particular, most arithmetic expressions are generated by context-free grammars. Background Context-free grammar Different context-free grammars can generate the same context-free language. Intrinsic properties of the language can be distinguished from extrinsic properties of a particular grammar by comparing multiple grammars that describe the language. Automata The set of all context-free languages is identical to the set of languages accepted by pushdown automata, which makes these languages amenable to parsing. Further, for a given CFG, there is a direct way to produce a pushdown automaton for the grammar (and thereby the corresponding language), though going the other way (producing a grammar given an automaton) is not as direct. Examples An example context-free language is L = \{a^n b^n : n \ge 1\}, the language of all non-empty even-length strings whose first half consists of a's and whose second half consists of b's; it is generated, for instance, by the grammar with productions S → aSb and S → ab ...


Deterministic Context-free Language
In formal language theory, deterministic context-free languages (DCFL) are a proper subset of context-free languages. They are the context-free languages that can be accepted by a deterministic pushdown automaton. DCFLs are always unambiguous, meaning that they admit an unambiguous grammar. There are non-deterministic unambiguous CFLs, so DCFLs form a proper subset of unambiguous CFLs. DCFLs are of great practical interest, as they can be parsed in linear time, and various restricted forms of DCFGs admit simple practical parsers. They are thus widely used throughout computer science. Description The notion of the DCFL is closely related to the deterministic pushdown automaton (DPDA). It captures how much the language power of pushdown automata shrinks if we make them deterministic: the pushdown automata become unable to choose between different state-transition alternatives and as a consequence cannot recognize all context-free languages. Unambiguous grammars do not always generate ...