Alternating Turing Machine
In computational complexity theory, an alternating Turing machine (ATM) is a non-deterministic Turing machine (NTM) with a rule for accepting computations that generalizes the rules used in the definition of the complexity classes NP and co-NP. The concept of an ATM was set forth by Ashok K. Chandra and Larry Stockmeyer, and independently by Dexter Kozen, in 1976, with a joint journal publication in 1981.

Definitions

Informal description

The definition of NP uses the ''existential mode'' of computation: if ''any'' choice leads to an accepting state, then the whole computation accepts. The definition of co-NP uses the ''universal mode'' of computation: only if ''all'' choices lead to an accepting state does the whole computation accept. An alternating Turing machine (or to be more precise, the definition of acceptance for such a machine) alternates between these modes. An alternating Turing machine is a non-deterministic Turing ma ...
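As a small illustration of the acceptance rule just described (a sketch of ours, using a toy explicit computation tree rather than an actual Turing machine), the existential and universal modes can be evaluated recursively:

    # Toy model: a computation tree whose internal nodes are existential or
    # universal branch points and whose leaves are halting configurations.
    def accepts(node):
        kind, payload = node
        if kind == "leaf":
            return payload                      # True = accepting configuration
        results = [accepts(child) for child in payload]
        if kind == "exists":                    # existential mode: some choice accepts
            return any(results)
        if kind == "forall":                    # universal mode: all choices accept
            return all(results)
        raise ValueError(f"unknown node kind: {kind}")

    tree = ("exists", [
        ("forall", [("leaf", True), ("leaf", False)]),  # fails: not all branches accept
        ("forall", [("leaf", True), ("leaf", True)]),   # succeeds: every branch accepts
    ])
    print(accepts(tree))  # True: one existential choice leads to universal acceptance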


LH (complexity)
In computational complexity, the logarithmic time hierarchy (LH) is the complexity class of all computational problems solvable in a logarithmic amount of computation time on an alternating Turing machine with a bounded number of alternations. It is a particular case of a bounded alternating Turing machine hierarchy. It is equal to FO and to FO-uniform AC0. The ith level of the logarithmic time hierarchy is the set of languages recognised by alternating Turing machines in logarithmic time with random access and i-1 alternations, beginning with an existential state. LH is the union of all levels.
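Writing \Sigma_i^{\mathrm{LH}} for the ith level just described (the notation is ours, chosen only for this sketch), the statement that LH is the union of all levels reads:
:\mathrm{LH} = \bigcup_{i \ge 1} \Sigma_i^{\mathrm{LH}}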


P (complexity)
In computational complexity theory, P, also known as PTIME or DTIME(''n''^O(1)), is a fundamental complexity class. It contains all decision problems that can be solved by a deterministic Turing machine using a polynomial amount of computation time, or polynomial time. Cobham's thesis holds that P is the class of computational problems that are "efficiently solvable" or "tractable". This is inexact: in practice, some problems not known to be in P have practical solutions, and some that are in P do not, but this is a useful rule of thumb.

Definition

A language ''L'' is in P if and only if there exists a deterministic Turing machine ''M'', such that
* ''M'' runs for polynomial time on all inputs
* For all ''x'' in ''L'', ''M'' outputs 1
* For all ''x'' not in ''L'', ''M'' outputs 0

P can also be viewed as a uniform family of Boolean circuits. A language ''L'' is in P if and only if there exists a polynomial-time uniform family of Boolean circuits \{C_n : n \in \mathbb{N}\}, such that
* For all n \in \mathbb{N} ...
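As a concrete illustration (an example of ours, not part of the excerpt above): deciding whether a finite undirected graph is connected lies in P, since breadth-first search decides it in time polynomial in the input size.

    from collections import deque

    def is_connected(n, edges):
        """Decide connectivity of an undirected graph on vertices 0..n-1.

        Runs in O(n + len(edges)) time, i.e. polynomial in the input size,
        so the underlying decision problem lies in P.
        """
        if n == 0:
            return True
        adj = [[] for _ in range(n)]
        for u, v in edges:
            adj[u].append(v)
            adj[v].append(u)
        seen = {0}
        queue = deque([0])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        return len(seen) == n

    print(is_connected(4, [(0, 1), (1, 2), (2, 3)]))  # True
    print(is_connected(4, [(0, 1), (2, 3)]))          # False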


Polynomial Hierarchy
In computational complexity theory, the polynomial hierarchy (sometimes called the polynomial-time hierarchy) is a hierarchy of complexity classes that generalize the classes NP and co-NP. Each class in the hierarchy is contained within PSPACE. The hierarchy can be defined using oracle machines or alternating Turing machines. It is a resource-bounded counterpart to the arithmetical hierarchy and analytical hierarchy from mathematical logic. The union of the classes in the hierarchy is denoted PH. Classes within the hierarchy have complete problems (with respect to polynomial-time reductions) which ask if quantified Boolean formulae hold, for formulae with restrictions on the quantifier order. It is known that equality between classes on the same level or consecutive levels in the hierarchy would imply a "collapse" of the hierarchy to that level.

Definitions

There are multiple equivalent definitions of the classes of the polynomial hierarchy.

Oracle definition

For the oracle ...
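For reference, the oracle definition is usually written as follows (standard notation, summarized here rather than quoted from the truncated excerpt):
:\Delta_0^{\mathsf{p}} = \Sigma_0^{\mathsf{p}} = \Pi_0^{\mathsf{p}} = \mathsf{P}
:\Sigma_{k+1}^{\mathsf{p}} = \mathsf{NP}^{\Sigma_k^{\mathsf{p}}}, \qquad \Pi_{k+1}^{\mathsf{p}} = \mathsf{coNP}^{\Sigma_k^{\mathsf{p}}}, \qquad \Delta_{k+1}^{\mathsf{p}} = \mathsf{P}^{\Sigma_k^{\mathsf{p}}}
:\mathsf{PH} = \bigcup_{k \ge 0} \Sigma_k^{\mathsf{p}}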




Springer-Verlag
Springer Science+Business Media, commonly known as Springer, is a German multinational publishing company of books, e-books and peer-reviewed journals in science, humanities, technical and medical (STM) publishing. Originally founded in 1842 in Berlin, it expanded internationally in the 1960s, and through mergers in the 1990s and a sale to venture capitalists it fused with Wolters Kluwer and eventually became part of Springer Nature in 2015. Springer has major offices in Berlin, Heidelberg, Dordrecht, and New York City.

History

Julius Springer founded Springer-Verlag in Berlin in 1842 and his son Ferdinand Springer grew it from a small firm of 4 employees into Germany's then second largest academic publisher with 65 staff in 1872 ("Chronology", Springer Science+Business Media). In 1964, Springer expanded its business international ...


Space Constructible
In complexity theory, a time-constructible function is a function ''f'' from natural numbers to natural numbers with the property that ''f''(''n'') can be constructed from ''n'' by a Turing machine in the time of order ''f''(''n''). The purpose of such a definition is to exclude functions that do not provide an upper bound on the runtime of some Turing machine.

Time-constructible definitions

There are two different definitions of a time-constructible function. In the first definition, a function ''f'' is called time-constructible if there exists a positive integer ''n''0 and a Turing machine ''M'' which, given a string 1''n'' consisting of ''n'' ones, stops after exactly ''f''(''n'') steps for all ''n'' ≥ ''n''0. In the second definition, a function ''f'' is called time-constructible if there exists a Turing machine ''M'' which, given a string 1''n'', outputs the binary representation of ''f''(''n'') in ''O''(''f''(''n'')) time (a unary representation may be used instead, since the ...
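Symbolically, the two definitions above can be summarized as follows (a compact restatement of ours, writing M(1^n) for M run on the string of n ones):
:\exists\, n_0 \in \mathbb{N},\ \exists\, M:\ \forall n \ge n_0,\ M(1^n)\ \text{halts after exactly}\ f(n)\ \text{steps}
:\exists\, M:\ \text{on input } 1^n,\ M\ \text{outputs the binary representation of}\ f(n)\ \text{within}\ O(f(n))\ \text{steps}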


SIAM Journal On Computing
The ''SIAM Journal on Computing'' is a scientific journal focusing on the mathematical and formal aspects of computer science. It is published by the Society for Industrial and Applied Mathematics (SIAM). Although its official ISO abbreviation is ''SIAM J. Comput.'', its publisher and contributors frequently use the shorter abbreviation ''SICOMP''. SICOMP typically hosts the special issues of the IEEE Annual Symposium on Foundations of Computer Science (FOCS) and the Annual ACM Symposium on Theory of Computing (STOC), where about 15% of papers published in FOCS and STOC each year are invited to these special issues. For example, Volume 48 contains 11 out of 85 papers published in FOCS 2016.


Immerman–Szelepcsényi Theorem
In computational complexity theory, the Immerman–Szelepcsényi theorem states that nondeterministic space complexity classes are closed under complementation. It was proven independently by Neil Immerman and Róbert Szelepcsényi in 1987, for which they shared the 1995 Gödel Prize. In its general form the theorem states that NSPACE(''s''(''n'')) = co-NSPACE(''s''(''n'')) for any function ''s''(''n'') ≥ log ''n''. The result is equivalently stated as NL = co-NL; although this is the special case when ''s''(''n'') = log ''n'', it implies the general theorem by a standard padding argument. The result solved the second LBA problem. In other words, if a nondeterministic machine can solve a problem, another machine with the same resource bounds can solve its complement problem (with the ''yes'' and ''no'' answers reversed) in the same asymptotic amount of space. No similar result is known for the time complexity classes, and indeed it is conjectured that NP is not equal t ...
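The "standard padding argument" mentioned above can be sketched as follows (a sketch of ours, assuming ''s'' is space-constructible): for a language ''L'' in NSPACE(''s''(''n'')), consider the padded language
:L_{\mathrm{pad}} = \{\, x\,\#^{\,2^{s(|x|)}} \mid x \in L \,\}
On a padded input, the original machine's space bound ''s''(|''x''|) is logarithmic in the new input length, so ''L''pad lies in NL; by NL = co-NL its complement is also in NL, and stripping the padding places the complement of ''L'' back in NSPACE(''s''(''n'')).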


Boolean Function
In mathematics, a Boolean function is a function whose arguments and result assume values from a two-element set (usually \{0,1\} or \{\text{true},\text{false}\}). Alternative names are switching function, used especially in older computer science literature, and truth function (or logical function), used in logic. Boolean functions are the subject of Boolean algebra and switching theory. A Boolean function takes the form f:\{0,1\}^k \to \{0,1\}, where \{0,1\} is known as the Boolean domain and k is a non-negative integer called the arity of the function. In the case where k=0, the function is a constant element of \{0,1\}. A Boolean function with multiple outputs, f:\{0,1\}^k \to \{0,1\}^m with m>1, is a ''vectorial'' or ''vector-valued'' Boolean function (an S-box in symmetric cryptography). There are 2^{2^k} different Boolean functions with k arguments, equal to the number of different truth tables with 2^k entries. Every k-ary Boolean function can be expressed as a propositional formula in k variables x_1,...,x_k, and two propositional formul ...
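A small illustration (ours, not from the excerpt): enumerating Boolean functions of arity k as truth tables makes the count 2^{2^k} concrete.

    from itertools import product

    def all_boolean_functions(k):
        """Enumerate every Boolean function of arity k as a truth table.

        A truth table assigns an output in {0, 1} to each of the 2**k
        possible inputs, so there are 2**(2**k) functions in total.
        """
        inputs = list(product((0, 1), repeat=k))          # the 2**k rows
        for outputs in product((0, 1), repeat=len(inputs)):
            yield dict(zip(inputs, outputs))              # one function f

    k = 2
    functions = list(all_boolean_functions(k))
    print(len(functions))   # 16 == 2 ** (2 ** 2)
    # One of them is AND: it maps (1, 1) to 1 and every other input to 0.
    print(any(f == {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1} for f in functions))  # True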


Circuit Minimization Problem
Logic optimization is a process of finding an equivalent representation of the specified logic circuit under one or more specified constraints. This process is a part of logic synthesis applied in digital electronics and integrated circuit design. Generally, the circuit is constrained to a minimum chip area meeting a predefined response delay. The goal of logic optimization of a given circuit is to obtain the smallest logic circuit that evaluates to the same values as the original one. The smaller circuit with the same function is cheaper, takes less space, consumes less power, has shorter latency, and minimizes risks of unexpected cross-talk, hazards of delayed signal processing, and other issues present at the nano-scale level of metallic structures on an integrated circuit. In terms of Boolean algebra, the optimization of a complex Boolean expression is a process of finding a simpler one, which would upon evaluation ultimately produce the same results as the original o ...
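As a toy illustration of "evaluates to the same values as the original one" (the expressions here are hypothetical examples of ours, not taken from the article): the circuit (a AND b) OR (a AND NOT b) computes the same function as the single wire a, which an exhaustive truth-table check confirms.

    from itertools import product

    def equivalent(f, g, arity):
        """Check that two Boolean circuits compute the same function
        by comparing their outputs on every input assignment."""
        return all(bool(f(*bits)) == bool(g(*bits)) for bits in product((0, 1), repeat=arity))

    original  = lambda a, b: (a and b) or (a and not b)   # two levels of gates
    optimized = lambda a, b: a                            # just a wire

    print(equivalent(original, optimized, 2))  # True: same function, smaller circuit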




Parallel Computation Thesis
In computational complexity theory, the parallel computation thesis is a hypothesis which states that the ''time'' used by a (reasonable) parallel machine is polynomially related to the ''space'' used by a sequential machine. The parallel computation thesis was set forth by Chandra and Stockmeyer in 1976. In other words, for a computational model which allows computations to branch and run in parallel without bound, a formal language which is decidable under the model using no more than t(n) steps for inputs of length ''n'' is decidable by a non-branching machine using no more than t(n)^k units of storage for some constant ''k''. Similarly, if a machine in the unbranching model decides a language using no more than s(n) storage, a machine in the parallel model can decide the language in no more than s(n)^k steps for some constant ''k''. The parallel computation thesis is not a rigorous formal statement, as it does not clearly define what constitutes an acceptable parallel model. ...
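In symbols, writing PAR-TIME for time on the (unspecified) parallel model and SPACE for sequential space (both notations ours, for this sketch only), the two directions above read, for some constant k in each direction:
:\mathrm{PAR\text{-}TIME}(t(n)) \subseteq \mathrm{SPACE}\!\left(t(n)^{k}\right) \qquad \text{and} \qquad \mathrm{SPACE}(s(n)) \subseteq \mathrm{PAR\text{-}TIME}\!\left(s(n)^{k}\right)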


EXPSPACE
In computational complexity theory, EXPSPACE is the set of all decision problems solvable by a deterministic Turing machine in exponential space, i.e., in O(2^{p(n)}) space, where p(n) is a polynomial function of n. Some authors restrict p(n) to be a linear function, but most authors instead call the resulting class ESPACE. If we use a nondeterministic machine instead, we get the class NEXPSPACE, which is equal to EXPSPACE by Savitch's theorem. A decision problem is EXPSPACE-complete if it is in EXPSPACE, and every problem in EXPSPACE has a polynomial-time many-one reduction to it. In other words, there is a polynomial-time algorithm that transforms instances of one to instances of the other with the same answer. EXPSPACE-complete problems might be thought of as the hardest problems in EXPSPACE. EXPSPACE is a strict superset of PSPACE, NP, and P and is believed to be a strict superset of EXPTIME.

Formal definition

In terms of DSPACE and NSPACE,
:\mathsf{EXPSPACE} = \bigcup_{k \in \mathbb{N}} \mathsf{DSPACE}\left(2^{n^k}\right) = \bigcup_{k \in \mathbb{N}} \mathsf{NSPACE}\left(2^{n^k}\right)

Examples of problems

An example of an EXPSPACE-complete problem is the problem of recognizing wh ...


EXPTIME
In computational complexity theory, the complexity class EXPTIME (sometimes called EXP or DEXPTIME) is the set of all decision problems that are solvable by a deterministic Turing machine in exponential time, i.e., in O(2^{''p''(''n'')}) time, where ''p''(''n'') is a polynomial function of ''n''. EXPTIME is one intuitive class in an exponential hierarchy of complexity classes with increasingly more complex oracles or quantifier alternations. For example, the class 2-EXPTIME is defined similarly to EXPTIME but with a doubly exponential time bound. This can be generalized to higher and higher time bounds. EXPTIME can also be reformulated as the space class APSPACE, the set of all problems that can be solved by an alternating Turing machine in polynomial space. EXPTIME relates to the other basic time and space complexity classes in the following way: P ⊆ NP ⊆ PSPACE ⊆ EXPTIME ⊆ NEXPTIME ⊆ EXPSPACE. Furthermore, by the time hierarchy theorem and the space hierarc ...
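For comparison with the formal definition of EXPSPACE above, EXPTIME can likewise be written as a union of deterministic time classes (standard notation):
:\mathsf{EXPTIME} = \bigcup_{k \in \mathbb{N}} \mathsf{DTIME}\left(2^{n^k}\right)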