2-EXPTIME
In computational complexity theory, the complexity class 2-EXPTIME (sometimes called 2-EXP) is the set of all decision problems solvable by a deterministic Turing machine in O(2^{2^{p(n)}}) time, where p(n) is a polynomial function of n. In terms of DTIME,

: \mathsf{2\text{-}EXPTIME} = \bigcup_{p \in \mathrm{poly}} \mathsf{DTIME}\left( 2^{2^{p(n)}} \right).

We know

: P ⊆ NP ⊆ PSPACE ⊆ EXPTIME ⊆ NEXPTIME ⊆ EXPSPACE ⊆ 2-EXPTIME ⊆ ELEMENTARY.

2-EXPTIME can also be reformulated as the space class AEXPSPACE, the problems that can be solved by an alternating Turing machine in exponential space. This is one way to see that EXPSPACE ⊆ 2-EXPTIME, since an alternating Turing machine is at least as powerful as a deterministic Turing machine. 2-EXPTIME is one class in a hierarchy of complexity classes with increasingly higher time bounds. The class 3-EXPTIME is defined similarly to 2-EXPTIME but with a triply exponential time bound 2^{2^{2^{p(n)}}}. This can be generalized to higher and higher time bounds.

Examples ...
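The pattern behind 3-EXPTIME and its successors can be stated uniformly; the following display is our rendering (it requires amsmath for \underbrace) and also records how ELEMENTARY collects all of these classes.

\[
  k\text{-}\mathsf{EXPTIME}
    \;=\; \bigcup_{p \in \mathrm{poly}}
          \mathsf{DTIME}\Bigl( \underbrace{2^{2^{\cdot^{\cdot^{2^{p(n)}}}}}}_{k \text{ twos}} \Bigr),
  \qquad
  \mathsf{ELEMENTARY} \;=\; \bigcup_{k \ge 1} k\text{-}\mathsf{EXPTIME}.
\]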


Double Exponential Function
A double exponential function is a constant raised to the power of an exponential function. The general formula is f(x) = a^{b^x} (where a > 1 and b > 1), which grows much more quickly than an exponential function. For example, if a = b = 10:
* f(x) = 10^{10^x}
* f(0) = 10
* f(1) = 10^{10}
* f(2) = 10^{100} = googol
* f(3) = 10^{1000}
* f(100) = 10^{10^{100}} = googolplex.

Factorials grow faster than exponential functions, but much more slowly than doubly exponential functions. However, tetration and the Ackermann function grow faster. See Big O notation for a comparison of the rate of growth of various functions. The inverse of the double exponential function is the double logarithm log(log(x)).

Doubly exponential sequences
A sequence of positive integers (or real numbers) is said to have ''doubly exponential rate of growth'' if the function giving the nth term of the sequence is bounded above and below by doubly exponential functions of n. Examples include * The ...
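As a quick illustration of how fast these values grow, here is a small Python sketch (the function and variable names are ours) that evaluates f(x) = a^{b^x} for a = b = 10 at small arguments; f(100), a googolplex, has far too many digits to materialize, so the sketch only reports its size.

# Minimal sketch: double exponential growth for a = b = 10.
def double_exp(x, a=10, b=10):
    """Return a**(b**x); only feasible for small x."""
    return a ** (b ** x)

for x in range(4):
    value = double_exp(x)
    digits = len(str(value))
    print(f"f({x}) = 10^(10^{x}) has {digits} digit(s)")
# f(0) has 2 digits (10), f(1) has 11, f(2) has 101 (a googol),
# f(3) has 1001; f(100) would be a googolplex, with 10^100 + 1 digits.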


EXPTIME
In computational complexity theory, the complexity class EXPTIME (sometimes called EXP or DEXPTIME) is the set of all decision problems that are solvable by a deterministic Turing machine in exponential time, i.e., in O(2^{p(n)}) time, where p(n) is a polynomial function of n. EXPTIME is one intuitive class in an exponential hierarchy of complexity classes with increasingly more complex oracles or quantifier alternations. For example, the class 2-EXPTIME is defined similarly to EXPTIME but with a doubly exponential time bound. This can be generalized to higher and higher time bounds. EXPTIME can also be reformulated as the space class APSPACE, the set of all problems that can be solved by an alternating Turing machine in polynomial space.

EXPTIME relates to the other basic time and space complexity classes in the following way:

: P ⊆ NP ⊆ PSPACE ⊆ EXPTIME ⊆ NEXPTIME ⊆ EXPSPACE.

Furthermore, by the time hierarchy theorem and the space hierarchy the ...
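In the same DTIME notation used in the 2-EXPTIME entry above, and with the strict separations that the time and space hierarchy theorems are known to give, this reads:

\[
  \mathsf{EXPTIME} = \bigcup_{p \in \mathrm{poly}} \mathsf{DTIME}\bigl(2^{p(n)}\bigr),
  \qquad
  \mathsf{P} \subsetneq \mathsf{EXPTIME}, \quad
  \mathsf{NP} \subsetneq \mathsf{NEXPTIME}, \quad
  \mathsf{PSPACE} \subsetneq \mathsf{EXPSPACE}.
\]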


Presburger Arithmetic
Presburger arithmetic is the first-order theory of the natural numbers with addition, named in honor of Mojżesz Presburger, who introduced it in 1929. The signature of Presburger arithmetic contains only the addition operation and equality, omitting the multiplication operation entirely. The axioms include a schema of induction. Presburger arithmetic is much weaker than Peano arithmetic, which includes both addition and multiplication operations. Unlike Peano arithmetic, Presburger arithmetic is a decidable theory. This means it is possible to algorithmically determine, for any sentence in the language of Presburger arithmetic, whether that sentence is provable from the axioms of Presburger arithmetic. The asymptotic running-time computational complexity of this algorithm is at least doubly exponential, however, as shown by Fischer and Rabin (1974).

Overview
The language of Presburger arithmetic contains constants 0 and 1 and a binary function +, interpreted as addition. In this language, the axioms ...
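The entry breaks off as it begins to list the axioms. One standard axiomatization (universal closures understood, with induction as an axiom schema ranging over first-order formulas P(x) of the language) is:

\begin{align*}
  &\lnot\,(0 = x + 1)\\
  &x + 1 = y + 1 \;\rightarrow\; x = y\\
  &x + 0 = x\\
  &x + (y + 1) = (x + y) + 1\\
  &\bigl(P(0) \land \forall x\,(P(x) \rightarrow P(x+1))\bigr) \;\rightarrow\; \forall y\,P(y)
\end{align*}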




Partially Observable System
A partially observable system is one in which the entire state of the system is not fully visible to an external sensor. In a partially observable system the observer may utilise a memory system in order to add information to the observer's understanding of the system (Peter Norvig and Sebastian Thrun, ''Introduction to Artificial Intelligence'', Udacity). An example of a partially observable system would be a card game in which some of the cards are discarded into a pile face down. In this case the observer is only able to view their own cards and potentially those of the dealer. They are not able to view the face-down (used) cards, nor the cards that will be dealt at some stage in the future. A memory system can be used to remember the previously dealt cards that are now on the used pile. This adds to the total sum of knowledge that the observer can use to make decisions. In contrast, a fully observable system would be that of chess. In chess (apart from the 'who is moving next' st ...
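To make the 'memory system' idea concrete, here is a small Python sketch; the names (CardMemory, observe, possible_remaining) are illustrative only and not drawn from any particular library. It tracks which cards have been seen, so the observer can reason about what may still be face down or yet to be dealt.

# Minimal sketch of a memory for a partially observable card game.
from itertools import product

RANKS = "23456789TJQKA"
SUITS = "cdhs"
FULL_DECK = {r + s for r, s in product(RANKS, SUITS)}

class CardMemory:
    def __init__(self):
        self.seen = set()          # cards the observer has ever observed

    def observe(self, card):
        """Record a card as it is revealed (own hand, dealer up-card, etc.)."""
        self.seen.add(card)

    def possible_remaining(self):
        """Cards that could still be dealt or lie face down on the pile."""
        return FULL_DECK - self.seen

memory = CardMemory()
for card in ["Ah", "Kd", "7c"]:
    memory.observe(card)
print(len(memory.possible_remaining()))   # 49 of 52 cards remain unaccounted for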


Winning Strategy
Determinacy is a subfield of set theory, a branch of mathematics, that examines the conditions under which one or the other player of a game has a winning strategy, and the consequences of the existence of such strategies. Alternatively and similarly, "determinacy" is the property of a game whereby such a strategy exists. Determinacy was introduced by Gale and Stewart in 1950, under the name "determinateness". The games studied in set theory are usually Gale–Stewart games: two-player games of perfect information in which the players make an infinite sequence of moves and there are no draws. The field of game theory studies more general kinds of games, including games with draws such as tic-tac-toe, chess, or infinite chess, or games with imperfect information such as poker.

Basic notions
Games
The first sort of game we shall consider is the two-player game of perfect information of length ω, in which the players play natural numbers. These games are often called Ga ...
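To make 'winning strategy' precise for the length-ω games described here, one common formulation (notation ours) is the following. The payoff set and a strategy for player I, who moves at the even stages, are

\[
  A \subseteq \mathbb{N}^{\omega}, \qquad
  \sigma \colon \{\, s \in \mathbb{N}^{<\omega} : |s| \text{ is even} \,\} \to \mathbb{N},
\]
\[
  \sigma \text{ is a winning strategy for player I} \iff
  \text{every play consistent with } \sigma \text{ belongs to } A,
\]

and the game is determined if player I or player II has a winning strategy.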


Regular Expression
A regular expression (shortened as regex or regexp; sometimes referred to as rational expression) is a sequence of characters that specifies a search pattern in text. Usually such patterns are used by string-searching algorithms for "find" or "find and replace" operations on strings, or for input validation. Regular expression techniques are developed in theoretical computer science and formal language theory. The concept of regular expressions began in the 1950s, when the American mathematician Stephen Cole Kleene formalized the concept of a regular language. They came into common use with Unix text-processing utilities. Different syntaxes for writing regular expressions have existed since the 1980s, one being the POSIX standard and another, widely used, being the Perl syntax. Regular expressions are used in search engines, in search and replace dialogs of word processors and text editors, in text processing utilities such as sed and AWK, and in lexical analysis. Most gener ...
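As a concrete illustration of the 'find', 'find and replace', and input-validation uses mentioned above, here is a short Python sketch using the standard re module; the pattern is deliberately simplified and the sample strings are our own.

import re

text = "Contact us at support@example.com or sales@example.org."

# "find": extract every email-like token (simplified pattern).
pattern = r"[\w.+-]+@[\w-]+\.[A-Za-z]{2,}"
print(re.findall(pattern, text))   # ['support@example.com', 'sales@example.org']

# "find and replace": redact the addresses.
print(re.sub(pattern, "<redacted>", text))

# input validation: accept only strings of 4 to 8 ASCII digits.
def is_valid_pin(s):
    return re.fullmatch(r"\d{4,8}", s) is not None

print(is_valid_pin("123456"), is_valid_pin("12ab"))   # True False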


Complement (set Theory)
In set theory, the complement of a set A, often denoted by A^∁ (or A′), is the set of elements not in A. When all sets in the universe, i.e. all sets under consideration, are considered to be members of a given set U, the absolute complement of A is the set of elements in U that are not in A. The relative complement of A with respect to a set B, also termed the set difference of B and A, written B \setminus A, is the set of elements in B that are not in A.

Absolute complement
Definition
If A is a set, then the absolute complement of A (or simply the complement of A) is the set of elements not in A (within a larger set that is implicitly defined). In other words, let U be a set that contains all the elements under study; if there is no need to mention U, either because it has been previously specified, or it is obvious and unique, then the absolute complement of A is the relative complement of A in U: A^\complement = U \setminus A. Or, formally: A^\complement = \{x \in U : x \notin A\}. The absolute complement of A is u ...
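Both notions map directly onto set difference in Python; a minimal sketch, with a small illustrative universe U:

# Absolute and relative complement via Python's built-in set difference.
U = set(range(10))          # a small universe: {0, 1, ..., 9}
A = {1, 2, 3, 4}
B = {3, 4, 5, 6}

absolute_complement_of_A = U - A     # U \ A = {0, 5, 6, 7, 8, 9}
relative_complement = B - A          # B \ A = {5, 6}

print(absolute_complement_of_A)
print(relative_complement)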


Cylindrical Algebraic Decomposition
In mathematics, cylindrical algebraic decomposition (CAD) is a notion, and an algorithm to compute it, that are fundamental for computer algebra and real algebraic geometry. Given a set ''S'' of polynomials in R^''n'', a cylindrical algebraic decomposition is a decomposition of R^''n'' into connected semialgebraic sets called ''cells'', on which each polynomial has constant sign, either +, − or 0. To be ''cylindrical'', this decomposition must satisfy the following condition: if 1 ≤ ''k'' < ''n'' and ''π'' is the projection from R^''n'' onto R^(''n''−''k'') obtained by removing the last ''k'' coordinates, then for every pair of cells ''c'' and ''d'', one has either ''π''(''c'') = ''π''(''d'') or ''π''(''c'') ∩ ''π''(''d'') = ∅. This implies that the images by ''π'' of the cells define a cylindrical decomposition of R^(''n''−''k''). The notion was introduced by ...
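A standard small example may help (our rendering): a cylindrical algebraic decomposition of R^2 adapted to the single polynomial x^2 + y^2 − 1, whose zero set is the unit circle.

% Base decomposition of R^1 (the x-axis), induced by x = -1 and x = 1:
\[
  (-\infty,-1),\quad \{-1\},\quad (-1,1),\quad \{1\},\quad (1,\infty).
\]
% Cells of R^2 above each base cell (13 cells in total), on each of which
% x^2 + y^2 - 1 has constant sign:
%   over (-\infty,-1) and (1,\infty): the whole vertical strip (sign +);
%   over x = -1 and over x = 1: the cells y < 0, y = 0, y > 0;
%   over (-1,1): y < -\sqrt{1-x^2},  y = -\sqrt{1-x^2},
%                -\sqrt{1-x^2} < y < \sqrt{1-x^2},  y = \sqrt{1-x^2},  y > \sqrt{1-x^2}.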

Real Closed Field
In mathematics, a real closed field is a field ''F'' that has the same first-order properties as the field of real numbers. Some examples are the field of real numbers, the field of real algebraic numbers, and the field of hyperreal numbers.

Definitions
A real closed field is a field ''F'' in which any of the following equivalent conditions is true:
# ''F'' is elementarily equivalent to the real numbers. In other words, it has the same first-order properties as the reals: any sentence in the first-order language of fields is true in ''F'' if and only if it is true in the reals.
# There is a total order on ''F'' making it an ordered field such that, in this ordering, every positive element of ''F'' has a square root in ''F'' and any polynomial of odd degree with coefficients in ''F'' has at least one root in ''F''.
# ''F'' is a formally real field such that every polynomial of odd degree with coefficients in ''F'' has at least one root in ''F'', and for every element ''a'' of ''F'' ...
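A quick non-example may clarify condition 2: the field of rational numbers admits a total order but fails both requirements, so it is not real closed.

\[
  2 \in \mathbb{Q} \text{ is positive, yet } \sqrt{2} \notin \mathbb{Q};
  \qquad
  x^{3} - 2 \text{ has odd degree, yet no root in } \mathbb{Q}.
\]

Its real closure, the field of real algebraic numbers, is real closed, matching the examples listed above.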




Quantifier Elimination
Quantifier elimination is a concept of simplification used in mathematical logic, model theory, and theoretical computer science. Informally, a quantified statement "\exists x such that \ldots" can be viewed as a question "When is there an x such that \ldots?", and the statement without quantifiers can be viewed as the answer to that question. One way of classifying formulas is by the amount of quantification. Formulas with less depth of quantifier alternation are thought of as being simpler, with the quantifier-free formulas as the simplest. A theory has quantifier elimination if for every formula \alpha, there exists another formula \alpha_{\mathrm{QF}} without quantifiers that is equivalent to it (modulo this theory).

Examples
An example from high school mathematics says that a single-variable quadratic polynomial has a real root if and only if its discriminant is non-negative:

:: \exists x\in\mathbb{R}.\ (a\neq 0 \wedge ax^2+bx+c=0)\ \ \Longleftrightarrow\ \ a\neq 0 \wedge b^2-4ac\geq 0

He ...
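The displayed equivalence can be obtained by completing the square (assuming a ≠ 0), which is a one-step quantifier elimination by hand:

\[
  ax^{2} + bx + c \;=\; a\Bigl(x + \frac{b}{2a}\Bigr)^{2} - \frac{b^{2} - 4ac}{4a},
\]

so a real root exists exactly when \bigl(x + \tfrac{b}{2a}\bigr)^{2} can take the value \tfrac{b^{2}-4ac}{4a^{2}}, i.e. exactly when b^{2} - 4ac \ge 0; the quantifier over x has been eliminated.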


Computation Tree Logic
Computation tree logic (CTL) is a branching-time logic, meaning that its model of time is a tree-like structure in which the future is not determined; there are different paths in the future, any one of which might be an actual path that is realized. It is used in formal verification of software or hardware artifacts, typically by software applications known as model checkers, which determine if a given artifact possesses safety or liveness properties. For example, CTL can specify that when some initial condition is satisfied (e.g., all program variables are positive or no cars on a highway straddle two lanes), then all possible executions of a program avoid some undesirable condition (e.g., dividing a number by zero or two cars colliding on a highway). In this example, the safety property could be verified by a model checker that explores all possible transitions out of program states satisfying the initial condition and ensures that all such executions satisfy the property. Compu ...
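For instance, the highway example above might be formalized as follows (one possible rendering; the atomic propositions ''init'' and ''collision'' are our own names for the conditions described):

\[
  \mathrm{AG}\,\bigl(\mathit{init} \;\rightarrow\; \mathrm{AG}\,\lnot\,\mathit{collision}\bigr)
\]

read as: at every reachable state, if the initial condition holds there, then every state on every path from that point onward avoids a collision.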


Worst-case Complexity
In computer science (specifically computational complexity theory), the worst-case complexity measures the resources (e.g. running time, memory) that an algorithm requires given an input of arbitrary size (commonly denoted as ''n'' in asymptotic notation). It gives an upper bound on the resources required by the algorithm. In the case of running time, the worst-case time complexity indicates the longest running time performed by an algorithm given ''any'' input of size ''n'', and thus guarantees that the algorithm will finish in the indicated period of time. The order of growth (e.g. linear, logarithmic) of the worst-case complexity is commonly used to compare the efficiency of two algorithms. The worst-case complexity of an algorithm should be contrasted with its average-case complexity, which is an average measure of the amount of resources the algorithm uses on a random input.

Definition
Given a model of computation and an algorithm \mathsf{A} that halts on each input s, the mapping t_ ...
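The truncated definition is standard; in the usual notation, and writing W_{\mathsf{A}} for the worst-case bound (that symbol is ours), it continues along these lines:

\[
  t_{\mathsf{A}}(s) \;=\; \text{the number of steps } \mathsf{A} \text{ performs on input } s,
  \qquad
  W_{\mathsf{A}}(n) \;=\; \max_{|s| = n}\, t_{\mathsf{A}}(s),
\]

so W_{\mathsf{A}}(n) is the worst-case time complexity of \mathsf{A} as a function of the input size n.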