Presburger Arithmetic
Presburger arithmetic is the first-order theory of the natural numbers with addition, named in honor of Mojżesz Presburger, who introduced it in 1929. The signature of Presburger arithmetic contains only the addition operation and equality, omitting the multiplication operation entirely. The axioms include a schema of induction. Presburger arithmetic is much weaker than Peano arithmetic, which includes both addition and multiplication operations. Unlike Peano arithmetic, Presburger arithmetic is a decidable theory. This means it is possible to algorithmically determine, for any sentence in the language of Presburger arithmetic, whether that sentence is provable from the axioms of Presburger arithmetic. The asymptotic running-time computational complexity of this algorithm is at least doubly exponential, however, as shown by Fischer and Rabin (1974).

Overview

The language of Presburger arithmetic contains constants 0 and 1 and a binary function +, interpreted as addition. In this language, the axioms ...
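As a sketch of one common presentation (the exact list varies by source), the axioms are the universal closures of the following, together with the induction schema for every first-order formula ''P''(''x''):
::\neg(0 = x + 1)
::x + 1 = y + 1 \rightarrow x = y
::x + 0 = x
::x + (y + 1) = (x + y) + 1
::\big(P(0) \wedge \forall x\,(P(x) \rightarrow P(x+1))\big) \rightarrow \forall y\, P(y)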


First-order Predicate Calculus
First-order logic—also known as predicate logic, quantificational logic, and first-order predicate calculus—is a collection of formal systems used in mathematics, philosophy, linguistics, and computer science. First-order logic uses quantified variables over non-logical objects, and allows the use of sentences that contain variables, so that rather than propositions such as "Socrates is a man", one can have expressions in the form "there exists x such that x is Socrates and x is a man", where "there exists" is a quantifier, while ''x'' is a variable. This distinguishes it from propositional logic, which does not use quantifiers or relations; in this sense, propositional logic is the foundation of first-order logic. A theory about a topic is usually a first-order logic together with a specified domain of discourse (over which the quantified variables range), finitely many functions from that domain to itself, finitely many predicates defined on that domain, and a set of axi ...
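As a minimal illustration of the example above (the predicate names are chosen here only for readability), the quoted sentence can be written symbolically as
::\exists x\,(\mathrm{Socrates}(x) \wedge \mathrm{Man}(x))
where \exists is the existential quantifier and ''x'' ranges over the domain of discourse.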


Interpretation (logic)
An interpretation is an assignment of meaning to the symbols of a formal language. Many formal languages used in mathematics, logic, and theoretical computer science are defined in solely syntactic terms, and as such do not have any meaning until they are given some interpretation. The general study of interpretations of formal languages is called formal semantics. The most commonly studied formal logics are propositional logic, predicate logic and their modal analogs, and for these there are standard ways of presenting an interpretation. In these contexts an interpretation is a function that provides the extension of symbols and strings of symbols of an object language. For example, an interpretation function could take the predicate ''T'' (for "tall") and assign it the extension {a} (for "Abraham Lincoln"). Note that all our interpretation does is assign that extension to the non-logical constant ''T'', and does not make a claim about whether ''T'' is to stand for tall and 'a' f ...
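Spelled out as a sketch, using the symbols from the example, the interpretation function ''I'' simply assigns
::I(T) = \{a\}
and a fuller interpretation would in addition fix a domain of discourse and a denotation for the constant ''a''.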


Alternating Turing Machine
In computational complexity theory, an alternating Turing machine (ATM) is a non-deterministic Turing machine (NTM) with a rule for accepting computations that generalizes the rules used in the definition of the complexity classes NP and co-NP. The concept of an ATM was set forth by Chandra and Stockmeyer and independently by Kozen in 1976, with a joint journal publication in 1981.

Definitions

Informal description

The definition of NP uses the ''existential mode'' of computation: if ''any'' choice leads to an accepting state, then the whole computation accepts. The definition of co-NP uses the ''universal mode'' of computation: only if ''all'' choices lead to an accepting state does the whole computation accept. An alternating Turing machine (or to be more precise, the definition of acceptance for such a machine) alternates between these modes. An alternating Turing machine is a non-deterministic Turing machine whose states are divided into two sets: existential states ...
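As an illustrative sketch only (not the formal machine model), the acceptance rule can be modelled in Python by evaluating a finite tree of configurations, OR-ing the successors of existential configurations and AND-ing the successors of universal ones; the tuple encoding used here is hypothetical and chosen just for this example:

 def accepts(config):
     # config is a hypothetical encoding: (mode, successors), where mode is
     # 'accept', 'reject', 'exists' or 'forall'.
     mode, successors = config
     if mode == 'accept':
         return True
     if mode == 'reject':
         return False
     if mode == 'exists':
         # existential configuration: accept if ANY successor accepts
         return any(accepts(s) for s in successors)
     # universal configuration: accept only if ALL successors accept
     return all(accepts(s) for s in successors)

 # An existential choice where one branch leads to a universal configuration
 # whose successors all accept, so the machine accepts overall.
 tree = ('exists', [('reject', []),
                    ('forall', [('accept', []), ('accept', [])])])
 print(accepts(tree))  # True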


Double Exponential Function
A double exponential function is a constant raised to the power of an exponential function. The general formula is f(x) = a^{b^x} (where ''a'' > 1 and ''b'' > 1), which grows much more quickly than an exponential function. For example, if ''a'' = ''b'' = 10:
*''f''(x) = 10^{10^x}
*''f''(0) = 10
*''f''(1) = 10^{10}
*''f''(2) = 10^{100} = googol
*''f''(3) = 10^{1000}
*''f''(100) = 10^{10^{100}} = googolplex.
Factorials grow faster than exponential functions, but much more slowly than doubly exponential functions. However, tetration and the Ackermann function grow faster. See Big O notation for a comparison of the rate of growth of various functions. The inverse of the double exponential function is the double logarithm log(log(''x'')).

Doubly exponential sequences

A sequence of positive integers (or real numbers) is said to have ''doubly exponential rate of growth'' if the function giving the ''n''th term of the sequence is bounded above and below by doubly exponential functions of ''n''. Examples include
* The ...
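The values above can be reproduced with a minimal sketch in Python (arbitrary-precision integers make the exact results easy to check):

 def double_exp(x, a=10, b=10):
     # f(x) = a**(b**x), a double exponential function
     return a ** (b ** x)

 print(double_exp(2))             # 10**100, a googol
 print(len(str(double_exp(3))))   # 1001, i.e. double_exp(3) is 10**1000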




Computation
Computation is any type of arithmetic or non-arithmetic calculation that follows a well-defined model (e.g., an algorithm). Mechanical or electronic devices (or, historically, people) that perform computations are known as ''computers''. An especially well-known discipline of the study of computation is computer science.

Physical process of computation

Computation can be seen as a purely physical process occurring inside a closed physical system called a computer. Examples of such physical systems are digital computers, mechanical computers, quantum computers, DNA computers, molecular computers, microfluidics-based computers, analog computers, and wetware computers. This point of view has been adopted by the physics of computation, a branch of theoretical physics, as well as the field of natural computing. An even more radical point of view, pancomputationalism, is the postulate of digital physics that argues that the evolution of the universe is itself ...


Computational Complexity Theory
In theoretical computer science and mathematics, computational complexity theory focuses on classifying computational problems according to their resource usage, and relating these classes to each other. A computational problem is a task solved by a computer. A computational problem is solvable by mechanical application of mathematical steps, such as an algorithm. A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition by introducing mathematical models of computation to study these problems and quantifying their computational complexity, i.e., the amount of resources needed to solve them, such as time and storage. Other measures of complexity are also used, such as the amount of communication (used in communication complexity), the number of gates in a circuit (used in circuit complexity) and the number of processors (used in parallel computing). One of the roles of computationa ...


Gentzen's Consistency Proof
Gentzen's consistency proof is a result of proof theory in mathematical logic, published by Gerhard Gentzen in 1936. It shows that the Peano axioms of first-order arithmetic do not contain a contradiction (i.e. are "consistent"), as long as a certain other system used in the proof does not contain any contradictions either. This other system, today called "primitive recursive arithmetic with the additional principle of quantifier-free transfinite induction up to the ordinal ε0", is neither weaker nor stronger than the system of Peano axioms. Gentzen argued that it avoids the questionable modes of inference contained in Peano arithmetic and that its consistency is therefore less controversial.

Gentzen's theorem

Gentzen's theorem is concerned with first-order arithmetic: the theory of the natural numbers, including their addition and multiplication, axiomatized by the first-order Peano axioms. This is a "first-order" theory: the quantifiers extend over natural numbers, but not ...
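Here the ordinal ε0 is the least ordinal \varepsilon satisfying \omega^\varepsilon = \varepsilon, i.e. the limit
::\varepsilon_0 = \sup\,\{\omega,\ \omega^\omega,\ \omega^{\omega^\omega},\ \ldots\}
and the principle Gentzen adds is quantifier-free transfinite induction along well-orderings of this order type.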


Entscheidungsproblem
In mathematics and computer science, the ''Entscheidungsproblem'' (German for "decision problem") is a challenge posed by David Hilbert and Wilhelm Ackermann in 1928. The problem asks for an algorithm that considers, as input, a statement and answers "Yes" or "No" according to whether the statement is ''universally valid'', i.e., valid in every structure satisfying the axioms.

Completeness theorem

By the completeness theorem of first-order logic, a statement is universally valid if and only if it can be deduced from the axioms, so the ''Entscheidungsproblem'' can also be viewed as asking for an algorithm to decide whether a given statement is provable from the axioms using the rules of logic. In 1936, Alonzo Church and Alan Turing published independent papers showing that a general solution to the ''Entscheidungsproblem'' is impossible, assuming that the intuitive notion of "effectively calculable" is captured by the functions computable by a Turing machine (or equivalently, by those expressible in the lambda calculus). This assumption is now known as the Church–Turing t ...
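In symbols, the completeness theorem invoked here says that semantic validity and syntactic provability coincide for a first-order sentence \varphi:
::\models \varphi\ \ \Longleftrightarrow\ \ \vdash \varphi
so deciding validity and deciding provability amount to the same problem.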




Quantifier Elimination
Quantifier elimination is a concept of simplification used in mathematical logic, model theory, and theoretical computer science. Informally, a quantified statement "\exists x such that \ldots" can be viewed as a question "When is there an x such that \ldots?", and the statement without quantifiers can be viewed as the answer to that question. One way of classifying formulas is by the amount of quantification. Formulas with less depth of quantifier alternation are thought of as being simpler, with the quantifier-free formulas as the simplest. A theory has quantifier elimination if for every formula \alpha, there exists another formula \alpha_{QF} without quantifiers that is equivalent to it (modulo this theory).

Examples

An example from high school mathematics says that a single-variable quadratic polynomial has a real root if and only if its discriminant is non-negative:
:: \exists x\in\mathbb{R}.\ (a\neq 0 \wedge ax^2+bx+c=0)\ \ \Longleftrightarrow\ \ a\neq 0 \wedge b^2-4ac\geq 0
He ...
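Another standard illustration, stated here as a sketch for the theory of dense linear orders without endpoints (e.g. the rationals with <), eliminates a quantifier over a variable squeezed between two bounds:
::\exists x\,(a < x \wedge x < b)\ \ \Longleftrightarrow\ \ a < b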


Algorithm
In mathematics and computer science, an algorithm is a finite sequence of rigorous instructions, typically used to solve a class of specific computational problems or to perform a computation. Algorithms are used as specifications for performing calculations and data processing. More advanced algorithms can perform automated deductions (referred to as automated reasoning) and use mathematical and logical tests to divert the code execution through various routes (referred to as automated decision-making). Using human characteristics as descriptors of machines in metaphorical ways was already practiced by Alan Turing with terms such as "memory", "search" and "stimulus". In contrast, a heuristic is an approach to problem solving that may not be fully specified or may not guarantee correct or optimal results, especially in problem domains where there is no well-defined correct or optimal result. As an effective method, an algorithm ca ...
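As a minimal concrete sketch of an algorithm in this sense, Euclid's procedure for the greatest common divisor takes a finite, well-defined sequence of steps and is guaranteed to terminate (Python is used here only for illustration):

 def gcd(m, n):
     # Euclid's algorithm: repeatedly replace (m, n) by (n, m mod n)
     # until the remainder is zero; the last nonzero value is the gcd.
     while n != 0:
         m, n = n, m % n
     return m

 print(gcd(252, 105))  # 21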


Completeness (logic)
In mathematical logic and metalogic, a formal system is called complete with respect to a particular property if every formula having the property can be derived using that system, i.e. is one of its theorems; otherwise the system is said to be incomplete. The term "complete" is also used without qualification, with differing meanings depending on the context, mostly referring to the property of semantical validity. Intuitively, a system is called complete in this particular sense if it can derive every formula that is true.

Other properties related to completeness

The property converse to completeness is called soundness: a system is sound with respect to a property (mostly semantical validity) if each of its theorems has that property.

Forms of completeness

Expressive completeness

A formal language is expressively complete if it can express the subject matter for which it is intended.

Functional completeness

A set of logical connectives associated with a formal system ...
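For the case of semantical validity, the two converse properties above can be stated compactly for a formula \varphi: soundness is
::\vdash \varphi\ \ \Longrightarrow\ \ \models \varphi
while completeness is the converse direction,
::\models \varphi\ \ \Longrightarrow\ \ \vdash \varphi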