LH (complexity)
In computational complexity theory, the logarithmic time hierarchy (LH) is the complexity class of all computational problems solvable in a logarithmic amount of computation time on an alternating Turing machine with a bounded number of alternations. It is a particular case of a bounded alternating Turing machine hierarchy. It is equal to FO and to FO-uniform AC0. The ith level of the logarithmic time hierarchy is the set of languages recognised by random-access alternating Turing machines in logarithmic time with i − 1 alternations, beginning with an existential state. LH is the union of all levels.
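A compact way to state this definition, using \Sigma_i^{LH} as conventional shorthand for the ith level (the symbol is assumed here for illustration; it does not appear in the text above):

    \Sigma_i^{LH} = \{ L : L \text{ is decided by a random-access alternating Turing machine in } O(\log n) \text{ time with at most } i-1 \text{ alternations, beginning existentially} \}, \qquad \mathrm{LH} = \bigcup_{i \ge 1} \Sigma_i^{LH}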



Computational Complexity Theory
In theoretical computer science and mathematics, computational complexity theory focuses on classifying computational problems according to their resource usage, and explores the relationships between these classifications. A computational problem is a task solved by a computer. A computational problem is solvable by mechanical application of mathematical steps, such as an algorithm. A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition by introducing mathematical models of computation to study these problems and quantifying their computational complexity, i.e., the amount of resources needed to solve them, such as time and storage. Other measures of complexity are also used, such as the amount of communication (used in communication complexity), the number of gates in a circuit (used in circuit complexity) and the number of processors (used in parallel computing). ...



Complexity Class
In computational complexity theory, a complexity class is a set of computational problems "of related resource-based complexity". The two most commonly analyzed resources are time and memory. In general, a complexity class is defined in terms of a type of computational problem, a model of computation, and a bounded resource like time or memory. In particular, most complexity classes consist of decision problems that are solvable with a Turing machine, and are differentiated by their time or space (memory) requirements. For instance, the class P is the set of decision problems solvable by a deterministic Turing machine in polynomial time. There are, however, many complexity classes defined in terms of other types of problems (e.g. counting problems and function problems) and using other models of computation (e.g. probabilistic ...
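For instance, the class P mentioned above can equivalently be written as a union of deterministic time-bounded classes (a standard identity, stated here for concreteness rather than taken from the excerpt):

    \mathrm{P} = \bigcup_{k \ge 1} \mathrm{DTIME}(n^k)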


Computational Problem
In theoretical computer science, a computational problem is one that asks for a solution in terms of an algorithm. For example, the problem of factoring, "Given a positive integer n, find a nontrivial prime factor of n.", is a computational problem that has a solution, as there are many known integer factorization algorithms. A computational problem can be viewed as a set of instances or cases together with a, possibly empty, set of solutions for every instance/case. The question then is whether there exists an algorithm that maps instances to solutions. For example, in the factoring problem, the instances are the integers n, and solutions are prime numbers p that are the nontrivial prime factors of n. An example of a computational problem without a solution is the Halting problem. Computational problems are one of the main objects of study in theoretical computer science. One is often interested not only in the mere existence of an algorithm, but ...
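As a concrete illustration of the instance-to-solution mapping in the factoring example, here is a minimal Python sketch using plain trial division (the function name and method are illustrative choices, not part of the text above):

    # Map an instance (a positive integer n) to a solution:
    # a nontrivial prime factor of n, or None if no such factor exists.
    def nontrivial_prime_factor(n: int):
        d = 2
        while d * d <= n:
            if n % d == 0:
                return d        # the smallest divisor greater than 1 is prime
            d += 1
        return None             # n is prime (or smaller than 4), so the solution set is empty

    print(nontrivial_prime_factor(91))   # 7, since 91 = 7 * 13
    print(nontrivial_prime_factor(13))   # None: 13 is prime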


Logarithmic Growth
In mathematics, logarithmic growth describes a phenomenon whose size or cost can be described as a logarithmic function of some input, e.g. y = C log(x). Any logarithm base can be used, since one can be converted to another by multiplying by a fixed constant. Logarithmic growth is the inverse of exponential growth and is very slow. A familiar example of logarithmic growth is the number of digits of a number N in positional notation, which grows as log_b(N), where b is the base of the number system used, e.g. 10 for decimal arithmetic. In more advanced mathematics, the partial sums of the harmonic series 1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \frac{1}{5} + \cdots grow logarithmically. In the design of computer algorithms, logarithmic growth, and related variants such as log-linear, or linearithmic, growth are very desirable indications of efficiency, and occur in the time complexity analysis of algorithms such as binary search. Logarithmic growth can lead to apparent paradoxes ...
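Two standard identities make the claims above precise: the change-of-base rule behind the "fixed constant" remark, and the logarithmic growth of the harmonic partial sums (neither formula appears verbatim in the excerpt):

    \log_a x = \frac{\log_b x}{\log_b a}, \qquad \sum_{k=1}^{n} \frac{1}{k} = \ln n + \gamma + o(1),

where \gamma \approx 0.5772 is the Euler–Mascheroni constant.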


Computation Time
In theoretical computer science, the time complexity is the computational complexity that describes the amount of computer time it takes to run an algorithm. Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, supposing that each elementary operation takes a fixed amount of time to perform. Thus, the amount of time taken and the number of elementary operations performed by the algorithm are taken to be related by a constant factor. Since an algorithm's running time may vary among different inputs of the same size, one commonly considers the worst-case time complexity, which is the maximum amount of time required for inputs of a given size. Less common, and usually specified explicitly, is the average-case complexity, which is the average of the time taken on inputs of a given size (this makes sense because there are only a finite number of possible inputs of a given size). In both cases, the time complexity is genera ...
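Written out, with t(x) denoting the running time on an input x of size n (this notation is assumed here for illustration; it does not appear in the excerpt):

    T_{\mathrm{worst}}(n) = \max_{|x| = n} t(x), \qquad T_{\mathrm{avg}}(n) = \frac{1}{\left|\{x : |x| = n\}\right|} \sum_{|x| = n} t(x)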




Alternating Turing Machine
In computational complexity theory, an alternating Turing machine (ATM) is a non-deterministic Turing machine (NTM) with a rule for accepting computations that generalizes the rules used in the definition of the complexity classes NP and co-NP. The concept of an ATM was set forth by Chandra and Stockmeyer and independently by Kozen in 1976, with a joint journal publication in 1981. Definitions: Informal description: The definition of NP uses the existential mode of computation: if any choice leads to an accepting state, then the whole computation accepts. The definition of co-NP uses the universal mode of computation: only if all choices lead to an accepting state does the whole computation accept. An alternating Turing machine (or to be more precise, the definition of acceptance for such a machine) alternates between these modes. An alternating Turing machine is a non-deterministic Turing machine whose states are divided into two sets: existential state ...
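The existential and universal acceptance rules can be illustrated with a small Python sketch that evaluates a computation tree recursively (the tree encoding and function name are hypothetical, chosen only to show the two modes; this is not the formal ATM definition):

    # A configuration is a tuple: ("accept",), ("reject",), or
    # (mode, [child configurations]) with mode "exists" or "forall".
    def accepts(node):
        kind = node[0]
        if kind == "accept":
            return True
        if kind == "reject":
            return False
        children = node[1]
        if kind == "exists":    # existential mode: some choice must accept
            return any(accepts(c) for c in children)
        if kind == "forall":    # universal mode: every choice must accept
            return all(accepts(c) for c in children)
        raise ValueError("unknown node kind: " + kind)

    # One alternation, beginning with an existential state:
    tree = ("exists", [("forall", [("accept",), ("reject",)]),
                       ("forall", [("accept",), ("accept",)])])
    print(accepts(tree))        # True: the second existential choice accepts under every universal choice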


FO (complexity)
Descriptive complexity is a branch of computational complexity theory and of finite model theory that characterizes complexity classes by the type of logic needed to express the languages in them. For example, PH, the union of all complexity classes in the polynomial hierarchy, is precisely the class of languages expressible by statements of second-order logic. This connection between complexity and the logic of finite structures allows results to be transferred easily from one area to the other, facilitating new proof methods and providing additional evidence that the main complexity classes are somehow "natural" and not tied to the specific abstract machines used to define them. Specifically, each logical system produces a set of queries expressible in it. The queries – when restricted to finite structures – correspond to the computational problems of traditional complexity theory. The first main result of descriptive complexity was Fagin's theorem, shown by Ronald Fagin i ...
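The two correspondences mentioned in this excerpt, the second-order characterization of PH and Fagin's theorem, are usually written as:

    \mathrm{NP} = \exists\mathrm{SO} \quad \text{(Fagin's theorem)}, \qquad \mathrm{PH} = \mathrm{SO}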



Neil Immerman
Neil Immerman (born 24 November 1953, Manhasset, New York) is an American theoretical computer scientist, a professor of computer science at the University of Massachusetts Amherst (Faculty directory: Neil Immerman, Computer Science Department, University of Massachusetts Amherst, retrieved 2010-01-23).
He is one of the key developers of descriptive complexity, an approach he is currently applying to research in model checking, database theory, and computational complexity theory. Professor Immerman is an editor of the SIAM Journal on Computing and of Logical Methods in Computer Science. He received B.S. and M.S. degrees from Yale University in 1974 and his Ph.D. from Cornell University in 1980 under the supervision of Juris Hartmanis, a Turing Award winner at Cornell.


Random-access Turing Machine
In computational complexity, a field of theoretical computer science, random-access Turing machines (RATMs) extend the functionality of conventional Turing machines by introducing the capability of random access to memory positions. The inherent ability of RATMs to access any memory cell in a constant amount of time significantly decreases the computation time required for problems where data size and access speed are critical factors. As conventional Turing machines can only access data sequentially, the capabilities of RATMs align more closely with the memory access patterns of modern computing systems and provide a more realistic framework for analyzing algorithms that handle the complexities of large-scale data. Definition: The random-access Turing machine is characterized chiefly by its capacity for direct memory access: on a random-access Turing machine, there is a special pointer tape of logarithmic space accepting a binary vocabulary. The Turing machine has a special state such ...
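A toy Python sketch of the random-access idea, in which a binary pointer of logarithmic length selects the memory cell to read rather than walking the tape sequentially (a simplified illustration with assumed naming, not the formal RATM definition):

    # Simplified model: a binary "pointer" register addresses main memory directly.
    class RandomAccessMemory:
        def __init__(self, cells):
            self.cells = list(cells)   # main memory contents
            self.pointer = []          # binary address, logarithmic in len(cells)

        def set_pointer(self, address_bits):
            self.pointer = list(address_bits)

        def read(self):
            index = int("".join(map(str, self.pointer)) or "0", 2)
            return self.cells[index]   # direct access via the address, no sequential scan

    mem = RandomAccessMemory([0, 1, 1, 0, 1, 0, 0, 1])
    mem.set_pointer([1, 0, 1])         # binary 101 = index 5
    print(mem.read())                  # 0, the contents of cell 5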



Existential State
Existentialism is a family of philosophical views and inquiry that explore the human individual's struggle to lead an authentic life despite the apparent absurdity or incomprehensibility of existence. In examining meaning, purpose, and value, existentialist thought often includes concepts such as existential crises, angst, courage, and freedom. Existentialism is associated with several 19th- and 20th-century European philosophers who shared an emphasis on the human subject, despite often profound differences in thought. Among the 19th-century figures now associated with existentialism are philosophers Søren Kierkegaard and Friedrich Nietzsche, as well as novelist Fyodor Dostoevsky, all of whom critiqued rationalism and concerned themselves with the problem of meaning. The word "existentialism", however, was not coined until the mid 20th century, during which it became most associated with contemporaneous philosophers Jean-Paul Sartre, Martin Heidegger, Simone de Beauvoir, ...