Counting Complexity Class
In computational complexity theory and computability theory, a counting problem is a type of computational problem. If ''R'' is a search problem then

:c_R(x) = \vert\{\, y \mid R(x,y) \,\}\vert

is the corresponding counting function and

:\#R = \{\, (x,y) \mid y \leq c_R(x) \,\}

denotes the corresponding decision problem. Note that ''cR'' is a search problem while #''R'' is a decision problem; however, ''cR'' can be ''C'' Cook-reduced to #''R'' (for appropriate ''C'') using a binary search (the reason #''R'' is defined the way it is, rather than being the graph of ''cR'', is to make this binary search possible).

Counting complexity class

If ''NX'' is a complexity class associated with non-deterministic machines, then \#X = \{\, \#R \mid R \in NX \,\} is the set of counting problems associated with each search problem in ''NX''. In particular, #P is the class of counting problems associated with NP search problems. Just as NP has NP-complete problems via many-one reductions, #P has complete problems via parsimonious reductions, problem transformations that preserve the number of solutions.
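The binary-search Cook reduction of ''cR'' to #''R'' described above can be sketched in a few lines. The following is a minimal Python illustration, not part of the original article: the names count_via_sharp_oracle, sharp_R, and upper_bound are invented for this sketch, the oracle is assumed to answer exactly the #''R'' question "is y ≤ c_R(x)?", and the divisor-counting relation at the end is a toy stand-in.

```python
def count_via_sharp_oracle(x, sharp_R, upper_bound):
    """Compute c_R(x) by binary search over queries to a #R oracle.

    sharp_R(x, y) is assumed to decide membership of (x, y) in #R,
    i.e. it returns True exactly when y <= c_R(x).  upper_bound is any
    known bound with c_R(x) <= upper_bound, so only O(log upper_bound)
    oracle queries are needed.
    """
    lo, hi = 0, upper_bound
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if sharp_R(x, mid):          # mid <= c_R(x): the count is at least mid
            lo = mid
        else:                        # mid > c_R(x): the count is below mid
            hi = mid - 1
    return lo


# Toy usage: R(x, y) holds when y is a divisor of x, so c_R(x) counts divisors.
c_R = lambda x: sum(1 for d in range(1, x + 1) if x % d == 0)
oracle = lambda x, y: y <= c_R(x)                 # stands in for a #R oracle
print(count_via_sharp_oracle(12, oracle, 12))     # -> 6
```

Defining #''R'' as the set of pairs (x, y) with y ≤ c_R(x), rather than as the graph of ''cR'', is exactly what makes each oracle answer monotone in y and hence what makes this binary search work.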


Computational Complexity Theory
In theoretical computer science and mathematics, computational complexity theory focuses on classifying computational problems according to their resource usage, and relating these classes to each other. A computational problem is a task solved by a computer. A computation problem is solvable by mechanical application of mathematical steps, such as an algorithm. A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition, by introducing mathematical models of computation to study these problems and quantifying their computational complexity, i.e., the amount of resources needed to solve them, such as time and storage. Other measures of complexity are also used, such as the amount of communication (used in communication complexity), the number of gates in a circuit (used in circuit complexity) and the number of processors (used in parallel computing). One of the roles of computationa ...


Computability Theory
Computability theory, also known as recursion theory, is a branch of mathematical logic, computer science, and the theory of computation that originated in the 1930s with the study of computable functions and Turing degrees. The field has since expanded to include the study of generalized computability and definability. In these areas, computability theory overlaps with proof theory and effective descriptive set theory. Basic questions addressed by computability theory include:
* What does it mean for a function on the natural numbers to be computable?
* How can noncomputable functions be classified into a hierarchy based on their level of noncomputability?
Although there is considerable overlap in terms of knowledge and methods, mathematical computability theorists study the theory of relative computability, reducibility notions, and degree structures; those in the computer science field focus on the theory of subrecursive hierarchies, formal methods, and formal languages. I ...


Computational Problem
In theoretical computer science, a computational problem is a problem that may be solved by an algorithm. For example, the problem of factoring
:"Given a positive integer ''n'', find a nontrivial prime factor of ''n''."
is a computational problem. A computational problem can be viewed as a set of ''instances'' or ''cases'' together with a, possibly empty, set of ''solutions'' for every instance/case. For example, in the factoring problem, the instances are the integers ''n'', and solutions are prime numbers ''p'' that are the nontrivial prime factors of ''n''. Computational problems are one of the main objects of study in theoretical computer science. The field of computational complexity theory attempts to determine the amount of resources (computational complexity) solving a given problem will require and explain why some problems are intractable or undecidable. Computational problems belong to complexity classes that define broadly the resources (e.g. time, space/memory, e ...
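As a concrete illustration of the instance/solution view of the factoring problem above, here is a minimal Python sketch, not from the source: the names is_solution and solutions are invented for this example, and the trial-division checks are only meant for small inputs.

```python
def is_solution(n, p):
    """Check whether p is a nontrivial prime factor of the instance n."""
    if p <= 1 or p >= n or n % p != 0:
        return False
    # primality of p by trial division (fine for small toy instances)
    return all(p % d != 0 for d in range(2, int(p ** 0.5) + 1))


def solutions(n):
    """Enumerate the (possibly empty) set of solutions for instance n."""
    return [p for p in range(2, n) if is_solution(n, p)]


print(solutions(12))   # -> [2, 3]
print(solutions(7))    # -> []  (a prime instance has an empty solution set)
```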


Search Problem
In computational complexity theory and computability theory, a search problem is a type of computational problem represented by a binary relation. If ''R'' is a binary relation such that field(''R'') ⊆ Γ+ and ''T'' is a Turing machine, then ''T'' calculates ''R'' if:
* If ''x'' is such that there is some ''y'' such that ''R''(''x'', ''y''), then ''T'' accepts ''x'' with output ''z'' such that ''R''(''x'', ''z'') (there may be multiple ''y'', and ''T'' need only find one of them)
* If ''x'' is such that there is no ''y'' such that ''R''(''x'', ''y''), then ''T'' rejects ''x''
Intuitively, the problem consists in finding structure "y" in object "x". An algorithm is said to solve the problem if at least one corresponding structure exists, and then one occurrence of this structure is made output; otherwise, the algorithm stops with an appropriate output ("Item not found" or any message of the like). Such problems occur very frequently in graph theory, for example, where searching g ...
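Over a finite candidate space, a search problem given as a relation can be solved by the generic procedure below. This is a minimal Python sketch, not from the source; solve_search, R, and candidates are invented names, and the divisor relation at the end is only a toy example.

```python
def solve_search(x, R, candidates):
    """Return some z with R(x, z) if one exists, otherwise report failure.

    R is the binary relation, given as a predicate, and candidates(x)
    yields the finite space of potential structures z for instance x.
    """
    for z in candidates(x):
        if R(x, z):
            return ("accept", z)               # any single witness suffices
    return ("reject", "Item not found")


# Toy usage: search for a nontrivial divisor of x.
R = lambda x, z: 1 < z < x and x % z == 0
print(solve_search(15, R, lambda x: range(2, x)))   # -> ('accept', 3)
print(solve_search(13, R, lambda x: range(2, x)))   # -> ('reject', 'Item not found')
```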


Counting Function
Counting is the process of determining the number of elements of a finite set of objects, i.e., determining the size of a set. The traditional way of counting consists of continually increasing a (mental or spoken) counter by a unit for every element of the set, in some order, while marking (or displacing) those elements to avoid visiting the same element more than once, until no unmarked elements are left; if the counter was set to one after the first object, the value after visiting the final object gives the desired number of elements. The related term ''enumeration'' refers to uniquely identifying the elements of a finite (combinatorial) set or infinite set by assigning a number to each element. Counting sometimes involves numbers other than one; for example, when counting money, counting out change, "counting by twos" (2, 4, 6, 8, 10, 12, ...), or "counting by fives" (5, 10, 15, 20, 25, ...). There is archaeological evidence suggesting that humans have been countin ...
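The "traditional" counting procedure described above, increasing a counter by one unit per element while marking elements already visited, can be written out directly. A minimal Python sketch (count_elements is an invented name):

```python
def count_elements(objects):
    """Count a finite collection by bumping a counter once per element,
    marking visited elements so none is counted more than once."""
    counter = 0
    marked = set()
    for obj in objects:
        if obj in marked:          # already visited: do not count it again
            continue
        marked.add(obj)
        counter += 1
    return counter


print(count_elements(["a", "b", "a", "c"]))   # -> 3 distinct elements
```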




Cook Reduction
In computability theory, a Turing reduction from a decision problem A to a decision problem B is an oracle machine which decides problem A given an oracle for B (Rogers 1967, Soare 1987). It can be understood as an algorithm that could be used to solve A if it had available to it a subroutine for solving ''B''. The concept can be analogously applied to function problems. If a Turing reduction from A to B exists, then every algorithm for B can be used to produce an algorithm for A, by inserting the algorithm for B at each place where the oracle machine computing A queries the oracle for B. However, because the oracle machine may query the oracle a large number of times, the resulting algorithm may require more time asymptotically than either the algorithm for B or the oracle machine computing A. A Turing reduction in which the oracle machine runs in polynomial time is known as a Cook reduction. The first formal definition of relative computability, then called relative reducibility ...
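The oracle-substitution idea can be illustrated with a toy pair of problems. In this minimal Python sketch (not from the source), problem A asks whether a list has a majority value and problem B asks whether a given value occurs more than k times; the point is only that an algorithm for A may query a B-subroutine several times, so plugging in any algorithm for B yields an algorithm for A.

```python
def decide_A_with_oracle(x, oracle_B):
    """Decide toy problem A ("does list x have a majority value?") by making
    repeated calls to an oracle for toy problem B ("does value v occur more
    than k times in x?")."""
    threshold = len(x) // 2
    return any(oracle_B(x, v, threshold) for v in set(x))


# Any concrete algorithm for B can stand in for the oracle.
oracle_B = lambda x, v, k: x.count(v) > k

print(decide_A_with_oracle([1, 2, 1, 1], oracle_B))   # -> True
print(decide_A_with_oracle([1, 2, 3, 4], oracle_B))   # -> False
```

As the entry notes, the combined algorithm may be asymptotically slower than either piece alone, since the oracle can be queried many times.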


Binary Search
In computer science, binary search, also known as half-interval search, logarithmic search, or binary chop, is a search algorithm that finds the position of a target value within a sorted array. Binary search compares the target value to the middle element of the array. If they are not equal, the half in which the target cannot lie is eliminated and the search continues on the remaining half, again taking the middle element to compare to the target value, and repeating this until the target value is found. If the search ends with the remaining half being empty, the target is not in the array. Binary search runs in logarithmic time in the worst case, making O(\log n) comparisons, where n is the number of elements in the array. Binary search is faster than linear search except for small arrays. However, the array must be sorted first to be able to apply binary search. There are specialized data structures designed for fast searching, such as hash tables, that can be searched mor ...
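A direct Python sketch of the half-interval procedure described above (binary_search is simply the conventional name used here):

```python
def binary_search(array, target):
    """Return the index of target in the sorted array, or -1 if absent.

    Each iteration compares target to the middle element and discards the
    half that cannot contain it, so at most O(log n) comparisons are made.
    """
    lo, hi = 0, len(array) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if array[mid] == target:
            return mid
        elif array[mid] < target:
            lo = mid + 1           # target can only lie in the upper half
        else:
            hi = mid - 1           # target can only lie in the lower half
    return -1                      # remaining half is empty: not present


print(binary_search([1, 3, 4, 7, 9, 11], 7))   # -> 3
print(binary_search([1, 3, 4, 7, 9, 11], 5))   # -> -1
```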


Nondeterministic Algorithm
In computer programming, a nondeterministic algorithm is an algorithm that, even for the same input, can exhibit different behaviors on different runs, as opposed to a deterministic algorithm. There are several ways an algorithm may behave differently from run to run. A concurrent algorithm can perform differently on different runs due to a race condition. A probabilistic algorithm's behavior depends on a random number generator. An algorithm that solves a problem in nondeterministic polynomial time can run in polynomial time or exponential time depending on the choices it makes during execution. Nondeterministic algorithms are often used to find an approximation to a solution when the exact solution would be too costly to obtain using a deterministic one. The notion was introduced by Robert W. Floyd in 1967.

Use

Often in computational theory, the term "algorithm" refers to a deterministic algorithm. A nondeterministic algorithm is different from its more familiar determi ...
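As a small example of the probabilistic case mentioned above, the following Python sketch (approx_pi is an invented name) returns a different approximation on different runs with the same input, because its behavior depends on a random number generator:

```python
import random


def approx_pi(samples=100_000):
    """Monte Carlo estimate of pi: throw random points into the unit square
    and count how many land inside the quarter circle.  The exact value is
    not computed; the output varies from run to run."""
    inside = sum(
        1
        for _ in range(samples)
        if random.random() ** 2 + random.random() ** 2 <= 1.0
    )
    return 4 * inside / samples


print(approx_pi())   # e.g. 3.1424 on one run
print(approx_pi())   # e.g. 3.1380 on another run, for the same input
```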


NP (complexity)
In computational complexity theory, NP (nondeterministic polynomial time) is a complexity class used to classify decision problems. NP is the set of decision problems for which the problem instances, where the answer is "yes", have proofs verifiable in polynomial time by a deterministic Turing machine, or alternatively the set of problems that can be solved in polynomial time by a nondeterministic Turing machine. ''Polynomial time'' refers to how quickly the number of operations needed by an algorithm, relative to the size of the problem, grows. It is therefore a measure of efficiency of an algorithm. An equivalent definition of NP is the set of decision problems ''solvable'' in polynomial time by a nondeterministic Turing machine. This definition is the basis for the abbreviation NP; "nondeterministic, polynomial time". These two definitions are equivalent because the algorithm based on the Turing machine consists of two phases, the first of which consists of a guess abou ...
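The verifier-based definition can be made concrete with a standard NP problem such as SUBSET-SUM. In the minimal Python sketch below (verify_subset_sum and the instance/certificate encoding are choices made for this example), a proposed certificate is checked in polynomial time; a "yes" instance is one for which some certificate makes the verifier accept.

```python
def verify_subset_sum(instance, certificate):
    """Polynomial-time verifier for SUBSET-SUM.

    instance is (numbers, target); certificate is a proposed subset, given
    as a list of indices into numbers.  Checking one certificate is fast,
    even though finding one may require examining exponentially many."""
    numbers, target = instance
    if len(set(certificate)) != len(certificate):                # indices must be distinct
        return False
    if any(i < 0 or i >= len(numbers) for i in certificate):     # indices must be in range
        return False
    return sum(numbers[i] for i in certificate) == target


instance = ([3, 34, 4, 12, 5, 2], 9)
print(verify_subset_sum(instance, [2, 4]))   # 4 + 5 == 9  -> True
print(verify_subset_sum(instance, [0, 1]))   # 3 + 34 != 9 -> False
```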


NP-complete
In computational complexity theory, a problem is NP-complete when:
# it is a problem for which the correctness of each solution can be verified quickly (namely, in polynomial time) and a brute-force search algorithm can find a solution by trying all possible solutions.
# the problem can be used to simulate every other problem for which we can verify quickly that a solution is correct.
In this sense, NP-complete problems are the hardest of the problems to which solutions can be verified quickly. If we could find solutions of some NP-complete problem quickly, we could quickly find the solutions of every other problem to which a given solution can be easily verified. The name "NP-complete" is short for "nondeterministic polynomial-time complete". In this name, "nondeterministic" refers to nondeterministic Turing machines, a way of mathematically formalizing the idea of a brute-force search algorithm. Polynomial time refers to an amount of time that is considered "quick" for a de ...


Many-one Reduction
In computability theory and computational complexity theory, a many-one reduction (also called mapping reduction) is a reduction which converts instances of one decision problem L_1 into instances of a second decision problem L_2 where the instance reduced to is in the language L_2 if the initial instance was in its language L_1 and is not in the language L_2 if the initial instance was not in its language L_1. Thus if we can decide whether L_2 instances are in the language L_2, we can decide whether L_1 instances are in its language by applying the reduction and solving L_2. Thus, reductions can be used to measure the relative computational difficulty of two problems. It is said that L_1 reduces to L_2 if, in layman's terms, L_2 is at least as hard to solve as L_1. That is to say, any algorithm that solves L_2 can also be used as part of an (otherwise relatively simple) program that solves L_1. Many-one reductions are a special case and stronger form of Turing reductions. With many-one red ...
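The shape of a many-one reduction, transform the instance once and then ask the target problem, can be shown with a toy pair of languages. In the minimal Python sketch below (the languages L1 and L2 and all function names are invented for illustration), n is in L1 exactly when the transformed instance is in L2.

```python
def reduce_L1_to_L2(n):
    """Many-one reduction f from L1 = {n : 6 divides n}
    to L2 = {(a, b) : b divides a}."""
    return (n, 6)


def decide_L2(instance):
    """A decider for L2, playing the role of the problem reduced to."""
    a, b = instance
    return a % b == 0


def decide_L1(n):
    """Decide L1 with a single application of the reduction followed by a
    single call to the L2 decider -- the defining shape of a many-one
    (mapping) reduction, as opposed to a general Turing reduction."""
    return decide_L2(reduce_L1_to_L2(n))


print(decide_L1(18))   # -> True  (18 is divisible by 6)
print(decide_L1(10))   # -> False
```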