Polynomial Creativity
In computational complexity theory, polynomial creativity is a theory analogous to the theory of creative sets in recursion theory and mathematical logic. The k-creative sets are a family of formal languages in the complexity class NP whose complements certifiably do not have nondeterministic recognition algorithms running within a given polynomial time bound. It is generally believed that NP is unequal to co-NP (the class of complements of languages in NP), which would imply more strongly that the complements of all NP-complete languages do not have polynomial-time nondeterministic recognition algorithms. However, for the k-creative sets, the lack of a (more restricted) recognition algorithm can be proven, whereas a proof that NP ≠ co-NP remains elusive. The k-creative sets are conjectured to form counterexamples to the Berman–Hartmanis conjecture on isomorphism of NP-complete sets. It is NP-complete to test whether an input string belongs to any one of these languages, but no polynomial time isomorphisms between all such languages and other NP-complete languages are known.



Computational Complexity Theory
In theoretical computer science and mathematics, computational complexity theory focuses on classifying computational problems according to their resource usage, and relating these classes to each other. A computational problem is a task solved by a computer. A computational problem is solvable by mechanical application of mathematical steps, such as an algorithm. A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition by introducing mathematical models of computation to study these problems and quantifying their computational complexity, i.e., the amount of resources needed to solve them, such as time and storage. Other measures of complexity are also used, such as the amount of communication (used in communication complexity), the number of gates in a circuit (used in circuit complexity) and the number of processors (used in parallel computing). One of the roles of computationa ...
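To make the notion of "amount of resources" concrete, here is a minimal Python sketch (the toy problems and function names are illustrative choices, not part of the article) that counts the basic steps taken by two algorithms: a linear scan, whose step count grows linearly with the input size, and a brute-force subset-sum search, whose step count grows exponentially.

from itertools import combinations

def linear_search(items, target):
    """Inspects each item once: about n steps on an input of size n."""
    steps = 0
    for x in items:
        steps += 1
        if x == target:
            return True, steps
    return False, steps

def subset_sum(items, target):
    """Tries every subset: about 2**n steps on an input of size n."""
    steps = 0
    for r in range(len(items) + 1):
        for subset in combinations(items, r):
            steps += 1
            if sum(subset) == target:
                return True, steps
    return False, steps

print(linear_search(list(range(16)), -1))  # (False, 16)
print(subset_sum(list(range(16)), -1))     # (False, 65536)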



Function Composition
In mathematics, function composition is an operation that takes two functions f and g, and produces a function h = g ∘ f such that h(x) = g(f(x)). In this operation, the function g is applied to the result of applying the function f to x. That is, the functions f : X → Y and g : Y → Z are composed to yield a function that maps x in domain X to g(f(x)) in codomain Z. Intuitively, if z is a function of y, and y is a function of x, then z is a function of x. The resulting ''composite'' function is denoted g ∘ f : X → Z, defined by (g ∘ f)(x) = g(f(x)) for all x in X. The notation g ∘ f is read as "g of f", "g after f", "g circle f", "g round f", "g about f", "g composed with f", "g following f", "f then g", or "g on f", or "the composition of g and f". Intuitively, composing functions is a chaining process in which the output of function f feeds the input of function g. The composition of functions is a special case of the composition of relations, sometimes also denoted by ∘. As a result, all properties of composition of relations are true of composition of functions, such as the ...
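A minimal executable rendering of the definition, in Python (the helper name compose is an invented convenience):

from typing import Callable, TypeVar

X = TypeVar("X")
Y = TypeVar("Y")
Z = TypeVar("Z")

def compose(g: Callable[[Y], Z], f: Callable[[X], Y]) -> Callable[[X], Z]:
    """Return the composite g ∘ f, which maps x to g(f(x))."""
    return lambda x: g(f(x))

f = lambda x: x + 3   # y as a function of x
g = lambda y: y * y   # z as a function of y

h = compose(g, f)     # h = g ∘ f, so z is a function of x
assert h(2) == g(f(2)) == 25

Reading the arguments right to left mirrors the notation: in g ∘ f, the output of f feeds the input of g.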


Theoretical Computer Science (journal)
''Theoretical Computer Science'' (TCS) is a computer science journal published by Elsevier, started in 1975 and covering theoretical computer science. The journal publishes 52 issues a year. It is abstracted and indexed by Scopus and the Science Citation Index. According to the Journal Citation Reports, its 2020 impact factor (the yearly mean number of citations received by articles published in the journal during the two preceding years, as calculated by Clarivate) is 0.827.


SIAM Journal On Computing
The ''SIAM Journal on Computing'' is a scientific journal focusing on the mathematical and formal aspects of computer science. It is published by the Society for Industrial and Applied Mathematics (SIAM). Although its official ISO abbreviation is ''SIAM J. Comput.'', its publisher and contributors frequently use the shorter abbreviation ''SICOMP''. SICOMP typically hosts the special issues of the IEEE Annual Symposium on Foundations of Computer Science (FOCS) and the Annual ACM Symposium on Theory of Computing (STOC), where about 15% of papers published in FOCS and STOC each year are invited to these special issues. For example, Volume 48 contains 11 out of 85 papers published in FOCS 2016.

Oracle Machine
In complexity theory and computability theory, an oracle machine is an abstract machine used to study decision problems. It can be visualized as a Turing machine with a black box, called an oracle, which is able to solve certain problems in a single operation. The problem can be of any complexity class. Even undecidable problems, such as the halting problem, can be used.

Oracles

An oracle machine can be conceived as a Turing machine connected to an oracle. The oracle, in this context, is an entity capable of solving some problem, which for example may be a decision problem or a function problem. The problem does not have to be computable; the oracle is not assumed to be a Turing machine or computer program. The oracle is simply a "black box" that is able to produce a solution for any instance of a given computational problem:

* A decision problem is represented as a set ''A'' of natural numbers (or strings). An instance of the problem is an arbitrary natural number (or string ...
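The black-box character of the oracle can be sketched in code: the Python program below treats its oracle as an opaque callable and solves a function problem (finding a satisfying assignment) by making repeated decision queries. This is only a toy sketch; the clause encoding, the helper names, and the brute-force stand-in oracle are all invented for illustration, and a real oracle need not be computable at all.

from typing import Callable, Dict, List, Tuple

# A CNF formula: a list of clauses, each literal a (variable, polarity) pair.
CNF = List[List[Tuple[str, bool]]]

def brute_force_sat_oracle(formula: CNF) -> bool:
    """Stand-in for the black box. The machine using it neither knows nor
    cares how the answer is produced."""
    vars_ = sorted({v for clause in formula for v, _ in clause})
    for bits in range(2 ** len(vars_)):
        assign = {v: bool(bits >> i & 1) for i, v in enumerate(vars_)}
        if all(any(assign[v] == pol for v, pol in clause) for clause in formula):
            return True
    return False

def substitute(formula: CNF, var: str, value: bool) -> CNF:
    """Fix one variable and simplify: drop satisfied clauses, shrink others."""
    out = []
    for clause in formula:
        if any(v == var and pol == value for v, pol in clause):
            continue  # clause satisfied by this choice
        out.append([(v, pol) for v, pol in clause if v != var])
    return out

def find_assignment(formula: CNF, oracle: Callable[[CNF], bool]) -> Dict[str, bool]:
    """Solve the function problem with repeated decision queries."""
    assert oracle(formula), "formula is unsatisfiable"
    assignment: Dict[str, bool] = {}
    for v in sorted({v for clause in formula for v, _ in clause}):
        trial = substitute(formula, v, True)
        if oracle(trial):
            assignment[v], formula = True, trial
        else:
            assignment[v], formula = False, substitute(formula, v, False)
    return assignment

# (x or y) and (not x or y): decision queries recover x = y = True.
phi = [[("x", True), ("y", True)], [("x", False), ("y", True)]]
print(find_assignment(phi, brute_force_sat_oracle))  # {'x': True, 'y': True}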




Satisfiability Problem
In mathematical logic, a formula is ''satisfiable'' if it is true under some assignment of values to its variables. For example, the formula x+3=y is satisfiable because it is true when x=3 and y=6, while the formula x+1=x is not satisfiable over the integers. The dual concept to satisfiability is validity; a formula is ''valid'' if every assignment of values to its variables makes the formula true. For example, x+3=3+x is valid over the integers, but x+3=y is not. Formally, satisfiability is studied with respect to a fixed logic defining the syntax of allowed symbols, such as first-order logic, second-order logic or propositional logic. Rather than being syntactic, however, satisfiability is a semantic property because it relates to the ''meaning'' of the symbols, for example, the meaning of + in a formula such as x+1=x. Formally, we define an interpretation (or model) to be an assignment of values to the variables and an assignment of meaning to all other non-logical symbols, a ...
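For propositional logic, both definitions can be checked directly by enumerating all assignments, as in the following Python sketch (encoding a formula as a Python callable is an invented device for illustration):

from itertools import product

def satisfiable(formula, variables):
    """True if SOME assignment of values makes the formula true."""
    return any(formula(dict(zip(variables, vals)))
               for vals in product([False, True], repeat=len(variables)))

def valid(formula, variables):
    """True if EVERY assignment of values makes the formula true."""
    return all(formula(dict(zip(variables, vals)))
               for vals in product([False, True], repeat=len(variables)))

vs = ["x", "y"]
f = lambda a: a["x"] or a["y"]        # satisfiable but not valid
g = lambda a: a["x"] or not a["x"]    # valid
h = lambda a: a["x"] and not a["x"]   # unsatisfiable

print(satisfiable(f, vs), valid(f, vs))   # True False
print(valid(g, vs), satisfiable(h, vs))   # True False
# Duality: a formula is valid iff its negation is not satisfiable.
assert valid(f, vs) == (not satisfiable(lambda a: not f(a), vs))

The final assertion checks the duality mentioned above: a formula is valid exactly when its negation is unsatisfiable.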


Alan Selman
Alan Louis Selman (April 2, 1941 – January 22, 2021) was a mathematician and theoretical computer scientist known for his research on structural complexity theory, the study of computational complexity in terms of the relation between complexity classes rather than individual algorithmic problems.

Education and career

Selman was a graduate of the City College of New York. He earned a master's degree at the University of California, Berkeley before completing his Ph.D. in 1970 at Pennsylvania State University. His dissertation, ''Arithmetical Reducibilities and Sets of Formulas Valid in Finite Structures'', was supervised by Paul Axt, a student of Stephen Cole Kleene. He became a postdoctoral researcher at Carnegie Mellon University, and an assistant professor of mathematics at Florida State University, before moving to the computer science department of Iowa State University, eventually becoming a full professor there. In the late 1980s he moved to Northeastern Universit ...


One-way Permutation
In computer science, a one-way function is a function that is easy to compute on every input, but hard to invert given the image of a random input. Here, "easy" and "hard" are to be understood in the sense of computational complexity theory, specifically the theory of polynomial time problems. Not being one-to-one is not considered sufficient for a function to be called one-way (see Theoretical definition, below). The existence of such one-way functions is still an open conjecture. Their existence would prove that the complexity classes P and NP are not equal, thus resolving the foremost unsolved question of theoretical computer science (Oded Goldreich (2001). ''Foundations of Cryptography: Volume 1, Basic Tools''. Cambridge University Press; a draft is available from the author's site). The converse is not known to be true, i.e. the existence of a proof that P ≠ NP would not directly imply the existence of one-way functions. In applied contexts, the terms "easy" and "hard" are usu ...
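The asymmetry can be illustrated with a classic ''candidate'' one-way function, modular exponentiation, whose one-wayness (like the existence of any one-way function) is unproven. A rough Python sketch, with parameters chosen purely for illustration:

# Candidate one-way function: x -> g**x mod p. Forward evaluation is fast;
# inverting it is the discrete logarithm problem, for which no
# polynomial-time algorithm is known (though none is ruled out either).
p, g = 2_147_483_647, 7   # an illustrative prime modulus and small base

def forward(x: int) -> int:
    """Easy direction: built-in modular exponentiation, O(log x) multiplies."""
    return pow(g, x, p)

def invert_bruteforce(y: int) -> int:
    """Hard direction (naively): try exponents one by one."""
    acc = 1
    for x in range(p - 1):
        if acc == y:
            return x
        acc = acc * g % p
    raise ValueError("no preimage found")

y = forward(123_456)
x_rec = invert_bruteforce(y)   # ~123,456 iterations even at this tiny size
print(forward(x_rec) == y)     # True: some preimage was recovered, slowly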


Juris Hartmanis
Juris Hartmanis (July 5, 1928 – July 29, 2022) was a Latvian-born American computer scientist and computational theorist who, with Richard E. Stearns, received the 1993 ACM Turing Award "in recognition of their seminal paper which established the foundations for the field of computational complexity theory".

Life and career

Hartmanis was born in Latvia on July 5, 1928. He was a son of Mārtiņš Hartmanis, a general in the Latvian Army, and Irma Marija Hartmane. He was the younger brother of the poet Astrid Ivask. After the Soviet Union occupied Latvia in 1940, Mārtiņš Hartmanis was arrested by the Soviets and died in a prison. Later in World War II, the wife and children of Mārtiņš Hartmanis left Latvia in 1944 as refugees, fearing for their safety if the Soviet Union took over Latvia again. They first moved to Germany, where Juris Hartmanis received the equivalent of a master's degree in physics from the University of Marburg. He then moved to the United States, where in 1951 he recei ...




Parsimonious Reduction
In computational complexity theory and game complexity, a parsimonious reduction is a transformation from one problem to another (a reduction) that preserves the number of solutions. Informally, it is a bijection between the respective sets of solutions of two problems. A general reduction from problem A to problem B is a transformation that guarantees that whenever A has a solution B also has ''at least one'' solution and vice versa. A parsimonious reduction guarantees that for every solution of A, there exists ''a unique solution'' of B and vice versa. Parsimonious reductions are commonly used in computational complexity for proving the hardness of counting problems, for counting complexity classes such as #P. Additionally, they are used in game complexity, as a way to design hard puzzles that have a unique solution, as many types of puzzles require.

Formal definition

Let x be an instance of problem X. A ''parsimonious reduction'' R from problem X to problem Y is a reductio ...
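As a concrete illustration in Python: the textbook map between independent sets and vertex covers of a graph is parsimonious, since S is an independent set exactly when V \ S is a vertex cover. The graph and helper names below are invented toy choices, verified here by exhaustive enumeration.

from itertools import combinations

# Toy instance: the path graph on four vertices.
V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 3)]

def is_independent(S):
    """No edge has both endpoints inside S."""
    return all(not (u in S and v in S) for u, v in E)

def is_vertex_cover(C):
    """Every edge has at least one endpoint inside C."""
    return all(u in C or v in C for u, v in E)

# The map S -> V \ S is a bijection between independent sets of size k
# and vertex covers of size len(V) - k, so solution counts are preserved.
k = 2
ind_sets = [set(S) for S in combinations(V, k) if is_independent(S)]
covers = [set(C) for C in combinations(V, len(V) - k) if is_vertex_cover(C)]

print(len(ind_sets) == len(covers))                 # True: counts agree
print(all(set(V) - S in covers for S in ind_sets))  # True: the bijection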


Many-one Reduction
In computability theory and computational complexity theory, a many-one reduction (also called mapping reduction) is a reduction which converts instances of one decision problem L_1 into instances of a second decision problem L_2, where the instance reduced to is in the language L_2 if the initial instance was in its language L_1, and is not in the language L_2 if the initial instance was not in its language L_1. Thus if we can decide whether L_2 instances are in the language L_2, we can decide whether L_1 instances are in its language by applying the reduction and solving L_2. Thus, reductions can be used to measure the relative computational difficulty of two problems. It is said that L_1 reduces to L_2 if, in layman's terms, L_2 is at least as hard to solve as L_1. That is to say, any algorithm that solves L_2 can also be used as part of an (otherwise relatively simple) program that solves L_1. Many-one reductions are a special case and stronger form of Turing reductions. With many-one red ...
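A toy many-one reduction in Python (both languages and the map f are invented for illustration): membership in L_1, the binary strings with an odd number of 1s, is decided by mapping each string to a number and consulting a decider for L_2, the odd natural numbers.

def f(w: str) -> int:
    """The reduction: computable, and w is in L_1 iff f(w) is in L_2."""
    return w.count("1")

def decide_L2(n: int) -> bool:
    """Decider for L_2 = the odd natural numbers."""
    return n % 2 == 1

def decide_L1(w: str) -> bool:
    """Decide L_1 by applying the reduction and then solving L_2."""
    return decide_L2(f(w))

for w in ["", "1", "1010", "111"]:
    assert decide_L1(w) == (w.count("1") % 2 == 1)
print("membership preserved in both directions")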


Polynomial-time Reduction
In computational complexity theory, a polynomial-time reduction is a method for solving one problem using another. One shows that if a hypothetical subroutine solving the second problem exists, then the first problem can be solved by transforming or reducing it to inputs for the second problem and calling the subroutine one or more times. If both the time required to transform the first problem to the second and the number of times the subroutine is called are polynomial, then the first problem is polynomial-time reducible to the second. A polynomial-time reduction proves that the first problem is no more difficult than the second one, because whenever an efficient algorithm exists for the second problem, one exists for the first problem as well. By contraposition, if no efficient algorithm exists for the first problem, none exists for the second either. Polynomial-time reductions are frequently used in complexity theory for defining both complexity classes and complete problems ...
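For instance, INDEPENDENT SET polynomial-time reduces to CLIQUE: a set of vertices is independent in a graph exactly when it forms a clique in the complement graph. A Python sketch under that standard reduction (the brute-force has_clique below merely stands in for the hypothetical subroutine; all names are invented):

from itertools import combinations

def complement(n, edges):
    """The reduction: build the complement graph in polynomial time."""
    all_pairs = set(combinations(range(n), 2))
    return all_pairs - {tuple(sorted(e)) for e in edges}

def has_clique(n, edges, k):
    """Stand-in for the subroutine for the second problem. A polynomial-time
    reduction does not require the subroutine itself to be fast."""
    edges = set(edges)
    return any(all(tuple(sorted((u, v))) in edges for u, v in combinations(S, 2))
               for S in combinations(range(n), k))

def has_independent_set(n, edges, k):
    """Solve INDEPENDENT SET with one call to the CLIQUE subroutine."""
    return has_clique(n, complement(n, edges), k)

# Path on 4 vertices: {0, 2} and {1, 3} are independent sets of size 2,
# but there is no independent set of size 3.
print(has_independent_set(4, [(0, 1), (1, 2), (2, 3)], 2))  # True
print(has_independent_set(4, [(0, 1), (1, 2), (2, 3)], 3))  # False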