Computational Complexity Theory

In theoretical computer science and mathematics, computational complexity theory focuses on classifying computational problems according to their resource usage, and relating these classes to each other. A computational problem is a task solved by a computer. A computational problem is solvable by mechanical application of mathematical steps, such as an algorithm. A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition by introducing mathematical models of computation to study these problems and quantifying their computational complexity, i.e., the amount of resources needed to solve them, such as time and storage. Other measures of complexity are also used, such as the amount of communication (used in communication complexity), the number of gates in a circuit (used in circuit complexity) and the number of processors (used in parallel computing). One of the roles of computational complexity theory is to determine the practical limits on what computers can and cannot do. The P versus NP problem, one of the seven Millennium Prize Problems, belongs to the field of computational complexity.

Closely related fields in theoretical computer science are analysis of algorithms and computability theory. A key distinction between analysis of algorithms and computational complexity theory is that the former is devoted to analyzing the amount of resources needed by a particular algorithm to solve a problem, whereas the latter asks a more general question about all possible algorithms that could be used to solve the same problem. More precisely, computational complexity theory tries to classify problems that can or cannot be solved with appropriately restricted resources. In turn, imposing restrictions on the available resources is what distinguishes computational complexity from computability theory: the latter theory asks what kinds of problems can, in principle, be solved algorithmically.


Computational problems


Problem instances

A computational problem can be viewed as an infinite collection of ''instances'' together with a set (possibly empty) of ''solutions'' for every instance. The input string for a computational problem is referred to as a problem instance, and should not be confused with the problem itself. In computational complexity theory, a problem refers to the abstract question to be solved. In contrast, an instance of this problem is a rather concrete utterance, which can serve as the input for a decision problem. For example, consider the problem of primality testing. The instance is a number (e.g., 15) and the solution is "yes" if the number is prime and "no" otherwise (in this case, 15 is not prime and the answer is "no"). Stated another way, the ''instance'' is a particular input to the problem, and the ''solution'' is the output corresponding to the given input.

To further highlight the difference between a problem and an instance, consider the following instance of the decision version of the travelling salesman problem: Is there a route of at most 2000 kilometres passing through all of Germany's 15 largest cities? The quantitative answer to this particular problem instance is of little use for solving other instances of the problem, such as asking for a round trip through all sites in Milan whose total length is at most 10 km. For this reason, complexity theory addresses computational problems and not particular problem instances.
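
To make the primality-testing example concrete, here is a minimal Python sketch that decides the instance 15 by trial division (the function name is illustrative, not taken from any particular source):

```python
def is_prime(n: int) -> bool:
    """Decide one instance of the primality-testing problem."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:      # only divisors up to sqrt(n) need checking
        if n % d == 0:
            return False   # a nontrivial divisor was found
        d += 1
    return True

# The instance from the text: 15 = 3 * 5, so the answer is "no".
print(is_prime(15))  # False
```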


Representing problem instances

When considering computational problems, a problem instance is a string over an alphabet. Usually, the alphabet is taken to be the binary alphabet (i.e., the set {0, 1}), and thus the strings are bitstrings. As in a real-world computer, mathematical objects other than bitstrings must be suitably encoded. For example, integers can be represented in binary notation, and graphs can be encoded directly via their adjacency matrices, or by encoding their adjacency lists in binary. Even though some proofs of complexity-theoretic theorems regularly assume some concrete choice of input encoding, one tries to keep the discussion abstract enough to be independent of the choice of encoding. This can be achieved by ensuring that different representations can be transformed into each other efficiently.
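
As an illustration of such an encoding, the following sketch shows one possible way (among many equivalent ones) to turn a graph into a bitstring via its adjacency matrix; the helper name is hypothetical:

```python
def encode_graph(adjacency_matrix) -> str:
    """Encode a graph as a bitstring by concatenating the rows of its
    adjacency matrix; an n-vertex graph becomes an n*n-bit string."""
    return "".join("1" if entry else "0"
                   for row in adjacency_matrix
                   for entry in row)

# A triangle on 3 vertices encodes to a 9-bit string.
triangle = [[0, 1, 1],
            [1, 0, 1],
            [1, 1, 0]]
print(encode_graph(triangle))  # 011101110
```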


Decision problems as formal languages

Decision problems are one of the central objects of study in computational complexity theory. A decision problem is a special type of computational problem whose answer is either ''yes'' or ''no'', or alternately either 1 or 0. A decision problem can be viewed as a formal language, where the members of the language are instances whose output is yes, and the non-members are those instances whose output is no. The objective is to decide, with the aid of an algorithm, whether a given input string is a member of the formal language under consideration. If the algorithm deciding this problem returns the answer ''yes'', the algorithm is said to accept the input string, otherwise it is said to reject the input. An example of a decision problem is the following. The input is an arbitrary graph. The problem consists in deciding whether the given graph is connected or not. The formal language associated with this decision problem is then the set of all connected graphs — to obtain a precise definition of this language, one has to decide how graphs are encoded as binary strings.
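
A minimal sketch of deciding this language, assuming the graph is given as an adjacency-list dictionary (both the representation and the function name are illustrative choices):

```python
from collections import deque

def is_connected(adj) -> bool:
    """Accept iff the input graph is connected.
    `adj` maps each vertex to a list of its neighbours."""
    if not adj:
        return True                 # the empty graph is trivially connected
    start = next(iter(adj))
    seen = {start}
    queue = deque([start])
    while queue:                    # breadth-first search from `start`
        v = queue.popleft()
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen) == len(adj)    # accept iff every vertex was reached

print(is_connected({0: [1], 1: [0, 2], 2: [1]}))  # True  -> "accept"
print(is_connected({0: [1], 1: [0], 2: []}))      # False -> "reject"
```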


Function problems

A function problem is a computational problem where a single output (of a total function) is expected for every input, but the output is more complex than that of a decision problem—that is, the output isn't just yes or no. Notable examples include the travelling salesman problem and the integer factorization problem. It is tempting to think that the notion of function problems is much richer than the notion of decision problems. However, this is not really the case, since function problems can be recast as decision problems. For example, the multiplication of two integers can be expressed as the set of triples (''a'', ''b'', ''c'') such that the relation ''a'' × ''b'' = ''c'' holds. Deciding whether a given triple is a member of this set corresponds to solving the problem of multiplying two numbers.
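
The recasting of multiplication as a decision problem can be written directly; this is a sketch with an illustrative function name:

```python
def in_multiplication_language(a: int, b: int, c: int) -> bool:
    """Decide membership of (a, b, c) in { (a, b, c) : a * b = c }."""
    return a * b == c

print(in_multiplication_language(6, 7, 42))  # True:  6 * 7 = 42
print(in_multiplication_language(6, 7, 43))  # False
```

Querying such a decider for each bit of the output recovers the product itself, which is why the decision formulation loses nothing essential.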


Measuring the size of an instance

To measure the difficulty of solving a computational problem, one may wish to see how much time the best algorithm requires to solve the problem. However, the running time may, in general, depend on the instance. In particular, larger instances will require more time to solve. Thus the time required to solve a problem (or the space required, or any measure of complexity) is calculated as a function of the size of the instance. This is usually taken to be the size of the input in bits. Complexity theory is interested in how algorithms scale with an increase in the input size. For instance, in the problem of finding whether a graph is connected, how much more time does it take to solve a problem for a graph with 2''n'' vertices compared to the time taken for a graph with ''n'' vertices? If the input size is ''n'', the time taken can be expressed as a function of ''n''. Since the time taken on different inputs of the same size can be different, the worst-case time complexity T(''n'') is defined to be the maximum time taken over all inputs of size ''n''. If T(''n'') is a polynomial in ''n'', then the algorithm is said to be a polynomial-time algorithm. Cobham's thesis argues that a problem can be solved with a feasible amount of resources if it admits a polynomial-time algorithm.
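
The definition of T(''n'') as a maximum over all inputs of size ''n'' can be illustrated empirically. The sketch below uses an invented toy decider and wall-clock time purely for illustration; complexity theory proper counts machine steps, not seconds:

```python
import itertools
import time

def toy_decider(bits: str) -> bool:
    """A toy decision procedure: accept palindromic bitstrings."""
    return bits == bits[::-1]

def worst_case_time(n: int) -> float:
    """Empirical analogue of T(n): the maximum running time of
    toy_decider over all 2**n inputs of size n."""
    worst = 0.0
    for tup in itertools.product("01", repeat=n):
        s = "".join(tup)
        start = time.perf_counter()
        toy_decider(s)
        worst = max(worst, time.perf_counter() - start)
    return worst

print(worst_case_time(8))  # max observed time over all 256 inputs of size 8
```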


Machine models and complexity measures


Turing machine

A Turing machine is a mathematical model of a general computing machine. It is a theoretical device that manipulates symbols contained on a strip of tape. Turing machines are not intended as a practical computing technology, but rather as a general model of a computing machine—anything from an advanced supercomputer to a mathematician with a pencil and paper. It is believed that if a problem can be solved by an algorithm, there exists a Turing machine that solves the problem. Indeed, this is the statement of the Church–Turing thesis. Furthermore, it is known that everything that can be computed on other models of computation known to us today, such as a RAM machine, Conway's Game of Life, cellular automata, lambda calculus or any programming language, can be computed on a Turing machine. Since Turing machines are easy to analyze mathematically, and are believed to be as powerful as any other model of computation, the Turing machine is the most commonly used model in complexity theory.

Many types of Turing machines are used to define complexity classes, such as deterministic Turing machines, probabilistic Turing machines, non-deterministic Turing machines, quantum Turing machines, symmetric Turing machines and alternating Turing machines. They are all equally powerful in principle, but when resources (such as time or space) are bounded, some of these may be more powerful than others.

A deterministic Turing machine is the most basic Turing machine, which uses a fixed set of rules to determine its future actions. A probabilistic Turing machine is a deterministic Turing machine with an extra supply of random bits. The ability to make probabilistic decisions often helps algorithms solve problems more efficiently. Algorithms that use random bits are called randomized algorithms. A non-deterministic Turing machine is a deterministic Turing machine with an added feature of non-determinism, which allows a Turing machine to have multiple possible future actions from a given state. One way to view non-determinism is that the Turing machine branches into many possible computational paths at each step, and if it solves the problem in any of these branches, it is said to have solved the problem. Clearly, this model is not meant to be a physically realizable model; it is just a theoretically interesting abstract machine that gives rise to particularly interesting complexity classes. For examples, see non-deterministic algorithm.


Other machine models

Many machine models different from the standard multi-tape Turing machines have been proposed in the literature, for example random-access machines. Perhaps surprisingly, each of these models can be converted to another without providing any extra computational power. The time and memory consumption of these alternate models may vary. What all these models have in common is that the machines operate deterministically. However, some computational problems are easier to analyze in terms of more unusual resources. For example, a non-deterministic Turing machine is a computational model that is allowed to branch out to check many different possibilities at once. The non-deterministic Turing machine has very little to do with how we physically want to compute algorithms, but its branching exactly captures many of the mathematical models we want to analyze, so that non-deterministic time is a very important resource in analyzing computational problems.


Complexity measures

For a precise definition of what it means to solve a problem using a given amount of time and space, a computational model such as the deterministic Turing machine is used. The ''time required'' by a deterministic Turing machine ''M'' on input ''x'' is the total number of state transitions, or steps, the machine makes before it halts and outputs the answer ("yes" or "no"). A Turing machine ''M'' is said to operate within time ''f''(''n'') if the time required by ''M'' on each input of length ''n'' is at most ''f''(''n''). A decision problem ''A'' can be solved in time ''f''(''n'') if there exists a Turing machine operating in time ''f''(''n'') that solves the problem. Since complexity theory is interested in classifying problems based on their difficulty, one defines sets of problems based on some criteria. For instance, the set of problems solvable within time ''f''(''n'') on a deterministic Turing machine is then denoted by DTIME(''f''(''n'')).

Analogous definitions can be made for space requirements. Although time and space are the most well-known complexity resources, any complexity measure can be viewed as a computational resource. Complexity measures are very generally defined by the Blum complexity axioms. Other complexity measures used in complexity theory include communication complexity, circuit complexity, and decision tree complexity. The complexity of an algorithm is often expressed using big O notation.


Best, worst and average case complexity

The best, worst and average case complexity refer to three different ways of measuring the time complexity (or any other complexity measure) of different inputs of the same size. Since some inputs of size ''n'' may be faster to solve than others, we define the following complexities:

# Best-case complexity: This is the complexity of solving the problem for the best input of size ''n''.
# Average-case complexity: This is the complexity of solving the problem on average. This complexity is only defined with respect to a probability distribution over the inputs. For instance, if all inputs of the same size are assumed to be equally likely to appear, the average case complexity can be defined with respect to the uniform distribution over all inputs of size ''n''.
# Amortized analysis: Amortized analysis considers both the costly and less costly operations together over the whole series of operations of the algorithm.
# Worst-case complexity: This is the complexity of solving the problem for the worst input of size ''n''.

The order from cheap to costly is: best, average (of the discrete uniform distribution), amortized, worst.

For example, consider the deterministic sorting algorithm quicksort, which solves the problem of sorting a list of integers given as input. The worst case is when the pivot is always the largest or smallest value in the list (so the list is never divided). In this case the algorithm takes time O(''n''^2). If we assume that all possible permutations of the input list are equally likely, the average time taken for sorting is O(''n'' log ''n''). The best case occurs when each pivoting divides the list in half, also needing O(''n'' log ''n'') time.
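
A minimal sketch of the quicksort just described (first-element pivot; an illustrative implementation rather than any canonical one), with comments marking which inputs trigger the average- and worst-case behaviour:

```python
def quicksort(xs):
    """Deterministic quicksort using the first element as the pivot."""
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    smaller = [x for x in rest if x < pivot]    # left part of the split
    larger  = [x for x in rest if x >= pivot]   # right part of the split
    return quicksort(smaller) + [pivot] + quicksort(larger)

import random

# Average case: a random permutation splits roughly evenly -> O(n log n).
data = list(range(100))
random.shuffle(data)
assert quicksort(data) == list(range(100))

# Worst case: already-sorted input leaves one side of every split empty,
# giving n levels of recursion with O(n) work each -> O(n^2).
assert quicksort(list(range(100))) == list(range(100))
```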


Upper and lower bounds on the complexity of problems

To classify the computation time (or similar resources, such as space consumption), it is helpful to demonstrate upper and lower bounds on the maximum amount of time required by the most efficient algorithm to solve a given problem. The complexity of an algorithm is usually taken to be its worst-case complexity unless specified otherwise. Analyzing a particular algorithm falls under the field of analysis of algorithms. To show an upper bound ''T''(''n'') on the time complexity of a problem, one needs to show only that there is a particular algorithm with running time at most ''T''(''n''). However, proving lower bounds is much more difficult, since lower bounds make a statement about all possible algorithms that solve a given problem. The phrase "all possible algorithms" includes not just the algorithms known today, but any algorithm that might be discovered in the future. To show a lower bound of ''T''(''n'') for a problem requires showing that no algorithm can have time complexity lower than ''T''(''n'').

Upper and lower bounds are usually stated using big O notation, which hides constant factors and smaller terms. This makes the bounds independent of the specific details of the computational model used. For instance, if ''T''(''n'') = 7''n''^2 + 15''n'' + 40, in big O notation one would write ''T''(''n'') = O(''n''^2).


Complexity classes


Defining complexity classes

A complexity class is a set of problems of related complexity. Simpler complexity classes are defined by the following factors:

* The type of computational problem: The most commonly used problems are decision problems. However, complexity classes can be defined based on function problems, counting problems, optimization problems, promise problems, etc.
* The model of computation: The most common model of computation is the deterministic Turing machine, but many complexity classes are based on non-deterministic Turing machines, Boolean circuits, quantum Turing machines, monotone circuits, etc.
* The resource (or resources) that is being bounded and the bound: These two properties are usually stated together, such as "polynomial time", "logarithmic space", "constant depth", etc.

Some complexity classes have complicated definitions that do not fit into this framework. Thus, a typical complexity class has a definition like the following:

:The set of decision problems solvable by a deterministic Turing machine within time ''f''(''n''). (This complexity class is known as DTIME(''f''(''n'')).)

But bounding the computation time above by some concrete function ''f''(''n'') often yields complexity classes that depend on the chosen machine model. For instance, the language { ''xx'' : ''x'' is any binary string } can be solved in linear time on a multi-tape Turing machine, but necessarily requires quadratic time in the model of single-tape Turing machines. If we allow polynomial variations in running time, the Cobham–Edmonds thesis states that "the time complexities in any two reasonable and general models of computation are polynomially related". This forms the basis for the complexity class P, which is the set of decision problems solvable by a deterministic Turing machine within polynomial time. The corresponding set of function problems is FP.


Important complexity classes

Many important complexity classes can be defined by bounding the time or space used by the algorithm. Some important complexity classes of decision problems defined in this manner are the following:

* DTIME(''f''(''n'')) and NTIME(''f''(''n'')): problems solvable by a deterministic (respectively non-deterministic) Turing machine in time O(''f''(''n''))
* P and NP: deterministic and non-deterministic polynomial time
* EXPTIME and NEXPTIME: deterministic and non-deterministic exponential time
* DSPACE(''f''(''n'')) and NSPACE(''f''(''n'')): deterministic and non-deterministic space O(''f''(''n''))
* L and NL: deterministic and non-deterministic logarithmic space
* PSPACE and NPSPACE: deterministic and non-deterministic polynomial space
* EXPSPACE and NEXPSPACE: deterministic and non-deterministic exponential space

The logarithmic-space classes (necessarily) do not take into account the space needed to represent the problem. It turns out that PSPACE = NPSPACE and EXPSPACE = NEXPSPACE by Savitch's theorem.

Other important complexity classes include BPP, ZPP and RP, which are defined using probabilistic Turing machines; AC and NC, which are defined using Boolean circuits; and BQP and QMA, which are defined using quantum Turing machines. #P is an important complexity class of counting problems (not decision problems). Classes like IP and AM are defined using interactive proof systems. ALL is the class of all decision problems.


Hierarchy theorems

For the complexity classes defined in this way, it is desirable to prove that relaxing the requirements on (say) computation time indeed defines a bigger set of problems. In particular, although DTIME(''n'') is contained in DTIME(''n''^2), it would be interesting to know if the inclusion is strict. For time and space requirements, the answer to such questions is given by the time and space hierarchy theorems respectively. They are called hierarchy theorems because they induce a proper hierarchy on the classes defined by constraining the respective resources. Thus there are pairs of complexity classes such that one is properly included in the other. Having deduced such proper set inclusions, we can proceed to make quantitative statements about how much more additional time or space is needed in order to increase the number of problems that can be solved. More precisely, the time hierarchy theorem states that

:\mathsf{DTIME}\big(f(n)\big) \subsetneq \mathsf{DTIME}\big(f(n) \cdot \log^2(f(n))\big).

The space hierarchy theorem states that

:\mathsf{DSPACE}\big(f(n)\big) \subsetneq \mathsf{DSPACE}\big(f(n) \cdot \log(f(n))\big).

The time and space hierarchy theorems form the basis for most separation results of complexity classes. For instance, the time hierarchy theorem tells us that P is strictly contained in EXPTIME, and the space hierarchy theorem tells us that L is strictly contained in PSPACE.


Reduction

Many complexity classes are defined using the concept of a reduction. A reduction is a transformation of one problem into another problem. It captures the informal notion of a problem being at most as difficult as another problem. For instance, if a problem ''X'' can be solved using an algorithm for ''Y'', ''X'' is no more difficult than ''Y'', and we say that ''X'' ''reduces'' to ''Y''. There are many different types of reductions, based on the method of reduction, such as Cook reductions, Karp reductions and Levin reductions, and the bound on the complexity of reductions, such as polynomial-time reductions or log-space reductions. The most commonly used reduction is a polynomial-time reduction. This means that the reduction process takes polynomial time. For example, the problem of squaring an integer can be reduced to the problem of multiplying two integers. This means an algorithm for multiplying two integers can be used to square an integer. Indeed, this can be done by giving the same input to both inputs of the multiplication algorithm (a minimal sketch follows this passage). Thus we see that squaring is not more difficult than multiplication, since squaring can be reduced to multiplication.

This motivates the concept of a problem being hard for a complexity class. A problem ''X'' is ''hard'' for a class of problems ''C'' if every problem in ''C'' can be reduced to ''X''. Thus no problem in ''C'' is harder than ''X'', since an algorithm for ''X'' allows us to solve any problem in ''C''. The notion of hard problems depends on the type of reduction being used. For complexity classes larger than P, polynomial-time reductions are commonly used. In particular, the set of problems that are hard for NP is the set of NP-hard problems.

If a problem ''X'' is in ''C'' and hard for ''C'', then ''X'' is said to be ''complete'' for ''C''. This means that ''X'' is the hardest problem in ''C''. (Since many problems could be equally hard, one might say that ''X'' is one of the hardest problems in ''C''.) Thus the class of NP-complete problems contains the most difficult problems in NP, in the sense that they are the ones most likely not to be in P. Because the problem P = NP is not solved, being able to reduce a known NP-complete problem, Π2, to another problem, Π1, would indicate that there is no known polynomial-time solution for Π1. This is because a polynomial-time solution to Π1 would yield a polynomial-time solution to Π2. Similarly, because all problems in NP can be reduced to any NP-complete problem, finding an NP-complete problem that can be solved in polynomial time would mean that P = NP.
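
The squaring-to-multiplication reduction mentioned above is short enough to write out; `multiply` below stands in for an arbitrary multiplication algorithm (both names are illustrative):

```python
def multiply(a: int, b: int) -> int:
    """Stand-in for an arbitrary algorithm that multiplies two integers."""
    return a * b

def square(x: int) -> int:
    """Reduce squaring to multiplication by feeding the same input
    to both arguments of the multiplication algorithm."""
    return multiply(x, x)

print(square(12))  # 144
```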


Important open problems


P versus NP problem

The complexity class P is often seen as a mathematical abstraction modeling those computational tasks that admit an efficient algorithm. This hypothesis is called the Cobham–Edmonds thesis. The complexity class NP, on the other hand, contains many problems that people would like to solve efficiently, but for which no efficient algorithm is known, such as the Boolean satisfiability problem, the Hamiltonian path problem and the vertex cover problem. Since deterministic Turing machines are special non-deterministic Turing machines, it is easily observed that each problem in P is also a member of the class NP.

The question of whether P equals NP is one of the most important open questions in theoretical computer science because of the wide implications of a solution. If the answer is yes, many important problems can be shown to have more efficient solutions. These include various types of integer programming problems in operations research, many problems in logistics, protein structure prediction in biology, and the ability to find formal proofs of pure mathematics theorems. The P versus NP problem is one of the Millennium Prize Problems proposed by the Clay Mathematics Institute. There is a US$1,000,000 prize for resolving the problem.


Problems in NP not known to be in P or NP-complete

It was shown by Ladner that if P ≠ NP then there exist problems in NP that are neither in P nor NP-complete. Such problems are called NP-intermediate problems. The graph isomorphism problem, the discrete logarithm problem and the integer factorization problem are examples of problems believed to be NP-intermediate. They are some of the very few NP problems not known to be in P or to be NP-complete.

The graph isomorphism problem is the computational problem of determining whether two finite graphs are isomorphic. An important unsolved problem in complexity theory is whether the graph isomorphism problem is in P, NP-complete, or NP-intermediate. The answer is not known, but it is believed that the problem is at least not NP-complete. If graph isomorphism is NP-complete, the polynomial time hierarchy collapses to its second level. Since it is widely believed that the polynomial hierarchy does not collapse to any finite level, it is believed that graph isomorphism is not NP-complete. The best algorithm for this problem, due to László Babai and Eugene Luks, has run time 2^{O(√(''n'' log ''n''))} for graphs with ''n'' vertices, although some recent work by Babai offers some potentially new perspectives on this.

The integer factorization problem is the computational problem of determining the prime factorization of a given integer. Phrased as a decision problem, it is the problem of deciding whether the input has a prime factor less than ''k''. No efficient integer factorization algorithm is known, and this fact forms the basis of several modern cryptographic systems, such as the RSA algorithm. The integer factorization problem is in NP and in co-NP (and even in UP and co-UP). If the problem is NP-complete, the polynomial time hierarchy will collapse to its first level (i.e., NP will equal co-NP). The best known algorithm for integer factorization is the general number field sieve, which takes time O(e^{(64/9)^{1/3} (log ''n'')^{1/3} (log log ''n'')^{2/3}}) to factor an odd integer ''n''. However, the best known quantum algorithm for this problem, Shor's algorithm, does run in polynomial time. Unfortunately, this fact does not say much about where the problem lies with respect to non-quantum complexity classes.
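
The decision version of factorization mentioned above can be decided by brute force; this sketch (illustrative name, exponential in the bit length of the input, unlike the sub-exponential number field sieve) shows the yes/no formulation:

```python
def has_prime_factor_below(n: int, k: int) -> bool:
    """Decide whether n has a prime factor less than k, by trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:   # the first divisor found is the smallest prime factor
            return d < k
        d += 1
    return 1 < n < k     # n > 1 with no small divisor is itself prime

print(has_prime_factor_below(91, 10))  # True:  91 = 7 * 13
print(has_prime_factor_below(91, 7))   # False: the smallest prime factor is 7
```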


Separations between other complexity classes

Many known complexity classes are suspected to be unequal, but this has not been proved. For instance P ⊆ NP ⊆ PP ⊆ PSPACE, but it is possible that P = PSPACE. If P is not equal to NP, then P is not equal to PSPACE either. Since there are many known complexity classes between P and PSPACE, such as RP, BPP, PP, BQP, MA, PH, etc., it is possible that all these complexity classes collapse to one class. Proving that any of these classes are unequal would be a major breakthrough in complexity theory.

Along the same lines, co-NP is the class containing the complement problems (i.e. problems with the ''yes''/''no'' answers reversed) of NP problems. It is believed that NP is not equal to co-NP; however, it has not yet been proven. It is clear that if these two complexity classes are not equal then P is not equal to NP, since P = co-P. Thus if P = NP we would have co-P = co-NP, whence NP = P = co-P = co-NP.

Similarly, it is not known if L (the set of all problems that can be solved in logarithmic space) is strictly contained in P or equal to P. Again, there are many complexity classes between the two, such as NL and NC, and it is not known if they are distinct or equal classes. It is suspected that P and BPP are equal. However, it is currently open if BPP = NEXP.


Intractability

A problem that can be solved in theory (e.g. given large but finite resources, especially time), but for which in practice ''any'' solution takes too many resources to be useful, is known as an ''intractable problem''. Conversely, a problem that can be solved in practice is called a ''tractable problem'', literally "a problem that can be handled". The term ''infeasible'' (literally "cannot be done") is sometimes used interchangeably with ''intractable'', though this risks confusion with a feasible solution in mathematical optimization.

Tractable problems are frequently identified with problems that have polynomial-time solutions (P, PTIME); this is known as the Cobham–Edmonds thesis. Problems that are known to be intractable in this sense include those that are EXPTIME-hard. If NP is not the same as P, then NP-hard problems are also intractable in this sense.

However, this identification is inexact: a polynomial-time solution with large degree or large leading coefficient grows quickly, and may be impractical for practical size problems; conversely, an exponential-time solution that grows slowly may be practical on realistic input, or a solution that takes a long time in the worst case may take a short time in most cases or the average case, and thus still be practical. Saying that a problem is not in P does not imply that all large cases of the problem are hard or even that most of them are. For example, the decision problem in Presburger arithmetic has been shown not to be in P, yet algorithms have been written that solve the problem in reasonable times in most cases. Similarly, algorithms can solve the NP-complete knapsack problem over a wide range of sizes in less than quadratic time, and SAT solvers routinely handle large instances of the NP-complete Boolean satisfiability problem.

To see why exponential-time algorithms are generally unusable in practice, consider a program that makes 2^''n'' operations before halting. For small ''n'', say 100, and assuming for the sake of example that the computer does 10^12 operations each second, the program would run for about 4 × 10^10 years, which is the same order of magnitude as the age of the universe. Even with a much faster computer, the program would only be useful for very small instances, and in that sense the intractability of a problem is somewhat independent of technological progress. However, an exponential-time algorithm that takes 1.0001^''n'' operations is practical until ''n'' gets relatively large. Similarly, a polynomial-time algorithm is not always practical. If its running time is, say, ''n''^15, it is unreasonable to consider it efficient, and it is still useless except on small instances. Indeed, in practice even ''n''^3 or ''n''^2 algorithms are often impractical on realistic sizes of problems.


Continuous complexity theory

Continuous complexity theory can refer to complexity theory of problems that involve continuous functions that are approximated by discretizations, as studied in numerical analysis. One approach to complexity theory of numerical analysis is information-based complexity. Continuous complexity theory can also refer to complexity theory of the use of analog computation, which uses continuous dynamical systems and differential equations. Control theory can be considered a form of computation, and differential equations are used in the modelling of continuous-time and hybrid discrete-continuous-time systems.


History

An early example of algorithm complexity analysis is the running time analysis of the Euclidean algorithm done by Gabriel Lamé in 1844.

Before the actual research explicitly devoted to the complexity of algorithmic problems started off, numerous foundations were laid out by various researchers. Most influential among these was the definition of Turing machines by Alan Turing in 1936, which turned out to be a very robust and flexible simplification of a computer.

The beginning of systematic studies in computational complexity is attributed to the seminal 1965 paper "On the Computational Complexity of Algorithms" by Juris Hartmanis and Richard E. Stearns, which laid out the definitions of time complexity and space complexity, and proved the hierarchy theorems. In addition, in 1965 Edmonds suggested considering a "good" algorithm to be one with running time bounded by a polynomial of the input size.

Earlier papers studying problems solvable by Turing machines with specific bounded resources include John Myhill's definition of linear bounded automata (Myhill 1960), Raymond Smullyan's study of rudimentary sets (1961), as well as Hisao Yamada's paper on real-time computations (1962). Somewhat earlier, Boris Trakhtenbrot (1956), a pioneer in the field from the USSR, studied another specific complexity measure.

In 1967, Manuel Blum formulated a set of axioms (now known as Blum axioms) specifying desirable properties of complexity measures on the set of computable functions and proved an important result, the so-called speed-up theorem. The field began to flourish in 1971 when Stephen Cook and Leonid Levin proved the existence of practically relevant problems that are NP-complete. In 1972, Richard Karp took this idea a leap forward with his landmark paper, "Reducibility Among Combinatorial Problems", in which he showed that 21 diverse combinatorial and graph-theoretical problems, each infamous for its computational intractability, are NP-complete.


See also

* Context of computational complexity
* Descriptive complexity theory
* Game complexity
* Leaf language
* Limits of computation
* List of complexity classes
* List of computability and complexity topics
* List of important publications in theoretical computer science
* List of unsolved problems in computer science
* Parameterized complexity
* Proof complexity
* Quantum complexity theory
* Structural complexity theory
* Transcomputational problem
* Computational complexity of mathematical operations




External links


* The Complexity Zoo
* What are the most important results (and papers) in complexity theory that everyone should know?
* Scott Aaronson: Why Philosophers Should Care About Computational Complexity