In computational complexity theory, asymptotic computational complexity is the usage of asymptotic analysis for the estimation of computational complexity of algorithms and computational problems, commonly associated with the usage of the big O notation.
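
For reference, the standard definition behind this notation can be sketched as follows (a generic textbook formulation, not material drawn from this article's sources):

```latex
% Big O as an asymptotic upper bound: f grows no faster than g,
% up to a constant factor, for all sufficiently large n.
f(n) = O\bigl(g(n)\bigr)
  \iff
  \exists\, c > 0\ \exists\, n_0 \in \mathbb{N}\ \forall n \ge n_0:\quad
  0 \le f(n) \le c \cdot g(n)
```

For instance, 3''n''^3 + 5''n'' = O(''n''^3), witnessed by ''c'' = 4 and ''n''_0 = 3.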


Scope

With respect to computational resources, asymptotic time complexity and asymptotic space complexity are commonly estimated. Other asymptotically estimated behaviors include circuit complexity and various measures of parallel computation, such as the number of (parallel) processors.

Since the ground-breaking 1965 paper by Juris Hartmanis and Richard E. Stearns and the 1979 book by Michael Garey and David S. Johnson on NP-completeness, the term "computational complexity" (of algorithms) has become commonly referred to as asymptotic computational complexity. Further, unless specified otherwise, the term "computational complexity" usually refers to the upper bound for the asymptotic computational complexity of an algorithm or a problem, which is usually written in terms of the big O notation, e.g. O(''n''^3). Other types of (asymptotic) computational complexity estimates are lower bounds ("big Omega" notation; e.g., Ω(''n'')) and asymptotically tight estimates, when the asymptotic upper and lower bounds coincide (written using the "big Theta" notation; e.g., Θ(''n'' log ''n'')). A further tacit assumption is that worst-case analysis of computational complexity is in question unless stated otherwise. An alternative approach is probabilistic analysis of algorithms.
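
As an informal illustration of how these conventions interact (a sketch for illustration only; the function name and operation count below are hypothetical, not taken from the sources above), consider counting comparisons in insertion sort: the worst-case comparison count is both O(''n''^2) and Ω(''n''^2), hence Θ(''n''^2), while an already sorted input needs only Θ(''n'') comparisons; under the worst-case convention the algorithm is therefore usually quoted simply as O(''n''^2).

```python
def insertion_sort(a):
    """Sort the list a in place and return the number of comparisons made."""
    comparisons = 0  # the "elementary operations" counted for the analysis
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        # Reverse-sorted input (worst case): the inner loop runs i times,
        # about n^2/2 comparisons overall, i.e. Theta(n^2).
        # Already sorted input (best case): the loop exits immediately,
        # about n comparisons overall, i.e. Theta(n).
        while j >= 0:
            comparisons += 1
            if a[j] <= key:
                break
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return comparisons

print(insertion_sort(list(range(10, 0, -1))))  # worst case: 45 comparisons for n = 10
print(insertion_sort(list(range(1, 11))))      # best case: 9 comparisons for n = 10
```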


Types of algorithms considered

In most practical cases deterministic algorithms or randomized algorithms are discussed, although theoretical computer science also considers nondeterministic algorithms and other advanced models of computation.
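
A minimal sketch of this distinction (illustrative code, not part of the original article; the function names are hypothetical): a deterministic algorithm passes through the same sequence of states on every run with a given input, whereas a randomized algorithm consults random bits, so its running time is a random variable even for a fixed input.

```python
import random

def deterministic_max(xs):
    # Deterministic: the same input always yields the same sequence of
    # states and the same output; running time is Theta(n).
    best = xs[0]
    for x in xs[1:]:
        if x > best:
            best = x
    return best

def randomized_quickselect(xs, k):
    # Randomized: the random pivot makes the running time a random variable,
    # with expected time O(n) but worst case O(n^2) for unlucky pivots.
    xs = list(xs)
    while True:
        pivot = random.choice(xs)
        lows = [x for x in xs if x < pivot]
        pivots = [x for x in xs if x == pivot]
        highs = [x for x in xs if x > pivot]
        if k < len(lows):
            xs = lows
        elif k < len(lows) + len(pivots):
            return pivot
        else:
            k -= len(lows) + len(pivots)
            xs = highs

print(deterministic_max([3, 1, 4, 1, 5]))          # always 5
print(randomized_quickselect([3, 1, 4, 1, 5], 2))  # 3rd smallest value: 3
```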


See also

* Asymptotically optimal algorithm


References

* Michael Garey and David S. Johnson: ''Computers and Intractability: A Guide to the Theory of NP-Completeness.'' New York: W. H. Freeman & Co., 1979.