In computational complexity theory, asymptotic computational complexity is the use of asymptotic analysis to estimate the computational complexity of algorithms and computational problems. It is commonly associated with the use of big O notation.
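
As a minimal illustration (an added sketch, not part of the original text), the Python fragment below counts the elementary comparisons performed by a linear search; in the worst case the count equals the input size ''n'', which asymptotic analysis summarizes as O(''n'') running time.

    def linear_search(items, target):
        """Return (index of target or -1, number of comparisons performed)."""
        comparisons = 0
        for i, x in enumerate(items):
            comparisons += 1              # one elementary comparison per element
            if x == target:
                return i, comparisons
        return -1, comparisons

    # Worst case (target absent): exactly n comparisons for an input of size n,
    # so the running time grows linearly, i.e. it is O(n).
    for n in (10, 100, 1000):
        _, c = linear_search(list(range(n)), -1)
        print(n, c)                       # prints n, n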


Scope

With respect to computational resources, asymptotic time complexity and asymptotic space complexity are commonly estimated (a short illustration follows this paragraph). Other asymptotically estimated behaviors include circuit complexity and various measures of parallel computation, such as the number of (parallel) processors.
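
As a hedged illustration of the difference between these two resources (an example added here, not from the original text), consider reversing a sequence of ''n'' elements: it can be done in O(''n'') time with O(1) auxiliary space, or in O(''n'') time with O(''n'') auxiliary space.

    def reverse_in_place(a):
        """O(n) time, O(1) auxiliary space: swap pairs working inward."""
        i, j = 0, len(a) - 1
        while i < j:
            a[i], a[j] = a[j], a[i]
            i += 1
            j -= 1
        return a

    def reverse_copy(a):
        """O(n) time, O(n) auxiliary space: allocates a second list."""
        return [a[k] for k in range(len(a) - 1, -1, -1)]

    assert reverse_in_place([1, 2, 3]) == reverse_copy([1, 2, 3]) == [3, 2, 1]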
Since the groundbreaking 1965 paper by Juris Hartmanis and Richard E. Stearns and the 1979 book by Michael Garey and David S. Johnson on NP-completeness (see References),
the term "
computational complexity In computer science, the computational complexity or simply complexity of an algorithm is the amount of resources required to run it. Particular focus is given to computation time (generally measured by the number of needed elementary operations) ...
" (of algorithms) has become commonly referred to as asymptotic computational complexity. Further, unless specified otherwise, the term "computational complexity" usually refers to the
upper bound In mathematics, particularly in order theory, an upper bound or majorant of a subset of some preordered set is an element of that is greater than or equal to every element of . Dually, a lower bound or minorant of is defined to be an element ...
for the asymptotic computational complexity of an algorithm or a problem, which is usually written in terms of the big O notation, e.g.. O(n^3). Other types of (asymptotic) computational complexity estimates are
lower bound In mathematics, particularly in order theory, an upper bound or majorant of a subset of some preordered set is an element of that is greater than or equal to every element of . Dually, a lower bound or minorant of is defined to be an element ...
s (" Big Omega" notation; e.g., Ω(''n'')) and asymptotically tight estimates, when the asymptotic upper and lower bounds coincide (written using the "
big Theta Big ''O'' notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity. Big O is a member of a family of notations invented by Paul Bachmann, Edmund Landa ...
"; e.g., Θ(''n'' log ''n'')). A further
A further tacit assumption is that the worst-case analysis of computational complexity is meant unless stated otherwise. An alternative approach is the probabilistic analysis of algorithms, which studies expected behavior over a distribution of inputs; a small example follows.
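
As an illustrative (added) sketch of why the distinction matters: the deterministic quicksort below, which always picks the first element as pivot, performs about n^2/2 comparisons on an already-sorted input (its worst case) but only about 1.39 n log2 n comparisons on average over random inputs. Worst-case analysis reports the former quantity, probabilistic analysis the latter.

    import random

    def quicksort_comparisons(a):
        """Count comparisons made by quicksort with a first-element pivot.

        An explicit stack is used instead of recursion so that the worst case
        (an already-sorted input, giving partition depth n) runs in Python.
        """
        a = list(a)
        count = 0
        stack = [(0, len(a) - 1)]
        while stack:
            lo, hi = stack.pop()
            if lo >= hi:
                continue
            pivot, i = a[lo], lo
            for j in range(lo + 1, hi + 1):
                count += 1                      # one comparison against the pivot
                if a[j] < pivot:
                    i += 1
                    a[i], a[j] = a[j], a[i]
            a[lo], a[i] = a[i], a[lo]           # place the pivot between the halves
            stack.append((lo, i - 1))
            stack.append((i + 1, hi))
        return count

    n = 2000
    print(quicksort_comparisons(range(n)))                     # worst case: ~ n^2 / 2
    print(quicksort_comparisons(random.sample(range(n), n)))   # average: ~ 1.39 n log2 n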


Types of algorithms considered

In most practical cases, deterministic algorithms or randomized algorithms are discussed, although theoretical computer science also considers nondeterministic algorithms and other advanced models of computation.
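
As an added sketch of the contrast: a deterministic algorithm's behavior is fixed by its input, so an adversarial input (such as the sorted array in the previous section) can force its worst case on every run. A randomized algorithm, such as the quicksort below, instead makes its expected O(''n'' log ''n'') comparison count hold for every input; only unlucky random choices, not any fixed input, can cause quadratic behavior.

    import random

    def randomized_quicksort(a):
        """Randomized quicksort: expected O(n log n) comparisons on every input."""
        if len(a) <= 1:
            return list(a)
        pivot = random.choice(a)          # the only source of randomness
        less = [x for x in a if x < pivot]
        equal = [x for x in a if x == pivot]
        greater = [x for x in a if x > pivot]
        return randomized_quicksort(less) + equal + randomized_quicksort(greater)

    # Two runs on the same input may pass through different internal states
    # (different pivots) yet always produce the same sorted output.
    print(randomized_quicksort([3, 1, 4, 1, 5, 9, 2, 6]))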


See also

* Asymptotically optimal algorithm


References

* Michael Garey and David S. Johnson: ''Computers and Intractability: A Guide to the Theory of NP-Completeness''. New York: W. H. Freeman & Co., 1979.