In computational complexity theory, the average-case complexity of an algorithm is the amount of some computational resource (typically time) used by the algorithm, averaged over all possible inputs. It is frequently contrasted with worst-case complexity, which considers the maximal complexity of the algorithm over all possible inputs.

There are three primary motivations for studying average-case complexity.O. Goldreich and S. Vadhan, Special issue on worst-case versus average-case complexity, Comput. Complex. 16, 325–330, 2007. First, although some problems may be intractable in the worst case, the inputs which elicit this behavior may rarely occur in practice, so the average-case complexity may be a more accurate measure of an algorithm's performance. Second, average-case complexity analysis provides tools and techniques to generate hard instances of problems which can be utilized in areas such as cryptography and derandomization. Third, average-case complexity allows discriminating the most efficient algorithm in practice among algorithms of equivalent best-case complexity (for instance Quicksort).

Average-case analysis requires a notion of an "average" input to an algorithm, which leads to the problem of devising a probability distribution over inputs. Alternatively, a randomized algorithm can be used. The analysis of such algorithms leads to the related notion of expected complexity.


History and background

The average-case performance of algorithms has been studied since modern notions of computational efficiency were developed in the 1950s. Much of this initial work focused on problems for which worst-case polynomial time algorithms were already known.A. Bogdanov and L. Trevisan, "Average-Case Complexity," Foundations and Trends in Theoretical Computer Science, Vol. 2, No. 1 (2006) 1–106. In 1973, Donald Knuth published Volume 3 of The Art of Computer Programming,D. Knuth, The Art of Computer Programming. Vol. 3, Addison-Wesley, 1973. which extensively surveys average-case performance of algorithms for problems solvable in worst-case polynomial time, such as sorting and median-finding.

An efficient algorithm for NP-complete problems is generally characterized as one which runs in polynomial time for all inputs; this is equivalent to requiring efficient worst-case complexity. However, an algorithm which is inefficient on a "small" number of inputs may still be efficient for "most" inputs that occur in practice. Thus, it is desirable to study the properties of these algorithms where the average-case complexity may differ from the worst-case complexity and to find methods to relate the two.

The fundamental notions of average-case complexity were developed by Leonid Levin in 1986 when he published a one-page paperL. Levin, "Average case complete problems," SIAM Journal on Computing, vol. 15, no. 1, pp. 285–286, 1986. defining average-case complexity and completeness while giving an example of a complete problem for distNP, the average-case analogue of NP.


Definitions


Efficient average-case complexity

The first task is to precisely define what is meant by an algorithm which is efficient "on average". An initial attempt might define an efficient average-case algorithm as one which runs in expected polynomial time over all possible inputs. Such a definition has various shortcomings; in particular, it is not robust to changes in the computational model. For example, suppose algorithm A runs in time t_A(x) on input x and algorithm B runs in time t_A(x)^2 on input x; that is, B is quadratically slower than A. Intuitively, any definition of average-case efficiency should capture the idea that A is efficient-on-average if and only if B is efficient-on-average. Suppose, however, that the inputs are drawn randomly from the uniform distribution of strings with length n, and that A runs in time n^2 on all inputs except the string 1^n, for which A takes time 2^n. Then it can be easily checked that the expected running time of A is polynomial but the expected running time of B is exponential.

To create a more robust definition of average-case efficiency, it makes sense to allow an algorithm A to run longer than polynomial time on some inputs, provided that the fraction of inputs on which A requires larger and larger running time becomes smaller and smaller. This intuition is captured in the following formula for average polynomial running time, which balances the polynomial trade-off between running time and fraction of inputs:

: \Pr_{x \in_R D_n} \left[ t_A(x) \geq t \right] \leq \frac{p(n)}{t^\varepsilon}

for every n, t > 0 and polynomial p, where t_A(x) denotes the running time of algorithm A on input x, and ε is a positive constant value.J. Wang, "Average-case computational complexity theory," Complexity Theory Retrospective II, pp. 295–328, 1997. Alternatively, this can be written as

: E_{x \in_R D_n} \left[ \frac{t_A(x)^\varepsilon}{n} \right] \leq C

for some constants C and ε, where n = |x|.S. Arora and B. Barak, Computational Complexity: A Modern Approach, Cambridge University Press, New York, NY, 2009. In other words, an algorithm A has good average-case complexity if, after running for t_A(n) steps, A can solve all but a \frac{n^c}{(t_A(n))^\varepsilon} fraction of inputs of length n, for some ε, c > 0.
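The A-versus-B example above can be checked numerically. The following sketch (an illustration written for this running example, not code from any of the cited references) computes the exact expected running times of A and B under the uniform distribution on length-n inputs, assuming t_A(x) = n^2 everywhere except t_A(1^n) = 2^n. The expectation of t_A stays polynomial while that of t_B = t_A^2 blows up exponentially, which is exactly the non-robustness the tail-bound definition is designed to avoid.

```python
# Exact expected running times for the running example: t_A(x) = n^2 on every
# length-n input except t_A(1^n) = 2^n, and t_B(x) = t_A(x)^2, with x drawn
# uniformly from {0,1}^n.  E[t_A] is roughly n^2 + 1 (polynomial), while
# E[t_B] contains a 2^n term (exponential), so "expected polynomial time"
# treats two quadratically related algorithms differently.
from fractions import Fraction

def expected_times(n):
    p_bad = Fraction(1, 2**n)        # probability of drawing the single bad input 1^n
    p_good = 1 - p_bad               # probability of drawing any other length-n input
    e_a = p_good * n**2 + p_bad * 2**n             # E[t_A]
    e_b = p_good * n**4 + p_bad * (2**n) ** 2      # E[t_B] = E[t_A^2]
    return float(e_a), float(e_b)

for n in (10, 20, 30, 40):
    e_a, e_b = expected_times(n)
    print(f"n = {n:2d}   E[t_A] ~ {e_a:14.1f}   E[t_B] ~ {e_b:.3e}")
```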


Distributional problem

The next step is to define the "average" input to a particular problem. This is achieved by associating the inputs of each problem with a particular probability distribution. That is, an "average-case" problem consists of a language L and an associated probability distribution D, which forms the pair (L, D). The two most common classes of distributions which are allowed are:

#Polynomial-time computable distributions (P-computable): these are distributions for which it is possible to compute the cumulative density of any given input x. More formally, given a probability distribution μ and a string x ∈ {0, 1}^n it is possible to compute the value \mu(x) = \sum\limits_{y \leq x} \Pr[y] in polynomial time. This implies that Pr[x] is also computable in polynomial time.
#Polynomial-time samplable distributions (P-samplable): these are distributions from which it is possible to draw random samples in polynomial time.

These two formulations, while similar, are not equivalent. If a distribution is P-computable it is also P-samplable, but the converse is not true if P ≠ P^{#P}.
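As a concrete (and deliberately trivial) illustration of the two notions, the sketch below treats the uniform distribution over {0,1}^n; the function names and the lexicographic ordering of strings are choices made here for illustration rather than anything fixed by the definitions. The cumulative density has a closed form, so the distribution is P-computable, Pr[x] can be recovered as a difference of two cumulative values, and sampling is likewise trivially polynomial time.

```python
# Uniform distribution over {0,1}^n, with strings ordered lexicographically
# (equivalently, by their value as binary numbers).  mu_uniform shows
# P-computability of the cumulative density, pr_uniform recovers Pr[x] from it,
# and sample_uniform shows P-samplability.  All names are illustrative.
import random

def mu_uniform(x: str) -> float:
    """Cumulative density mu(x) = Pr[y <= x] under the uniform distribution on {0,1}^|x|."""
    n = len(x)
    rank = int(x, 2)                    # number of length-n strings strictly smaller than x
    return (rank + 1) / 2**n

def pr_uniform(x: str) -> float:
    """Point probability Pr[x] = mu(x) - mu(predecessor of x); always 1/2^n here."""
    n = len(x)
    rank = int(x, 2)
    mu_prev = rank / 2**n if rank > 0 else 0.0   # cumulative density just below x
    return mu_uniform(x) - mu_prev

def sample_uniform(n: int) -> str:
    """Draw a uniformly random length-n bit string in time polynomial in n."""
    return "".join(random.choice("01") for _ in range(n))

print(mu_uniform("101"), pr_uniform("101"), sample_uniform(8))  # 0.75 0.125 e.g. '01101001'
```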


AvgP and distNP

A distributional problem (L, D) is in the complexity class AvgP if there is an efficient average-case algorithm for L, as defined above. The class AvgP is occasionally called distP in the literature. A distributional problem (L, D) is in the complexity class distNP if L is in NP and D is P-computable. When L is in NP and D is P-samplable, (L, D) belongs to sampNP. Together, AvgP and distNP define the average-case analogues of P and NP, respectively.


Reductions between distributional problems

Let (L, D) and (L', D') be two distributional problems. (L, D) average-case reduces to (L', D') (written (L, D) ≤_{AvgP} (L', D')) if there is a function f that for every n, on input x can be computed in time polynomial in n and

#(Correctness) x ∈ L if and only if f(x) ∈ L'
#(Domination) There are polynomials p and m such that, for every n and y,
: \sum\limits_{x: f(x) = y} D_n(x) \leq p(n) \, D'_{m(n)}(y)

The domination condition enforces the notion that if problem (L, D) is hard on average, then (L', D') is also hard on average. Intuitively, a reduction should provide a way to solve an instance x of problem L by computing f(x) and feeding the output to the algorithm which solves L'. Without the domination condition, this may not be possible since the algorithm which solves L in polynomial time on average may take super-polynomial time on a small number of inputs, but f may map these inputs into a much larger set of D' so that algorithm A' no longer runs in polynomial time on average. The domination condition only allows such strings to occur polynomially as often in D'.
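For intuition, the domination condition can be checked exhaustively on tiny instances. The sketch below is a toy construction invented here, with f, the distributions, and p(n) all hypothetical: a reduction that drops the last bit of its input maps the uniform distribution on {0,1}^n onto the uniform distribution on {0,1}^{n-1}, and domination holds with p(n) = 1 because every target string has exactly two preimages.

```python
# Brute-force check of the domination condition on a toy reduction: f drops the
# last bit, the source and target distributions are uniform, m(n) = n - 1, and
# p(n) = 1 suffices because every y has exactly two preimages of mass 2^{-n} each.
from fractions import Fraction
from itertools import product

def check_domination(n, f, source_dist, target_dist, p):
    """Verify that sum over {x : f(x) = y} of source_dist(x) <= p(n) * target_dist(y) for all y."""
    mass = {}                                    # total source probability mapped onto each y
    for bits in product("01", repeat=n):
        x = "".join(bits)
        y = f(x)
        mass[y] = mass.get(y, Fraction(0)) + source_dist(x)
    return all(total <= p(n) * target_dist(y) for y, total in mass.items())

n = 6
f = lambda x: x[:-1]                             # the toy reduction: drop the last bit
source_dist = lambda x: Fraction(1, 2**len(x))   # uniform over {0,1}^n
target_dist = lambda y: Fraction(1, 2**len(y))   # uniform over {0,1}^{n-1}
p = lambda n: 1                                  # polynomial witnessing domination

print(check_domination(n, f, source_dist, target_dist, p))   # True
```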


DistNP-complete problems

The average-case analogue to NP-completeness is distNP-completeness. A distributional problem (L', D') is distNP-complete if (L', D') is in distNP and for every (L, D) in distNP, (L, D) is average-case reducible to (L', D').

An example of a distNP-complete problem is the Bounded Halting Problem, BH, defined as follows:

: BH = {(M, x, 1^t) : M is a non-deterministic Turing machine that accepts x in at most t steps}

In his original paper, Levin showed an example of a distributional tiling problem that is average-case NP-complete. A survey of known distNP-complete problems is available online.

One area of active research involves finding new distNP-complete problems. However, finding such problems can be complicated due to a result of Gurevich which shows that any distributional problem with a flat distribution cannot be distNP-complete unless EXP = NEXP.Y. Gurevich, "Complete and incomplete randomized NP problems", Proc. 28th Annual Symp. on Found. of Computer Science, IEEE (1987), pp. 111–117. (A flat distribution μ is one for which there exists an ε > 0 such that for any x, μ(x) ≤ 2^{−|x|^ε}.) A result by Livne shows that all natural NP-complete problems have distNP-complete versions.N. Livne, "All Natural NP-Complete Problems Have Average-Case Complete Versions," Computational Complexity (2010) 19:477. https://doi.org/10.1007/s00037-010-0298-9 However, the goal of finding a natural distributional problem that is distNP-complete has not yet been achieved.O. Goldreich, "Notes on Levin's theory of average-case complexity," Technical Report TR97-058, Electronic Colloquium on Computational Complexity, 1997.
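To make the membership condition of BH concrete, the sketch below simulates a Turing machine for at most t steps. It is only an illustration under simplifying assumptions: the machine encoding is an ad hoc choice made here, and the machine is deterministic, whereas BH as defined above ranges over non-deterministic machines (whose acceptance would require searching over all choice sequences).

```python
# Step-bounded simulation of a *deterministic* single-tape Turing machine, as a
# stand-in for the "accepts x within t steps" condition in BH.  The transition
# dictionary encoding and state names are hypothetical choices for this sketch.

def accepts_within(delta, x, t, blank="_", start="q0", accept="qacc"):
    """Return True iff the machine given by delta reaches the accept state on x within t steps.

    delta maps (state, symbol) -> (new_state, written_symbol, head_move in {-1, +1}).
    """
    tape = dict(enumerate(x))            # sparse tape; unwritten cells hold the blank symbol
    state, head = start, 0
    for _ in range(t):
        if state == accept:
            return True
        key = (state, tape.get(head, blank))
        if key not in delta:             # no applicable transition: the machine halts and rejects
            return False
        state, symbol, move = delta[key]
        tape[head] = symbol
        head += move
    return state == accept

# Toy machine that accepts exactly the strings starting with '1'.
delta = {("q0", "1"): ("qacc", "1", +1)}
print(accepts_within(delta, "10", t=5), accepts_within(delta, "01", t=5))   # True False
```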


Applications


Sorting algorithms

As mentioned above, much early work relating to average-case complexity focused on problems for which polynomial-time algorithms already existed, such as sorting. For example, many sorting algorithms which utilize randomness, such as Quicksort, have a worst-case running time of O(n^2), but an average-case running time of O(n log n), where n is the length of the input to be sorted.Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2009) [1990]. Introduction to Algorithms (3rd ed.). MIT Press and McGraw-Hill.


Cryptography

For most problems, average-case complexity analysis is undertaken to find efficient algorithms for a problem that is considered difficult in the worst case. In cryptographic applications, however, the opposite is true: the worst-case complexity is irrelevant; we instead want a guarantee that the average-case complexity of every algorithm which "breaks" the cryptographic scheme is inefficient.J. Katz and Y. Lindell, Introduction to Modern Cryptography (Chapman and Hall/CRC Cryptography and Network Security Series), Chapman and Hall/CRC, 2007.

Thus, all secure cryptographic schemes rely on the existence of one-way functions. Although the existence of one-way functions is still an open problem, many candidate one-way functions are based on hard problems such as integer factorization or computing the discrete logarithm. Note that it is not desirable for the candidate function to be NP-complete since this would only guarantee that there is likely no efficient algorithm for solving the problem in the worst case; what we actually want is a guarantee that no efficient algorithm can solve the problem over random inputs (i.e. the average case). In fact, both the integer factorization and discrete log problems are in NP ∩ coNP, and are therefore not believed to be NP-complete. The fact that all of cryptography is predicated on the existence of average-case intractable problems in NP is one of the primary motivations for studying average-case complexity.


Other results

In 1990, Impagliazzo and Levin showed that if there is an efficient average-case algorithm for a distNP-complete problem under the uniform distribution, then there is an average-case algorithm for every problem in NP under any polynomial-time samplable distribution.R. Impagliazzo and L. Levin, "No Better Ways to Generate Hard NP Instances than Picking Uniformly at Random," in Proceedings of the 31st IEEE Symposium on Foundations of Computer Science, pp. 812–821, 1990. Applying this theory to natural distributional problems remains an outstanding open question.

In 1992, Ben-David et al. showed that if all languages in distNP have good-on-average decision algorithms, they also have good-on-average search algorithms. Further, they show that this conclusion holds under a weaker assumption: if every language in NP is easy on average for decision algorithms with respect to the uniform distribution, then it is also easy on average for search algorithms with respect to the uniform distribution.S. Ben-David, B. Chor, O. Goldreich, and M. Luby, "On the theory of average case complexity," Journal of Computer and System Sciences, vol. 44, no. 2, pp. 193–219, 1992. Thus, cryptographic one-way functions can exist only if there are distNP problems over the uniform distribution that are hard on average for decision algorithms.

In 1993, Feigenbaum and Fortnow showed that it is not possible to prove, under non-adaptive random reductions, that the existence of a good-on-average algorithm for a distNP-complete problem under the uniform distribution implies the existence of worst-case efficient algorithms for all problems in NP.J. Feigenbaum and L. Fortnow, "Random-self-reducibility of complete sets," SIAM Journal on Computing, vol. 22, pp. 994–1005, 1993. In 2003, Bogdanov and Trevisan generalized this result to arbitrary non-adaptive reductions.A. Bogdanov and L. Trevisan, "On worst-case to average-case reductions for NP problems," in Proceedings of the 44th IEEE Symposium on Foundations of Computer Science, pp. 308–317, 2003. These results show that it is unlikely that any association can be made between average-case complexity and worst-case complexity via reductions.


See also

* Probabilistic analysis of algorithms
* NP-complete problems
* Worst-case complexity
* Amortized analysis
* Best, worst and average case


References


Further reading

The literature of average-case complexity includes the following work:

* Impagliazzo, Russell (April 17, 1995). "A personal view of average-case complexity". University of California, San Diego. http://www-cse.ucsd.edu/~russell/average.ps
* Paul E. Black, "Θ", in Dictionary of Algorithms and Data Structures [online], Paul E. Black, ed., U.S. National Institute of Standards and Technology, 17 December 2004. Retrieved Feb. 20, 2009.
* Christos Papadimitriou (1994). Computational Complexity. Addison-Wesley.