
In computer science, the best, worst, and average cases of a given algorithm express what the resource usage is ''at least'', ''at most'' and ''on average'', respectively. Usually the resource being considered is running time, i.e. time complexity, but it could also be memory or some other resource. The best case is the function which performs the minimum number of steps on input data of ''n'' elements; the worst case is the function which performs the maximum number of steps on input data of size ''n''; the average case is the function which performs an average number of steps on input data of ''n'' elements.

In real-time computing, the worst-case execution time is often of particular concern, since it is important to know how much time might be needed ''in the worst case'' to guarantee that the algorithm will always finish on time.

Average performance and worst-case performance are the measures most often used in algorithm analysis. Less widely found is best-case performance, but it does have uses: for example, where the best cases of individual tasks are known, they can be used to improve the accuracy of an overall worst-case analysis. Computer scientists use probabilistic analysis techniques, especially expected value, to determine expected running times.

The terms are also used in other contexts, for example the worst- and best-case outcome of an epidemic, or the worst-case temperature to which an electronic circuit element is exposed. Where components of specified tolerance are used, devices must be designed to work properly with the worst-case combination of tolerances and external conditions.
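These definitions can be made concrete by exhaustively counting steps. The following minimal Python sketch (the function `has_duplicate` and its comparison counter are illustrative, not taken from any source) enumerates every input of a small fixed size, records how many comparisons a naive duplicate check performs on each, and reads off the best, worst and average cases as the minimum, maximum and mean of those counts:

```python
from itertools import product
from statistics import mean

def has_duplicate(items):
    """Naive pairwise duplicate check; returns (result, number of comparisons)."""
    comparisons = 0
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            comparisons += 1
            if items[i] == items[j]:
                return True, comparisons   # early exit: the source of the good best case
    return False, comparisons              # no duplicates found: the worst case

n = 4
# Enumerate every length-n input over a small alphabet and count the steps taken.
counts = [has_duplicate(list(p))[1] for p in product(range(5), repeat=n)]
print("best:", min(counts), "worst:", max(counts), "average:", round(mean(counts), 2))
```

For inputs of size 4 the best case is a single comparison (the first two elements already match) and the worst case is six comparisons (no duplicates at all), with the average falling in between.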


Best-case performance for algorithms

The term ''best-case performance'' is used in computer science to describe an algorithm's behavior under optimal conditions. For example, the best case for a simple linear search on a list occurs when the desired element is the first element of the list. Development and choice of algorithms are rarely based on best-case performance: most academic and commercial enterprises are more interested in improving average-case complexity and worst-case performance. Algorithms may also be trivially modified to have good best-case running time by hard-coding solutions to a finite set of inputs, making the measure almost meaningless.
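The hard-coding trick mentioned above can be illustrated with a short, purely hypothetical Python sketch: wrapping an O(''n'' log ''n'') sort with a lookup table for a handful of specific inputs gives it a constant-time best case without saying anything useful about its behaviour in general.

```python
# Illustrative only: an O(n log n) sort gains an O(1) "best case" by special-casing
# a finite set of inputs, which makes best-case figures easy to game.
_PRECOMPUTED = {
    (3, 1, 2): [1, 2, 3],        # hard-coded answer for one specific input
}

def sort_with_inflated_best_case(items):
    key = tuple(items)
    if key in _PRECOMPUTED:              # constant-time path, by construction
        return list(_PRECOMPUTED[key])
    return sorted(items)                 # every other input still costs O(n log n)

print(sort_with_inflated_best_case([3, 1, 2]))      # hits the hard-coded path
print(sort_with_inflated_best_case([4, 2, 5, 1]))   # falls back to the real sort
```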


Worst-case versus amortized versus average-case performance

Worst-case performance analysis and average-case performance analysis have some similarities, but in practice usually require different tools and approaches. Determining what ''typical input'' means is difficult, and often the average input has properties which make it difficult to characterise mathematically (consider, for instance, algorithms that are designed to operate on strings of text). Similarly, even when a sensible description of a particular "average case" (which will probably only be applicable for some uses of the algorithm) is possible, it tends to result in a more difficult analysis. Worst-case analysis gives a ''safe'' analysis (the worst case is never underestimated), but one which can be overly ''pessimistic'', since there may be no (realistic) input that would take this many steps. In some situations it may be necessary to use a pessimistic analysis in order to guarantee safety. Often, however, a pessimistic analysis may be too pessimistic, so an analysis that gets closer to the real value but may be optimistic (perhaps with some known low probability of failure) can be a much more practical approach. One modern approach in academic theory to bridge the gap between worst-case and average-case analysis is called smoothed analysis.

When analyzing algorithms which often take a small time to complete but periodically require a much larger time, amortized analysis can be used to determine the worst-case running time over a (possibly infinite) series of operations. This amortized cost can be much closer to the average cost, while still providing a guaranteed upper limit on the running time. For example, online algorithms are frequently based on amortized analysis. The worst-case analysis is related to the worst-case complexity.
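A standard way to see what amortized analysis buys is the dynamic-array example below, a minimal Python sketch (the `DynamicArray` class is a toy, not a real library type). A single append costs O(''n'') element copies when the backing store is full, yet the total number of copies over ''n'' appends stays below 2''n'', so the amortized cost per append is O(1).

```python
class DynamicArray:
    """Toy growable array that doubles its capacity whenever it fills up."""
    def __init__(self):
        self._data = [None]          # backing store of capacity 1
        self._size = 0
        self.copies = 0              # counts element moves caused by resizing

    def append(self, value):
        if self._size == len(self._data):            # full: this append costs O(n)
            new_data = [None] * (2 * len(self._data))
            for i in range(self._size):
                new_data[i] = self._data[i]
                self.copies += 1
            self._data = new_data
        self._data[self._size] = value
        self._size += 1

arr = DynamicArray()
n = 1000
for i in range(n):
    arr.append(i)
# Occasional appends are expensive, but the total work stays linear in n.
print(arr.copies, "copies for", n, "appends; bound 2n =", 2 * n)
```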


Practical consequences

Many algorithms with bad worst-case performance have good average-case performance. For problems we want to solve, this is a good thing: we can hope that the particular instances we care about are average. For cryptography, this is very bad: we want typical instances of a cryptographic problem to be hard. Here methods like random self-reducibility can be used for some specific problems to show that the worst case is no harder than the average case, or, equivalently, that the average case is no easier than the worst case. On the other hand, some data structures like hash tables have very poor worst-case behaviors, but a well-written hash table of sufficient size will statistically never give the worst case; the average number of operations performed follows an exponential decay curve, and so the run time of an operation is statistically bounded.
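The contrast can be sketched with a toy chained hash table in Python (illustrative code, not the implementation behind Python's built-in dict). With keys that spread across buckets, chains stay short and a lookup touches only a few entries on average; with an adversarial key set whose members all hash to one bucket, a lookup degrades to a linear scan, the worst case.

```python
class ChainedHashTable:
    """Toy hash table with separate chaining; a lookup walks one bucket's chain."""
    def __init__(self, num_buckets=64):
        self.buckets = [[] for _ in range(num_buckets)]

    def insert(self, key, value):
        self.buckets[hash(key) % len(self.buckets)].append((key, value))

    def lookup(self, key):
        chain = self.buckets[hash(key) % len(self.buckets)]
        probes = 0
        for k, v in chain:                    # worst case: scan the entire chain
            probes += 1
            if k == key:
                return v, probes
        return None, probes

# Typical keys: chains stay close to n / num_buckets entries long.
table = ChainedHashTable()
for i in range(1000):
    table.insert(i, str(i))
print("longest chain with spread-out keys:", max(len(b) for b in table.buckets))

# Adversarial keys: every key lands in the same bucket, so lookups become linear.
bad = ChainedHashTable()
for i in range(1000):
    bad.insert(i * 64, str(i))
print("probes for one lookup with colliding keys:", bad.lookup(999 * 64)[1])
```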


Examples


Sorting algorithms

* Insertion sort applied to a list of ''n'' elements, assumed to be all different and initially in random order. On average, half the elements in a list ''A''1 ... ''A''''j'' are less than element ''A''''j''+1, and half are greater. Therefore, the algorithm compares the (''j'' + 1)th element to be inserted on average with half the already-sorted sub-list, so ''t''''j'' = ''j''/2. Working out the resulting average-case running time yields a quadratic function of the input size, just like the worst-case running time (checked empirically in the sketch after this list).
* Quicksort applied to a list of ''n'' elements, again assumed to be all different and initially in random order. This popular sorting algorithm has an average-case performance of O(''n'' log(''n'')), which contributes to making it a very fast algorithm in practice. But given a worst-case input, its performance degrades to O(''n''²). Also, when implemented with the "shortest first" policy, the worst-case space complexity is instead bounded by O(log(''n'')).
* Heapsort has O(''n'') time when all elements are the same: heapify takes O(''n'') time, and then removing elements from the heap is O(1) time for each of the ''n'' elements. The run time grows to O(''n'' log(''n'')) if all elements must be distinct.
* Bogosort has O(''n'') time when the elements are sorted on the first iteration. In each iteration, all elements are checked to see whether they are in order. There are ''n''! possible permutations; with a balanced random number generator, almost every permutation of the array is yielded within ''n''! iterations. Computers have limited memory, so the generated numbers cycle; it might not be possible to reach each permutation. In the worst case this leads to O(∞) time, an infinite loop.
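The insertion-sort figures above can be checked empirically with a short Python sketch (the comparison-counting function is illustrative). Averaging the comparison counts over many random permutations comes out close to ''n''²/4, the quadratic average case, while an already-sorted input exhibits the linear best case and a reverse-sorted input the quadratic worst case.

```python
import random

def insertion_sort_comparisons(values):
    """Insertion sort on a copy of values; returns the number of comparisons made."""
    a = list(values)
    comparisons = 0
    for j in range(1, len(a)):
        key = a[j]
        i = j - 1
        while i >= 0:
            comparisons += 1               # compare key against a[i]
            if a[i] <= key:
                break
            a[i + 1] = a[i]                # shift the larger element right
            i -= 1
        a[i + 1] = key
    return comparisons

n = 100
trials = [insertion_sort_comparisons(random.sample(range(n), n)) for _ in range(200)]
print("average over random inputs:", sum(trials) / len(trials), " (n^2/4 =", n * n / 4, ")")
print("already sorted (best case): ", insertion_sort_comparisons(range(n)))
print("reverse sorted (worst case):", insertion_sort_comparisons(range(n, 0, -1)))
```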


Data structures

* Linear search on a list of ''n'' elements. In the absolute worst case, the search must visit every element once. This happens when the value being searched for is either the last element in the list, or is not in the list. However, on average, assuming the value searched for is in the list and each list element is equally likely to be the value searched for, the search visits only ''n''/2 elements, as the sketch below illustrates.
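A minimal Python sketch of that count (the helper below is illustrative): searching for each element in turn and averaging the number of visits gives roughly ''n''/2, while searching for an absent value forces all ''n'' visits.

```python
def linear_search_visits(items, target):
    """Returns (index of target or None, number of elements visited)."""
    visits = 0
    for index, item in enumerate(items):
        visits += 1
        if item == target:
            return index, visits
    return None, visits

data = list(range(100))                                   # n = 100 distinct elements
present = [linear_search_visits(data, x)[1] for x in data]
print("average visits when present:", sum(present) / len(present))   # about n/2
print("visits when absent:", linear_search_visits(data, -1)[1])       # exactly n
```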


See also

* Sorting algorithm – an area where there is a great deal of performance analysis of various algorithms
* Search data structure – any data structure that allows the efficient retrieval of specific items
* Worst-case circuit analysis
* Smoothed analysis
* Interval finite element
* Big O notation

