In computer science, amortized analysis is a method for analyzing a given algorithm's complexity, or how much of a resource, especially time or memory, it takes to execute. The motivation for amortized analysis is that looking at the worst-case run time can be too pessimistic. Instead, amortized analysis averages the running times of operations in a sequence over that sequence. In short: "Amortized analysis is a useful tool that complements other techniques such as worst-case and average-case analysis."

For a given operation of an algorithm, certain situations (e.g., input parametrizations or data structure contents) may imply a significant cost in resources, whereas other situations may not be as costly. Amortized analysis considers both the costly and less costly operations together over the whole sequence of operations. This may include accounting for different types of input, length of the input, and other factors that affect its performance.


History

Amortized analysis initially emerged from a method called aggregate analysis, which is now subsumed by amortized analysis. The technique was first formally introduced by Robert Tarjan in his 1985 paper ''Amortized Computational Complexity'', which addressed the need for a more useful form of analysis than the common probabilistic methods used. Amortization was initially used for very specific types of algorithms, particularly those involving binary trees and union operations. However, it is now ubiquitous and comes into play when analyzing many other algorithms as well.


Method

Amortized analysis requires knowledge of which series of operations are possible. This is most commonly the case with data structures, which have state that persists between operations. The basic idea is that a worst-case operation can alter the state in such a way that the worst case cannot occur again for a long time, thus "amortizing" its cost.

There are generally three methods for performing amortized analysis: the aggregate method, the accounting method, and the potential method. All of these give correct answers; the choice of which to use depends on which is most convenient for a particular situation.

*Aggregate analysis determines the upper bound ''T''(''n'') on the total cost of a sequence of ''n'' operations, then calculates the amortized cost to be ''T''(''n'') / ''n''.
*The accounting method is a form of aggregate analysis which assigns to each operation an ''amortized cost'' which may differ from its actual cost. Early operations have an amortized cost higher than their actual cost, which accumulates a saved "credit" that pays for later operations having an amortized cost lower than their actual cost. Because the credit begins at zero, the actual cost of a sequence of operations equals the amortized cost minus the accumulated credit. Because the credit is required to be non-negative, the amortized cost is an upper bound on the actual cost. Usually, many short-running operations accumulate such credit in small increments, while rare long-running operations decrease it drastically.
*The potential method is a form of the accounting method where the saved credit is computed as a function (the "potential") of the state of the data structure. The amortized cost is the immediate cost plus the change in potential. A worked example of the potential method is sketched below.
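
As an illustration of the potential method, consider the dynamic array analysed in the Examples section below, which doubles its capacity whenever it becomes full. Under the potential method, the amortized cost of the ''i''-th operation is defined as

\hat{c}_i = c_i + \Phi(D_i) - \Phi(D_{i-1}),

where c_i is its actual cost and \Phi(D_i) is the potential of the data structure after the operation. One conventional choice of potential for the doubling array (a sketch, assuming the array is always kept at least half full so that \Phi is never negative) is \Phi = 2 \cdot \mathrm{size} - \mathrm{capacity}. A push that does not resize has actual cost 1 and raises \Phi by 2, so its amortized cost is 1 + 2 = 3. A push that resizes a full array of ''n'' elements has actual cost ''n'' + 1 (copying ''n'' elements plus the insertion), while \Phi drops from 2''n'' − ''n'' = ''n'' to 2(''n'' + 1) − 2''n'' = 2, so its amortized cost is (''n'' + 1) + (2 − ''n'') = 3 as well. Every push therefore has amortized cost \Theta(1), in agreement with the aggregate analysis below.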


Examples


Dynamic array

Consider a dynamic array that grows in size as more elements are added to it, such as ArrayList in Java or std::vector in C++. If we started out with a dynamic array of size 4, we could push 4 elements onto it, and each operation would take constant time. Yet pushing a fifth element onto that array would take longer, as the array would have to create a new array of double the current size (8), copy the old elements onto the new array, and then add the new element. The next three push operations would similarly take constant time, and then the subsequent addition would require another slow doubling of the array size.

In general, if we consider an arbitrary number of pushes ''n'' + 1 to an array of size ''n'', we notice that push operations take constant time except for the last one, which takes \Theta(n) time to perform the size-doubling operation. Since there were ''n'' + 1 operations total, we can take the average over the sequence and find that pushing elements onto the dynamic array takes \tfrac{n \cdot \Theta(1) + \Theta(n)}{n+1} = \Theta(1): constant amortized time.
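
For concreteness, the following Ruby sketch (an illustrative toy class written for this discussion, not a library implementation; Ruby's own Array is already dynamic, and the names DynamicArray, push, and grow are chosen here for exposition) shows where the occasional \Theta(n) cost comes from:

class DynamicArray
  # A minimal doubling array: pushes are usually constant time,
  # but a push into a full array triggers an expensive copy.
  def initialize(capacity = 4)
    @store = Array.new(capacity) # fixed-size backing storage
    @capacity = capacity
    @size = 0
  end

  def push(element)
    grow if @size == @capacity   # rare Theta(n) step
    @store[@size] = element      # usual Theta(1) step
    @size += 1
  end

  private

  # Allocate a backing store of twice the capacity and copy every element over.
  def grow
    @capacity *= 2
    new_store = Array.new(@capacity)
    @size.times { |i| new_store[i] = @store[i] }
    @store = new_store
  end
end

In the scenario above (''n'' + 1 pushes to a full array of size ''n''), exactly one grow call of cost \Theta(n) occurs alongside ''n'' + 1 constant-time writes, which is the sequence averaged in the formula above.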


Queue

Shown is a Ruby implementation of a queue, a FIFO (first-in, first-out) data structure:

class Queue
  def initialize
    @input = []
    @output = []
  end

  def enqueue(element)
    @input << element
  end

  def dequeue
    if @output.empty?
      while @input.any?
        @output << @input.pop
      end
    end

    @output.pop
  end
end

The enqueue operation just pushes an element onto the input array; this operation does not depend on the lengths of either input or output and therefore runs in constant time. However, the dequeue operation is more complicated. If the output array already has some elements in it, then dequeue runs in constant time; otherwise, dequeue takes O(n) time to add all the elements onto the output array from the input array, where ''n'' is the current length of the input array. After copying ''n'' elements from input, we can perform ''n'' dequeue operations, each taking constant time, before the output array is empty again. Thus, we can perform a sequence of ''n'' dequeue operations in only O(n) time, which implies that the amortized time of each dequeue operation is O(1).

Alternatively, we can charge the cost of copying any item from the input array to the output array to the earlier enqueue operation for that item. This charging scheme doubles the amortized time for enqueue but reduces the amortized time for dequeue to O(1).
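
A short usage sketch of the Queue class defined above (a hypothetical session, included only to make the charging argument concrete):

q = Queue.new
q.enqueue(1)   # constant time: appends to @input
q.enqueue(2)   # constant time
q.enqueue(3)   # constant time
q.dequeue      # => 1; empties @input into @output (3 copies), then pops
q.dequeue      # => 2; constant time, @output already holds [3, 2]
q.dequeue      # => 3; constant time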


Common use

* In common usage, an "amortized algorithm" is one that an amortized analysis has shown to perform well.
* Online algorithms commonly use amortized analysis.

