Aitken's delta-squared process

In numerical analysis, Aitken's delta-squared process or Aitken extrapolation is a series acceleration method, used for accelerating the rate of convergence of a sequence. It is named after Alexander Aitken, who introduced this method in 1926 (Alexander Aitken, "On Bernoulli's numerical solution of algebraic equations", ''Proceedings of the Royal Society of Edinburgh'' (1926) 46, pp. 289–305). An early form of it was known to Seki Kōwa (late 17th century), who found it for the rectification of the circle, i.e. the calculation of π. It is most useful for accelerating the convergence of a sequence that is converging linearly.


Definition

Given a sequence X = (x_n)_{n \in \N}, one associates with this sequence the new sequence

:A X = \left( \frac{x_n x_{n+2} - x_{n+1}^2}{x_n + x_{n+2} - 2 x_{n+1}} \right)_{n \in \N},

which can, with improved numerical stability, also be written as

:(A X)_n = x_n - \frac{(\Delta x_n)^2}{\Delta^2 x_n},

or equivalently as

:(A X)_n = x_{n+2} - \frac{(\Delta x_{n+1})^2}{\Delta^2 x_n} = x_{n+2} - \frac{(x_{n+2} - x_{n+1})^2}{(x_{n+2} - x_{n+1}) - (x_{n+1} - x_n)},

where

:\Delta x_n = x_{n+1} - x_n, \quad \Delta x_{n+1} = x_{n+2} - x_{n+1},

and

:\Delta^2 x_n = x_{n+2} - 2 x_{n+1} + x_n = \Delta x_{n+1} - \Delta x_n, \quad \text{for } n = 0, 1, 2, 3, \dots

Obviously, A X is ill-defined if \Delta^2 x contains a zero element, or equivalently, if the sequence of first differences has a repeating term. From a theoretical point of view, if that occurs only for a finite number of indices, one could easily agree to consider the sequence A X restricted to indices n > n_0 with a sufficiently large n_0. From a practical point of view, one in general considers only the first few terms of the sequence, which usually provide the needed precision. Moreover, when numerically computing the sequence, one has to take care to stop the computation when rounding errors in the denominator become too large, since the \Delta^2 operation may cancel too many significant digits. (For numerical calculation it is better to use \Delta x_{n+1} - \Delta x_n = (x_{n+2} - x_{n+1}) - (x_{n+1} - x_n) rather than x_{n+2} - 2 x_{n+1} + x_n.)
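
To make the definition concrete, here is a minimal Python sketch (the function name and the eps guard value are illustrative assumptions, not part of the original) computing A X in the numerically stable form above:

    def aitken(x, eps=1e-30):
        """Return the Aitken-transformed sequence (A x)_0, ..., (A x)_{len(x)-3}.

        Uses the numerically stable form (A x)_n = x_n - (dx_n)^2 / d2x_n,
        with the second difference computed as a difference of first differences.
        """
        ax = []
        for n in range(len(x) - 2):
            dx0 = x[n + 1] - x[n]      # first difference, Delta x_n
            dx1 = x[n + 2] - x[n + 1]  # first difference, Delta x_{n+1}
            d2x = dx1 - dx0            # second difference, Delta^2 x_n
            if abs(d2x) < eps:         # A X is ill-defined here; skip the term
                ax.append(None)
            else:
                ax.append(x[n] - dx0 * dx0 / d2x)
        return ax

    # For a purely geometric error term the transform is exact (up to rounding):
    print(aitken([1 + 0.5 ** n for n in range(8)]))  # every term is 1.0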


Properties

Aitken's delta-squared process is a method of acceleration of convergence, and a particular case of a nonlinear sequence transformation.

Convergence of (x_n)_{n=0}^{\infty} to a limit \ell is called "linear" if there is some number \mu \in (0, 1) for which

:\lim_{n \to \infty} \frac{|x_{n+1} - \ell|}{|x_n - \ell|} = \mu.

This means that the distance between the sequence and its limit shrinks by nearly the same proportion on every step, and that rate of reduction becomes closer to being constant with every step. (This is also called "geometric convergence"; this form of convergence is common for power series.)

Aitken's method will accelerate the sequence x_n if

:\lim_{n \to \infty} \frac{(A x)_n - \ell}{x_n - \ell} = 0.

A is not a linear operator, but a constant term drops out, viz. A[x - \ell] = A[x] - \ell if \ell is a constant. This is clear from the expression of A x in terms of the finite difference operator \Delta, since \Delta(x - \ell) = \Delta x for constant \ell.

Although the new process does not in general converge quadratically, it can be shown that for a fixed point process, that is, for an iterated function sequence x_{n+1} = f(x_n) for some function f, converging to a fixed point, the convergence is quadratic. In this case, the technique is known as Steffensen's method.

Empirically, the ''A''-operation eliminates the "most important error term". One can check this by considering a sequence of the form x_n = \ell + a^n + b^n, where 0 < b < a < 1: the sequence A x will then go to the limit like b^n goes to zero.

Geometrically, the graph of an exponential function f(t) that satisfies f(n) = x_n, f(n+1) = x_{n+1} and f(n+2) = x_{n+2} has a horizontal asymptote at

:\frac{x_n x_{n+2} - x_{n+1}^2}{x_{n+2} - 2 x_{n+1} + x_n}

(provided that x_{n+2} - 2 x_{n+1} + x_n \neq 0).

One can also show that if the sequence x converges to its limit \ell at a rate strictly greater than 1, applying A does not yield a better rate of convergence. (In practice, one rarely has e.g. quadratic convergence, which would mean over 30 (respectively 100) correct decimal places after 5 (respectively 7) iterations, starting from 1 correct digit; usually no acceleration is needed in that case.)

In practice, A x converges much faster to the limit than x does, as demonstrated by the example calculations below. Usually, it is much cheaper to calculate A x (involving only the calculation of differences, one multiplication and one division) than to calculate many more terms of the sequence x. Care must be taken, however, to avoid introducing errors due to insufficient precision when calculating the ''differences'' in the numerator and the denominator of the expression.
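
The error-term claim can be checked numerically. The following Python sketch (the sample values for \ell, a, b are assumptions chosen to satisfy 0 < b < a < 1) tabulates the errors of x and A x side by side; the A x errors track b^n rather than a^n:

    # x_n = ell + a^n + b^n with 0 < b < a < 1:
    # A eliminates the dominant a^n error term
    ell, a, b = 1.0, 0.8, 0.3
    x = [ell + a ** n + b ** n for n in range(12)]

    for n in range(len(x) - 2):
        dx0, dx1 = x[n + 1] - x[n], x[n + 2] - x[n + 1]
        axn = x[n] - dx0 ** 2 / (dx1 - dx0)   # (A x)_n, stable form
        print(n, abs(x[n] - ell), abs(axn - ell), b ** n)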


Example calculations

Example 1: The value of \sqrt{2} \approx 1.4142136 can be approximated by assuming an initial value for a_0 and iterating the following:

:a_{n+1} = \frac{a_n + \frac{2}{a_n}}{2}.

Starting with a_0 = 1, the iterates and the corresponding Aitken values are:

    n        x_n          (Ax)_n
    0        1            1.4285714
    1        1.5          1.4141414
    2        1.4166667    1.4142136
    3        1.4142157
    4        1.4142136

It is worth noting here that Aitken's method does not save the cost of two iteration steps: computation of the first three ''Ax'' values required the first five ''x'' values. Also, the second Ax value is decidedly inferior to the fourth x value, mostly due to the fact that Aitken's process assumes linear, rather than quadratic, convergence.

Example 2: The value of \frac{\pi}{4} may be calculated as an infinite sum:

:\frac{\pi}{4} = \sum_{n=0}^\infty \frac{(-1)^n}{2n+1} \approx 0.785398.

In this example, Aitken's method is applied to a sublinearly converging series, accelerating convergence considerably. It is still sublinear, but much faster than the original convergence: the first Ax value, whose computation required the first three x values, is closer to the limit than the eighth x value.
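
Both examples can be reproduced with a few lines of Python (a sketch; the helper aitken is the stable form from the definition, and the names are my own):

    def aitken(x):
        """Numerically stable Aitken transform of the list x (no denominator guard)."""
        return [x[n] - (x[n + 1] - x[n]) ** 2
                / ((x[n + 2] - x[n + 1]) - (x[n + 1] - x[n]))
                for n in range(len(x) - 2)]

    # Example 1: Babylonian iteration for sqrt(2), starting from 1
    x = [1.0]
    for _ in range(4):
        x.append((x[-1] + 2 / x[-1]) / 2)
    print(x)          # 1, 1.5, 1.4166667..., 1.4142157..., 1.4142136...
    print(aitken(x))  # 1.4285714..., 1.4141414..., 1.4142136...

    # Example 2: partial sums of the Leibniz series for pi/4
    s, partial = 0.0, []
    for n in range(9):
        s += (-1) ** n / (2 * n + 1)
        partial.append(s)
    print(partial[7])          # eighth partial sum: 0.7542..., error about 3e-2
    print(aitken(partial)[0])  # first Aitken value: 0.7916..., error about 6e-3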


Example pseudocode for Aitken extrapolation

The following is an example of using Aitken extrapolation to help find the limit of the sequence x_{n+1} = f(x_n) when given some initial x_0, where the limit of this sequence is assumed to be a fixed point of f (say \alpha = f(\alpha)). For instance, if the sequence is given by x_{n+1} = \frac{1}{2}\left(x_n + \frac{2}{x_n}\right) with starting point x_0 = 1, then the function will be f(x) := \frac{1}{2}\left(x + \frac{2}{x}\right), which has \alpha := \sqrt{2} as a fixed point (see Methods of computing square roots); it is this fixed point whose value will be approximated.

This pseudocode also computes the Aitken approximation to f'(\alpha). The Aitken extrapolates will be denoted by aitkenX. During the computation of the extrapolate, it is important to check whether the denominator becomes too small, which could happen if we already have a large amount of accuracy; without this check, a large amount of error could be introduced by the division. This small number will be denoted by epsilon. Because the binary representation of the fixed point could be infinite (or at least too large to fit in the available memory), the calculation will stop once the approximation is within tolerance of the true value.

    %These choices depend on the problem being solved
    x0 = 1                      %The initial value
    f(x) = (1/2)*(x + 2/x)      %The function that finds the next element in the sequence
    tolerance = 10^-10          %10 digit accuracy is desired
    epsilon = 10^-16            %Do not divide by a number smaller than this
    maxIterations = 20          %Do not allow the iterations to continue indefinitely
    haveWeFoundSolution = false %Were we able to find the solution to within the desired tolerance? not yet

    for i = 1 : maxIterations
        x1 = f(x0)
        x2 = f(x1)

        if (x1 ~= x0)
            lambda = absoluteValue((x2 - x1)/(x1 - x0)) %OPTIONAL: computes an approximation of |f'(fixedPoint)|, denoted by lambda
        end

        denominator = (x2 - x1) - (x1 - x0)

        if (absoluteValue(denominator) < epsilon)       %To avoid greatly increasing error, do not divide by too small of a number
            print('WARNING: denominator is too small')
            break                                       %Leave the loop
        end

        aitkenX = x2 - ((x2 - x1)^2)/denominator

        if (absoluteValue(aitkenX - x2) < tolerance)    %If the value is within tolerance
            print("The fixed point is ", aitkenX)       %Display the result of the Aitken extrapolation
            haveWeFoundSolution = true
            break                                       %Done, so leave the loop
        end

        x0 = aitkenX                                    %Update x0 to start again
    end

    if (haveWeFoundSolution == false)                   %If we were not able to find a solution to within the desired tolerance
        print("Warning: Not able to find solution to within the desired tolerance of ", tolerance)
        print("The last computed extrapolate was ", aitkenX)
    end
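
For readers who prefer runnable code, here is a direct Python transcription of the pseudocode above (a sketch under the same assumptions; the function name and the driver line at the bottom are mine):

    def aitken_extrapolate(f, x0, tolerance=1e-10, epsilon=1e-16, max_iterations=20):
        """Aitken-accelerated fixed-point iteration, following the pseudocode above."""
        aitken_x = x0
        for _ in range(max_iterations):
            x1 = f(x0)
            x2 = f(x1)
            if x1 != x0:
                lam = abs((x2 - x1) / (x1 - x0))  # optional: approximates |f'(alpha)|
            denominator = (x2 - x1) - (x1 - x0)
            if abs(denominator) < epsilon:        # avoid dividing by a tiny number
                print("WARNING: denominator is too small")
                break
            aitken_x = x2 - (x2 - x1) ** 2 / denominator
            if abs(aitken_x - x2) < tolerance:    # within tolerance: done
                print("The fixed point is", aitken_x)
                return aitken_x
            x0 = aitken_x                         # restart from the extrapolated value
        print("Warning: not able to find a solution to within tolerance", tolerance)
        print("The last computed extrapolate was", aitken_x)
        return aitken_x

    aitken_extrapolate(lambda x: (x + 2 / x) / 2, 1.0)  # approximates sqrt(2)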


See also

* Rate of convergence
* Limit of a sequence
* Fixed point iteration
* Richardson extrapolation
* Sequence transformation
* Shanks transformation
* Steffensen's method



References

* William H. Press, ''et al.'', ''Numerical Recipes in C'', (1987) Cambridge University Press (see section 5.1)
* Abramowitz and Stegun, ''Handbook of Mathematical Functions'', section 3.9.7
* Kendall E. Atkinson, ''An Introduction to Numerical Analysis'', (1989) John Wiley & Sons, Inc.