In numerical analysis, Aitken's delta-squared process or Aitken extrapolation is a series acceleration method, used for accelerating the rate of convergence of a sequence. It is named after Alexander Aitken, who introduced this method in 1926.[Alexander Aitken, "On Bernoulli's numerical solution of algebraic equations", ''Proceedings of the Royal Society of Edinburgh'' (1926) 46 pp. 289–305.] Its early form was known to Seki Kōwa (end of the 17th century) and was found for the rectification of the circle, i.e. the calculation of π. It is most useful for accelerating the convergence of a sequence that is converging linearly.
Definition
Given a sequence X = (x_n)_{n \in \N}, one associates with this sequence the new sequence
: A[X] = \left( \frac{x_{n+2}\,x_n - x_{n+1}^2}{x_{n+2} - 2x_{n+1} + x_n} \right)_{n \in \N},
which can, with improved numerical stability, also be written as
: (A[X])_n = x_n - \frac{(\Delta x_n)^2}{\Delta^2 x_n},
or equivalently as
: (A[X])_n = x_{n+2} - \frac{(\Delta x_{n+1})^2}{\Delta^2 x_n},
where
: \Delta x_n = x_{n+1} - x_n
and
: \Delta^2 x_n = x_{n+2} - 2x_{n+1} + x_n
for n = 0, 1, 2, \dots
Obviously, A[X] is ill-defined if \Delta^2 X contains a zero element, or equivalently, if the sequence of first differences has a repeating term. From a theoretical point of view, if that occurs only for a finite number of indices, one could simply consider the sequence restricted to indices n > n_0 with a sufficiently large n_0.
From a practical point of view, one generally considers only the first few terms of the sequence, which usually provide the needed precision. Moreover, when computing the sequence numerically, one has to take care to stop the computation when rounding errors in the denominator become too large, as the \Delta^2 operation may cancel too many significant digits. (For numerical calculation it is better to compute \Delta^2 x_n as the difference of first differences, \Delta x_{n+1} - \Delta x_n = (x_{n+2} - x_{n+1}) - (x_{n+1} - x_n), rather than directly as x_{n+2} - 2x_{n+1} + x_n.)
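The definition above can be sketched in code. This is an illustrative sketch only: the helper name aitken and the geometric-series test sequence are my own choices, not part of the article.

```python
def aitken(x):
    """Apply Aitken's delta-squared process A[X] to a list x.

    Returns a list shorter by two; stops early where A[X] is
    ill-defined (a zero second difference).
    """
    out = []
    for n in range(len(x) - 2):
        dxn = x[n + 1] - x[n]        # Δx_n
        dxn1 = x[n + 2] - x[n + 1]   # Δx_{n+1}
        d2xn = dxn1 - dxn            # Δ²x_n as a difference of first differences
        if d2xn == 0:                # A[X] would be ill-defined here
            break
        out.append(x[n] - dxn * dxn / d2xn)
    return out

# Linearly convergent example: partial sums of the geometric series
# sum of (1/3)^k, which converge to 3/2 with ratio 1/3. Because the
# error term is exactly geometric, Aitken's process recovers the
# limit (up to rounding) from the very first accelerated term.
x = []
s = 0.0
for k in range(10):
    s += (1 / 3) ** k
    x.append(s)
ax = aitken(x)
```

Note that the code uses the numerically preferred form of \Delta^2 x_n discussed above, and guards against the ill-defined case by stopping when the second difference vanishes.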
Properties
Aitken's delta-squared process is a method of
acceleration of convergence, and a particular case of a nonlinear
sequence transformation
.
Convergence of a sequence X = (x_n) to a limit \ell is called "linear" if there is some number \mu \in (0, 1) for which
: \lim_{n \to \infty} \frac{|x_{n+1} - \ell|}{|x_n - \ell|} = \mu.
This means that the distance between the sequence and its limit shrinks by nearly the same proportion at every step, and that the rate of reduction becomes closer to constant with every step. (This is also called "geometric convergence"; this form of convergence is common for power series.)
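Linear convergence can be checked numerically. In this toy example (my own, not from the article), the iteration x_{n+1} = x_n / 2 + 1 converges to \ell = 2, and the successive error ratios settle at \mu = 1/2:

```python
# Iteration x_{n+1} = x_n / 2 + 1, starting from 0; its limit is 2.
x = [0.0]
for _ in range(12):
    x.append(x[-1] / 2 + 1)

# Ratios |x_{n+1} - l| / |x_n - l| should all be (close to) mu = 0.5,
# which is the hallmark of linear (geometric) convergence.
ratios = [abs(x[n + 1] - 2) / abs(x[n] - 2) for n in range(10)]
```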
Aitken's method will accelerate the sequence X if
: \lim_{n \to \infty} \frac{(A[X])_n - \ell}{x_n - \ell} = 0.
A is not a linear operator on sequences, but a constant term drops out: A[X - \ell] = A[X] - \ell if \ell is a constant (sequence). This is clear from the expression of A[X] in terms of the finite difference operator \Delta.
Although the new process does not in general converge quadratically, it can be shown that for a fixed point process, that is, for an iterated function sequence x_{n+1} = f(x_n) for some function f, converging to a fixed point, the convergence is quadratic. In this case, the technique is known as Steffensen's method.
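A minimal sketch of Steffensen's method as described above, i.e. the Aitken update applied to a fixed-point iteration and then restarted from the accelerated value. The function f(x) = cos(x) and the starting point are my own illustrative choices; its fixed point is the so-called Dottie number, approximately 0.739085.

```python
import math

def steffensen(f, x, steps):
    """Accelerate the fixed-point iteration x_{n+1} = f(x_n) with Aitken's Δ²."""
    for _ in range(steps):
        x1 = f(x)
        x2 = f(x1)
        d2 = (x2 - x1) - (x1 - x)      # Δ²x as a difference of first differences
        if d2 == 0:                    # converged (or Aitken step ill-defined)
            return x
        x = x - (x1 - x) ** 2 / d2     # Aitken update; restart iteration here
    return x

# Fixed point of cos: plain iteration converges only linearly
# (ratio |cos'(x*)| ≈ 0.674), while Steffensen's method converges
# quadratically, reaching machine precision in a handful of steps.
root = steffensen(math.cos, 1.0, 6)
```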
Empirically, the ''A''-operation eliminates the "most important error term". One can check this by considering a sequence of the form x_n = \ell + a^n + b^n, where