In numerical analysis, Richardson extrapolation is a sequence acceleration method used to improve the rate of convergence of a sequence of estimates of some value ''A''* = lim<sub>''h'' → 0</sub> ''A''(''h''). In essence, given the value of ''A''(''h'') for several values of ''h'', we can estimate ''A''* by extrapolating the estimates to ''h'' = 0. It is named after
Lewis Fry Richardson, who introduced the technique in the early 20th century, though the idea was already known to
Christiaan Huygens in his calculation of
π. In the words of
Birkhoff and
Rota, "its usefulness for practical computations can hardly be overestimated."
[Page 126 of Birkhoff and Rota.]
Practical applications of Richardson extrapolation include
Romberg integration, which applies Richardson extrapolation to the
trapezoid rule
, and the
Bulirsch–Stoer algorithm for solving ordinary differential equations.
Example of Richardson extrapolation
Suppose that we wish to approximate A^\ast, and we have a method A(h) that depends on a small parameter h in such a way that
:A(h) = A^\ast + C h^n + O(h^{n+1}).
Let us define a new function
:R(h, k) = \frac{k^n A(h) - h^n A(k)}{k^n - h^n},
where h and k are two distinct step sizes.
Then ''R''(''h'', ''k'') is called the Richardson extrapolation of ''A''(''h''), and has a higher-order error estimate O(h^{n+1}) compared to ''A''(''h'').
Very often, it is much easier to obtain a given precision by using ''R''(''h'') rather than ''A''(''h′'') with a much smaller ''h′'', where ''A''(''h′'') can cause problems due to limited precision (rounding errors) and/or due to the increasing number of calculations needed (see examples below).
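As a concrete illustration (a sketch added here, not from the cited literature), the formula above can be tested on a first-order forward-difference derivative estimate, for which n = 1; choosing k = h/2, the formula for R(h, k) reduces to 2A(h/2) − A(h):

```python
import math

def A(h):
    """Forward-difference estimate of the derivative of sin at x = 1.
    A(h) = cos(1) + C*h + O(h^2), i.e. n = 1 in the notation above."""
    return (math.sin(1.0 + h) - math.sin(1.0)) / h

h = 0.1
# R(h, k) = (k^n A(h) - h^n A(k)) / (k^n - h^n); with n = 1 and k = h/2
# this simplifies to 2*A(h/2) - A(h).
R = 2.0 * A(h / 2) - A(h)

exact = math.cos(1.0)
# R is accurate to about 4e-4 here, while A(h) itself is only accurate
# to about 4e-2: one extrapolation step gains roughly two digits.
```

Note that R requires only one extra evaluation of A, at step size h/2, yet improves the error from O(h) to O(h²).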
General formula
Let A_0(h) be an approximation of A^\ast (exact value) that depends on a positive step size ''h'' with an error formula of the form
:A^\ast = A_0(h) + a_0 h^{k_0} + a_1 h^{k_1} + a_2 h^{k_2} + \cdots
where the a_i are unknown constants and the k_i are known constants such that h^{k_i} > h^{k_{i+1}}. Further, O(h^{k_0}) is the leading order step size behavior of the truncation error as
:A^\ast = A_0(h) + O(h^{k_0}).
The exact value sought can be given by
:A^\ast = A_0(h) + a_0 h^{k_0} + a_1 h^{k_1} + a_2 h^{k_2} + O(h^{k_3}),
which can be simplified with Big O notation to be
:A^\ast = A_0(h) + a_0 h^{k_0} + O(h^{k_1}).
Using the step sizes ''h'' and ''h''/''t'' for some constant ''t'', the two formulas for A^\ast are
:A^\ast = A_0(h) + a_0 h^{k_0} + O(h^{k_1})
and
:A^\ast = A_0\left(\frac{h}{t}\right) + a_0 \left(\frac{h}{t}\right)^{k_0} + O(h^{k_1}).
Multiplying the second equation by t^{k_0} and subtracting the first equation gives
:(t^{k_0} - 1) A^\ast = \left[ t^{k_0} A_0\left(\frac{h}{t}\right) - A_0(h) \right] + O(h^{k_1}).
This can be solved for A^\ast to give
:A^\ast = \frac{t^{k_0} A_0\left(\frac{h}{t}\right) - A_0(h)}{t^{k_0} - 1} + O(h^{k_1}),
which can be written as A^\ast = A_1(h) + O(h^{k_1}) by setting
:A_1(h) = \frac{t^{k_0} A_0\left(\frac{h}{t}\right) - A_0(h)}{t^{k_0} - 1}.
Therefore, by replacing A_0(h) with A_1(h) the truncation error has been reduced from O(h^{k_0}) to O(h^{k_1}) for the same step size ''h''. The general pattern occurs in which A_i(h) is a more accurate estimate than A_j(h) when i > j.
By this process, we have achieved a better approximation of A^\ast by subtracting the largest term in the error, which was O(h^{k_0}). This process can be repeated to remove more error terms to get even better approximations.
A general recurrence relation can be defined for the approximations by
:A_{i+1}(h) = \frac{t^{k_i} A_i\left(\frac{h}{t}\right) - A_i(h)}{t^{k_i} - 1},
where k_{i+1} satisfies
:A^\ast = A_{i+1}(h) + O(h^{k_{i+1}}).
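To make the recurrence concrete, here is a short Python sketch (added for illustration; the function names are ad hoc). It builds the triangular table of extrapolates A_i, using a central-difference derivative estimate as A_0; that estimate's error expansion contains only even powers, so k_i = 2(i + 1), and the step size is halved between rows, so t = 2:

```python
import math

def central_diff(f, x, h):
    """Central difference A_0(h); its error expansion has only even powers,
    so k_0 = 2, k_1 = 4, k_2 = 6, ..."""
    return (f(x + h) - f(x - h)) / (2 * h)

def richardson_table(A, h, t=2.0, rows=4, k=lambda i: 2 * (i + 1)):
    """Triangular table T, where T[i][j] approximates A_j(h / t**(i - j)).
    Each new entry applies the recurrence
    A_{j+1}(h) = (t**k_j * A_j(h/t) - A_j(h)) / (t**k_j - 1)."""
    T = [[A(h / t**i)] for i in range(rows)]
    for i in range(1, rows):
        for j in range(i):
            fac = t ** k(j)
            T[i].append((fac * T[i][j] - T[i - 1][j]) / (fac - 1))
    return T

T = richardson_table(lambda s: central_diff(math.sin, 1.0, s), h=0.5)
exact = math.cos(1.0)
# The raw estimate T[0][0] is off by about 2e-2, while the thrice-extrapolated
# diagonal entry T[3][3] is accurate to better than 1e-10.
```

The diagonal entries T[i][i] are the successively higher-order extrapolates; comparing consecutive diagonal entries gives a practical error estimate.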
The Richardson extrapolation can be considered as a linear sequence transformation.
Additionally, the general formula can be used to estimate k_0 (the leading order step size behavior of the truncation error) when neither its value nor A^\ast is known ''a priori''. Such a technique can be useful for quantifying an unknown rate of convergence. Given approximations of A^\ast from three distinct step sizes ''h'', ''h''/''t'', and ''h''/''s'', the exact relationship
:A^\ast = \frac{t^{k_0} A_i\left(\frac{h}{t}\right) - A_i(h)}{t^{k_0} - 1} + O(h^{k_1}) = \frac{s^{k_0} A_i\left(\frac{h}{s}\right) - A_i(h)}{s^{k_0} - 1} + O(h^{k_1})
yields an approximate relationship (note that the two O terms in the equation above indicate only the leading order step size behavior; their explicit forms differ, so cancelling them is only approximately valid)
:A_i\left(\frac{h}{t}\right) + \frac{A_i\left(\frac{h}{t}\right) - A_i(h)}{t^{k_0} - 1} \approx A_i\left(\frac{h}{s}\right) + \frac{A_i\left(\frac{h}{s}\right) - A_i(h)}{s^{k_0} - 1},
which can be solved numerically to estimate k_0 for some arbitrary choices of ''h'', ''s'', and ''t''.
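For example, the following Python sketch (an illustration added here, not from the original text) solves the approximate relationship by bisection to recover k_0 for the composite trapezoidal integration rule, whose true order is k_0 = 2; it uses t = 2 and s = 4:

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals; error is O(h^2)."""
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

# Approximations of the integral of e^x over [0, 1] at step sizes h, h/t, h/s.
A_h = trapezoid(math.exp, 0.0, 1.0, 4)    # step h
A_ht = trapezoid(math.exp, 0.0, 1.0, 8)   # step h/t with t = 2
A_hs = trapezoid(math.exp, 0.0, 1.0, 16)  # step h/s with s = 4

def g(k, t=2.0, s=4.0):
    """Difference of the two sides of the approximate relationship;
    it crosses zero near the true order k_0."""
    lhs = A_ht + (A_ht - A_h) / (t ** k - 1)
    rhs = A_hs + (A_hs - A_h) / (s ** k - 1)
    return lhs - rhs

# Bisection for the root of g on [0.5, 5].
lo, hi = 0.5, 5.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if g(lo) * g(mid) <= 0:
        hi = mid
    else:
        lo = mid
k0_estimate = 0.5 * (lo + hi)
# k0_estimate comes out very close to 2, the trapezoidal rule's order.
```

Any root finder works here; bisection is used only because the bracketing interval is easy to choose when the order is known to be modest.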
Example pseudocode for Richardson extrapolation
The following pseudocode in MATLAB style demonstrates Richardson extrapolation to help solve the ODE y'(t) = -y^2, y(0) = 1, with the trapezoidal method. In this example we halve the step size h each iteration, and so in the discussion above we would have t = 2. The local error of the trapezoidal method can be expressed in terms of odd powers of h, so that the error over multiple steps can be expressed in even powers; this leads us to raise t to the second power and to take powers of 4 = 2^2 = t^2 in the pseudocode. We want to find the value of y(5), which has the exact value 1/(5 + 1) = 1/6 ≈ 0.1666..., since the exact solution of the ODE is y(t) = 1/(1 + t). This pseudocode assumes that a function called
Trapezoidal(f, tStart, tEnd, h, y0)
exists which attempts to compute
y(tEnd)
by performing the trapezoidal method on the function
f
, with starting point
y0
at
tStart
and step size
h
.
Note that starting with too small an initial step size can potentially introduce error into the final solution. Although there are methods designed to help pick the best initial step size, one option is to start with a large step size and then to allow the Richardson extrapolation to reduce the step size each iteration until the error reaches the desired tolerance.
tStart = 0 % Starting time
tEnd = 5 % Ending time
f = -y^2 % The derivative of y, so y' = f(t, y(t)) = -y^2
% The solution to this ODE is y = 1/(1 + t)
y0 = 1 % The initial position (i.e. y0 = y(tStart) = y(0) = 1)
tolerance = 10^-11 % 10 digit accuracy is desired
maxRows = 20 % Don't allow the iteration to continue indefinitely
initialH = tEnd - tStart % Pick an initial step size
haveWeFoundSolution = false % Were we able to find the solution to within the desired tolerance? not yet.
h = initialH
% Create a 2D matrix of size maxRows by maxRows to hold the Richardson extrapolates
% Note that this will be a lower triangular matrix and that at most two rows are actually
% needed at any time in the computation.
A = zeroMatrix(maxRows, maxRows)
%Compute the top left element of the matrix. The first row of this (lower triangular) matrix has now been filled.
A(1, 1) = Trapezoidal(f, tStart, tEnd, h, y0)
% Each row of the matrix requires one call to Trapezoidal
% This loop starts by filling the second row of the matrix, since the first row was computed above
for i = 1 : maxRows - 1 % Starting at i = 1, iterate at most maxRows - 1 times
h = h/2 % Halve the previous value of h since this is the start of a new row.
% Start filling row i+1 from the left by calling the Trapezoidal function with this new smaller step size
A(i + 1, 1) = Trapezoidal(f, tStart, tEnd, h, y0)
for j = 1 : i % Go across this current (i+1)-th row until the diagonal is reached
% To compute A(i + 1, j + 1), which is the next Richardson extrapolate,
% use the most recently computed value (i.e. A(i + 1, j)) and the value from the
% row above it (i.e. A(i, j)).
A(i + 1, j + 1) = ((4^j).*A(i + 1, j) - A(i, j))/(4^j - 1);
end
% After leaving the above inner loop, the diagonal element of row i + 1 has been computed
% This diagonal element is the latest Richardson extrapolate to be computed.
% The difference between this extrapolate and the last extrapolate of row i is a good
% indication of the error.
if (absoluteValue(A(i + 1, i + 1) - A(i, i)) < tolerance) % If the result is within tolerance
print("y = ", A(i + 1, i + 1)) % Display the result of the Richardson extrapolation
haveWeFoundSolution = true
break % Done, so leave the loop
end
end
if (haveWeFoundSolution == false) % If we were not able to find a solution to within the desired tolerance
print("Warning: Not able to find solution to within the desired tolerance of ", tolerance);
print("The last computed extrapolate was ", A(maxRows, maxRows))
end
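For reference, here is a runnable Python transcription of the pseudocode above (added as a sketch, not part of the original). The Trapezoidal routine is made concrete as the implicit trapezoidal method; because f(t, y) = -y^2, each implicit step reduces to a quadratic solved in closed form, a shortcut specific to this right-hand side, and the initial step size is chosen smaller than tEnd - tStart so that every implicit step has a real solution:

```python
import math

def f(t, y):
    return -y * y  # y' = -y^2, with exact solution y(t) = 1/(1 + t)

def trapezoidal(f, t_start, t_end, h, y0):
    """Implicit trapezoidal method for y' = f(t, y).
    For f = -y^2 the implicit update (h/2)*y_new**2 + y_new - c = 0
    is solved exactly with the quadratic formula."""
    n = max(1, round((t_end - t_start) / h))
    h = (t_end - t_start) / n
    y = y0
    for _ in range(n):
        c = y + (h / 2) * f(t_start, y)  # f is autonomous here
        y = (-1.0 + math.sqrt(1.0 + 2.0 * h * c)) / h
    return y

t_start, t_end, y0 = 0.0, 5.0, 1.0
tolerance, max_rows = 1e-11, 20
h = (t_end - t_start) / 8  # initial step size, kept modest (see note above)

A = [[trapezoidal(f, t_start, t_end, h, y0)]]  # top-left table entry
answer = None
for i in range(1, max_rows):
    h /= 2  # halve the step size for each new row
    row = [trapezoidal(f, t_start, t_end, h, y0)]
    for j in range(i):
        # Same combination as A(i+1, j+1) in the pseudocode (powers of 4 = t^2)
        row.append((4 ** (j + 1) * row[j] - A[i - 1][j]) / (4 ** (j + 1) - 1))
    A.append(row)
    # Compare the new diagonal extrapolate with the previous one
    if abs(row[i] - A[i - 1][i - 1]) < tolerance:
        answer = row[i]
        break
# answer approximates y(5) = 1/6 to roughly the requested tolerance.
```

As in the pseudocode, only the current and previous rows of the table are actually needed; the full list A is kept here only for clarity.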
See also
*
Aitken's delta-squared process
*
Takebe Kenko
*
Richardson iteration
References
*''Extrapolation Methods. Theory and Practice'' by C. Brezinski and
M. Redivo Zaglia, North-Holland, 1991.
*Ivan Dimov, Zahari Zlatev, Istvan Farago, Agnes Havasi: ''Richardson Extrapolation: Practical Aspects and Applications'', Walter de Gruyter GmbH & Co KG, ISBN 9783110533002 (2017).
External links
*Fundamental Methods of Numerical Extrapolation With Applications (mit.edu)
*Richardson-Extrapolation
*https://www.mathworks.com/matlabcentral/fileexchange/24388-richardson-extrapolation Matlab code for generic Richardson extrapolation.
*Julia code for generic Richardson extrapolation.