In numerical analysis, Richardson extrapolation is a sequence acceleration method used to improve the rate of convergence of a sequence of estimates of some value A^\ast = \lim_{h \to 0} A(h). In essence, given the value of A(h) for several values of h, we can estimate A^\ast by extrapolating the estimates to h = 0. It is named after Lewis Fry Richardson, who introduced the technique in the early 20th century, though the idea was already known to Christiaan Huygens in his calculation of π. In the words of Birkhoff and Rota, "its usefulness for practical computations can hardly be overestimated." Practical applications of Richardson extrapolation include Romberg integration, which applies Richardson extrapolation to the trapezoid rule, and the Bulirsch–Stoer algorithm for solving ordinary differential equations.


Example of Richardson extrapolation

Suppose that we wish to approximate A^*, and we have a method A(h) that depends on a small parameter h in such a way that

: A(h) = A^\ast + C h^n + O(h^{n+1}).

Let us define a new function

: R(h,t) := \frac{t^n A(h/t) - A(h)}{t^n - 1}

where h and h/t are two distinct step sizes. Then

: R(h,t) = \frac{t^n \left(A^* + C (h/t)^n + O(h^{n+1})\right) - \left(A^* + C h^n + O(h^{n+1})\right)}{t^n - 1} = A^* + O(h^{n+1}).

R(h,t) is called the Richardson extrapolation of A(h), and has a higher-order error estimate O(h^{n+1}) compared to A(h). Very often, it is much easier to obtain a given precision by using R(h) rather than A(h') with a much smaller h', where A(h') can cause problems due to limited precision (rounding errors) and/or due to the increasing number of calculations needed (see examples below).
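To make the formula concrete, the following Python sketch applies a single Richardson step to the central-difference approximation of a derivative, for which n = 2 and t = 2. The test function, evaluation point, and step size here are illustrative assumptions, not part of the original text:

```python
import math

def central_diff(f, x, h):
    """Central-difference estimate of f'(x); its error is C*h^2 + O(h^4)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def richardson(A, h, t, n):
    """R(h, t) = (t^n * A(h/t) - A(h)) / (t^n - 1), cancelling the h^n error term."""
    return (t**n * A(h / t) - A(h)) / (t**n - 1)

# Approximate the derivative of sin at x = 1; the exact value is cos(1).
A = lambda h: central_diff(math.sin, 1.0, h)
exact = math.cos(1.0)

plain = A(0.1)                           # error about 9e-4
better = richardson(A, h=0.1, t=2, n=2)  # error about 1e-7
print(abs(plain - exact), abs(better - exact))
```

Because the central difference has an even error expansion (no h^3 term), the extrapolated value is in fact O(h^4) accurate, which is why the error drops by several orders of magnitude rather than just one.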


General formula

Let A_0(h) be an approximation of A^* (the exact value) that depends on a positive step size h with an error formula of the form

: A^* = A_0(h) + a_0h^{k_0} + a_1h^{k_1} + a_2h^{k_2} + \cdots

where the a_i are unknown constants and the k_i are known constants such that h^{k_i} > h^{k_{i+1}}, i.e. k_0 < k_1 < k_2 < \cdots. Further, k_0 is the leading order step size behavior of the truncation error, in the sense that

: A^* = A_0(h) + O(h^{k_0}).

The exact value sought can be given by

: A^* = A_0(h) + a_0h^{k_0} + a_1h^{k_1} + a_2h^{k_2} + \cdots

which can be simplified with Big O notation to

: A^* = A_0(h) + a_0h^{k_0} + O(h^{k_1}).

Using the step sizes h and h/t for some constant t, the two formulas for A^* are

: A^* = A_0(h) + a_0h^{k_0} + a_1h^{k_1} + a_2h^{k_2} + O(h^{k_3})

and

: A^* = A_0\!\left(\frac{h}{t}\right) + a_0\left(\frac{h}{t}\right)^{k_0} + a_1\left(\frac{h}{t}\right)^{k_1} + a_2\left(\frac{h}{t}\right)^{k_2} + O(h^{k_3}).

Multiplying the second equation by t^{k_0} and subtracting the first equation gives

: (t^{k_0} - 1)A^* = \left[t^{k_0}A_0\left(\frac{h}{t}\right) - A_0(h)\right] + \left(t^{k_0}a_1\left(\frac{h}{t}\right)^{k_1} - a_1h^{k_1}\right) + \left(t^{k_0}a_2\left(\frac{h}{t}\right)^{k_2} - a_2h^{k_2}\right) + O(h^{k_3}).

This can be solved for A^* to give

: A^* = \frac{t^{k_0}A_0\left(\frac{h}{t}\right) - A_0(h)}{t^{k_0} - 1} + \frac{t^{k_0}a_1\left(\frac{h}{t}\right)^{k_1} - a_1h^{k_1}}{t^{k_0} - 1} + \frac{t^{k_0}a_2\left(\frac{h}{t}\right)^{k_2} - a_2h^{k_2}}{t^{k_0} - 1} + O(h^{k_3}).

Since the second and third terms on the right-hand side are O(h^{k_1}), this can be written as A^* = A_1(h) + O(h^{k_1}) by setting

: A_1(h) = \frac{t^{k_0}A_0\left(\frac{h}{t}\right) - A_0(h)}{t^{k_0} - 1}.

Therefore, by replacing A_0(h) with A_1(h) the truncation error has been reduced from O(h^{k_0}) to O(h^{k_1}) for the same step size h. The general pattern is that A_i(h) is a more accurate estimate than A_j(h) when i > j. By this process, we have achieved a better approximation of A^* by subtracting the largest term in the error, which was O(h^{k_0}). This process can be repeated to remove more error terms to get even better approximations. A general recurrence relation can be defined for the approximations by

: A_{i+1}(h) = \frac{t^{k_i}A_i\left(\frac{h}{t}\right) - A_i(h)}{t^{k_i} - 1}

where k_{i+1} satisfies

: A^* = A_{i+1}(h) + O(h^{k_{i+1}}).

The Richardson extrapolation can be considered as a linear sequence transformation. Additionally, the general formula can be used to estimate k_0 (the leading order step size behavior of the truncation error) when neither its value nor A^* is known a priori. Such a technique can be useful for quantifying an unknown rate of convergence. Given approximations of A^* from three distinct step sizes h, h/t, and h/s, the exact relationship

: A^* = \frac{t^{k_0}A_i\left(\frac{h}{t}\right) - A_i(h)}{t^{k_0} - 1} + O(h^{k_1}) = \frac{s^{k_0}A_i\left(\frac{h}{s}\right) - A_i(h)}{s^{k_0} - 1} + O(h^{k_1})

yields an approximate relationship (note that the two O terms above indicate only the leading order step size behavior; their explicit forms differ, so cancelling them against each other is only approximately valid)

: A_i\left(\frac{h}{t}\right) + \frac{A_i\left(\frac{h}{t}\right) - A_i(h)}{t^{k_0} - 1} \approx A_i\left(\frac{h}{s}\right) + \frac{A_i\left(\frac{h}{s}\right) - A_i(h)}{s^{k_0} - 1}

which can be solved numerically to estimate k_0 for some arbitrary choices of h, s, and t.
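The recurrence above can be sketched in Python as follows. The function names are illustrative, and the usage example assumes trapezoid-rule estimates of an integral, for which t = 2 and k_i = 2, 4, 6, ...; this particular combination reproduces Romberg integration:

```python
import math

def richardson_table(A, h, t, ks, rows):
    """Triangular table of Richardson extrapolates.

    A    -- approximation as a function of the step size, with
            A(h) = A* + a_0 h^{k_0} + a_1 h^{k_1} + ...
    t    -- refinement factor: row r is based on the step size h / t**r
    ks   -- the known exponents k_0, k_1, ... of the error expansion
    rows -- number of step sizes (rows of the table) to compute
    """
    T = [[A(h / t**r)] for r in range(rows)]
    for r in range(1, rows):
        for i in range(r):
            f = t ** ks[i]
            # A_{i+1}(h) = (t^{k_i} A_i(h/t) - A_i(h)) / (t^{k_i} - 1)
            T[r].append((f * T[r][i] - T[r - 1][i]) / (f - 1))
    return T  # T[r][i] has truncation error O((h / t**r)^{k_i})

def trapezoid(f, a, b, n):
    """Composite trapezoid rule with n subintervals; even-power error expansion."""
    h = (b - a) / n
    return h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, n)) + f(b) / 2)

# Integrate exp over [0, 1]; the exact value is e - 1.
A = lambda h: trapezoid(math.exp, 0.0, 1.0, round(1.0 / h))
T = richardson_table(A, h=0.5, t=2, ks=[2, 4, 6, 8], rows=5)
crude = abs(T[0][0] - (math.e - 1))      # plain trapezoid error, about 3.6e-2
refined = abs(T[-1][-1] - (math.e - 1))  # fully extrapolated, far smaller
print(crude, refined)
```

Only the previous row is needed to compute the next one, so an implementation that cares about memory can keep two rows instead of the full triangle.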


Example pseudocode for Richardson extrapolation

The following pseudocode in MATLAB style demonstrates Richardson extrapolation to help solve the ODE y'(t) = -y^2, y(0) = 1 with the trapezoidal method. In this example we halve the step size h each iteration, so in the discussion above we would have t = 2. The error of the trapezoidal method can be expressed in terms of odd powers of h, so that the error over multiple steps can be expressed in even powers; this leads us to raise t to the second power and to take powers of 4 = 2^2 = t^2 in the pseudocode. We want to find the value of y(5), which has the exact value \frac{1}{1 + 5} = \frac{1}{6} = 0.1666..., since the exact solution of the ODE is y(t) = \frac{1}{1 + t}. This pseudocode assumes that a function called Trapezoidal(f, tStart, tEnd, h, y0) exists which attempts to compute y(tEnd) by performing the trapezoidal method on the function f, with initial value y0, starting time tStart, and step size h.

Note that starting with too small an initial step size can potentially introduce error into the final solution. Although there are methods designed to help pick the best initial step size, one option is to start with a large step size and then to allow the Richardson extrapolation to reduce the step size each iteration until the error reaches the desired tolerance.

 tStart = 0                  % Starting time
 tEnd = 5                    % Ending time
 f = -y^2                    % The derivative of y, so y'(t) = f(t, y(t)) = -y^2
                             % The exact solution of this ODE is y = 1/(1 + t)
 y0 = 1                      % The initial value, i.e. y0 = y(tStart) = y(0) = 1
 tolerance = 10^-11          % 10-digit accuracy is desired
 maxRows = 20                % Don't allow the iteration to continue indefinitely
 initialH = tEnd - tStart    % Pick an initial step size
 haveWeFoundSolution = false % Were we able to find the solution to within the desired tolerance? Not yet.

 h = initialH

 % Create a 2D matrix of size maxRows by maxRows to hold the Richardson extrapolates.
 % Note that this will be a lower triangular matrix and that at most two rows are
 % actually needed at any time in the computation.
 A = zeroMatrix(maxRows, maxRows)

 % Compute the top-left element of the matrix.
 % The first row of this (lower triangular) matrix has now been filled.
 A(1, 1) = Trapezoidal(f, tStart, tEnd, h, y0)

 % Each row of the matrix requires one call to Trapezoidal.
 % This loop starts by filling the second row of the matrix, since the first row was computed above.
 for i = 1 : maxRows - 1     % Starting at i = 1, iterate at most maxRows - 1 times
     h = h/2                 % Halve the previous value of h since this is the start of a new row

     % Start filling row i + 1 from the left by calling the Trapezoidal function
     % with this new, smaller step size.
     A(i + 1, 1) = Trapezoidal(f, tStart, tEnd, h, y0)

     for j = 1 : i           % Go across this current (i+1)-th row until the diagonal is reached
         % To compute A(i + 1, j + 1), which is the next Richardson extrapolate,
         % use the most recently computed value (i.e. A(i + 1, j)) and the value
         % from the row above it (i.e. A(i, j)).
         A(i + 1, j + 1) = ((4^j).*A(i + 1, j) - A(i, j))/(4^j - 1);
     end

     % After leaving the above inner loop, the diagonal element of row i + 1 has been
     % computed. This diagonal element is the latest Richardson extrapolate. The
     % difference between this extrapolate and the last extrapolate of row i is a
     % good indication of the error.
     if (absoluteValue(A(i + 1, i + 1) - A(i, i)) < tolerance)   % If the result is within tolerance
         print("y = ", A(i + 1, i + 1))   % Display the result of the Richardson extrapolation
         haveWeFoundSolution = true
         break                            % Done, so leave the loop
     end
 end

 if (haveWeFoundSolution == false)   % If we were not able to find a solution to within the desired tolerance
     print("Warning: Not able to find solution to within the desired tolerance of ", tolerance)
     print("The last computed extrapolate was ", A(maxRows, maxRows))
 end
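For comparison, here is a runnable Python translation of the scheme above. Two details are assumptions made for the sake of a self-contained sketch: the ODE is stepped with Heun's explicit trapezoidal method rather than the implicit trapezoidal rule (its error expansion contains all powers h^2, h^3, ..., so row j is combined with the factor 2^{j+1} instead of 4^j), and the iteration starts from a moderate step size rather than the full interval length:

```python
def heun(f, t_start, t_end, h, y0):
    """Integrate y' = f(t, y) from t_start to t_end with Heun's (explicit
    trapezoidal) method and return the approximation of y(t_end)."""
    n = round((t_end - t_start) / h)
    t, y = t_start, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        y += h * (k1 + k2) / 2
        t += h
    return y

def solve_with_extrapolation(f, t_start, t_end, y0, initial_h, tolerance, max_rows):
    """Build the lower-triangular table row by row, halving h each time, and stop
    when consecutive diagonal extrapolates agree to within the tolerance."""
    A = [[heun(f, t_start, t_end, initial_h, y0)]]
    h = initial_h
    for i in range(1, max_rows):
        h /= 2
        row = [heun(f, t_start, t_end, h, y0)]
        for j in range(1, i + 1):
            factor = 2.0 ** (j + 1)  # cancels the h^{j+1} term of Heun's error expansion
            row.append((factor * row[j - 1] - A[i - 1][j - 1]) / (factor - 1))
        A.append(row)
        if abs(row[i] - A[i - 1][i - 1]) < tolerance:
            return row[i]
    raise RuntimeError("no solution found within the desired tolerance")

# y' = -y^2 with y(0) = 1 has exact solution y(t) = 1/(1 + t), so y(5) = 1/6.
y5 = solve_with_extrapolation(lambda t, y: -y * y, 0.0, 5.0, 1.0,
                              initial_h=0.5, tolerance=1e-11, max_rows=20)
print(y5)  # close to 0.166666...
```

As in the pseudocode, only the previous row of the table is consulted when building the next one, and the stopping test compares successive diagonal entries.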


See also

* Aitken's delta-squared process
* Takebe Kenko
* Richardson iteration


References

* C. Brezinski and M. Redivo Zaglia, ''Extrapolation Methods. Theory and Practice'', North-Holland, 1991.
* Ivan Dimov, Zahari Zlatev, Istvan Farago, Agnes Havasi, ''Richardson Extrapolation: Practical Aspects and Applications'', Walter de Gruyter, 2017, ISBN 9783110533002.


External links


* Fundamental Methods of Numerical Extrapolation With Applications (mit.edu)
* Richardson-Extrapolation
* https://www.mathworks.com/matlabcentral/fileexchange/24388-richardson-extrapolation Matlab code for generic Richardson extrapolation
* Julia code for generic Richardson extrapolation