In numerical analysis, Romberg's method is used to estimate the definite integral \int_a^b f(x) \, dx by applying Richardson extrapolation repeatedly on the trapezium rule or the rectangle rule (midpoint rule). The estimates generate a triangular array. Romberg's method is a Newton–Cotes formula: it evaluates the integrand at equally spaced points. The integrand must have continuous derivatives, though fairly good results may be obtained if only a few derivatives exist. If it is possible to evaluate the integrand at unequally spaced points, then other methods such as Gaussian quadrature and Clenshaw–Curtis quadrature are generally more accurate. The method is named after Werner Romberg, who published it in 1955.


Method

Using h_n = \tfrac{b-a}{2^{n+1}}, the method can be inductively defined by

\begin{align}
R(0,0) &= h_0 \, (f(a) + f(b)) \\
R(n,0) &= \tfrac{1}{2} R(n-1,0) + 2h_n \sum_{k=1}^{2^{n-1}} f\big(a + (2k-1)h_{n-1}\big) \\
R(n,m) &= R(n,m-1) + \tfrac{1}{4^m - 1} \big(R(n,m-1) - R(n-1,m-1)\big) \\
       &= \tfrac{1}{4^m - 1} \big( 4^m R(n,m-1) - R(n-1,m-1) \big)
\end{align}

where n \ge m and m \ge 1. In big O notation, the error for R(n, m) is O\left(h_n^{2m+2}\right).

The zeroth extrapolation, R(n, 0), is equivalent to the trapezoidal rule with 2^n + 1 points; the first extrapolation, R(n, 1), is equivalent to Simpson's rule with 2^n + 1 points. The second extrapolation, R(n, 2), is equivalent to Boole's rule with 2^n + 1 points. The further extrapolations differ from Newton–Cotes formulas. In particular, further Romberg extrapolations expand on Boole's rule in very slight ways, modifying weights into ratios similar to those in Boole's rule. In contrast, further Newton–Cotes methods produce increasingly differing weights, eventually leading to large positive and negative weights. This is indicative of how Newton–Cotes methods built on high-degree interpolating polynomials fail to converge for many integrals, while Romberg integration is more stable.

By labelling our O(h^2) approximations as A_0\big(\tfrac{h}{2^n}\big) instead of R(n, 0), we can perform Richardson extrapolation with the error formula defined below:

\int_a^b f(x) \, dx = A_0\bigg(\frac{h}{2^n}\bigg) + a_0\bigg(\frac{h}{2^n}\bigg)^{2} + a_1\bigg(\frac{h}{2^n}\bigg)^{4} + a_2\bigg(\frac{h}{2^n}\bigg)^{6} + \cdots

Once we have obtained our O\big(h^{2(m+1)}\big) approximations A_m\big(\tfrac{h}{2^n}\big), we can label them as R(n, m). When function evaluations are expensive, it may be preferable to replace the polynomial interpolation of Richardson with the rational interpolation proposed by Bulirsch and Stoer.
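Read literally, the recurrences above translate into a few lines of code. The following is a minimal Python sketch (romberg_table is an illustrative name, not a standard library routine) that builds the triangular array R(n, m) for a given integrand f:

def romberg_table(f, a, b, n_max):
    """Build the Romberg triangular array R[n][m] for 0 <= m <= n <= n_max."""
    R = [[0.0] * (n + 1) for n in range(n_max + 1)]
    R[0][0] = 0.5 * (b - a) * (f(a) + f(b))  # R(0,0): one-piece trapezium rule
    for n in range(1, n_max + 1):
        h = (b - a) / 2 ** n  # node spacing of the level-n trapezium rule, i.e. 2*h_n in the notation above
        # R(n,0): halve the step and add the newly introduced midpoints to the previous trapezium value.
        new_points = sum(f(a + (2 * k - 1) * h) for k in range(1, 2 ** (n - 1) + 1))
        R[n][0] = 0.5 * R[n - 1][0] + h * new_points
        # R(n,m): Richardson extrapolation of two entries from the previous column.
        for m in range(1, n + 1):
            R[n][m] = (4 ** m * R[n][m - 1] - R[n - 1][m - 1]) / (4 ** m - 1)
    return R

The diagonal entries R(n, n) are the successively extrapolated estimates; for a smooth integrand they converge far faster than the trapezium values in the first column.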


A geometric example

To estimate the area under a curve the trapezoid rule is applied first to one piece, then two, then four, and so on. After the trapezoid rule estimates are obtained, Richardson extrapolation is applied.

* For the first iteration the two-piece and one-piece estimates are used in the formula (4 × (more accurate) − (less accurate))/3. The same formula is then used to compare the four-piece and the two-piece estimate, and likewise for the higher estimates.
* For the second iteration the values of the first iteration are used in the formula (16 × (more accurate) − (less accurate))/15.
* The third iteration uses the next power of 4, (64 × (more accurate) − (less accurate))/63, on the values derived by the second iteration.
* The pattern is continued until there is one estimate; a short numerical sketch of these combinations is given below.
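The following is a minimal Python sketch of these combinations, using the toy integrand f(x) = x^4 on [0, 1] (chosen here purely for illustration; the helper trapezoid is not from any library):

def trapezoid(f, a, b, pieces):
    """Composite trapezoid rule with the given number of equal pieces."""
    h = (b - a) / pieces
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + k * h) for k in range(1, pieces)))

def f(x):
    return x ** 4  # toy integrand; the exact integral over [0, 1] is 0.2

t1, t2, t4 = (trapezoid(f, 0.0, 1.0, n) for n in (1, 2, 4))

# First iteration: (4 x more accurate - less accurate) / 3 on neighbouring trapezoid estimates.
s2 = (4 * t2 - t1) / 3
s4 = (4 * t4 - t2) / 3

# Second iteration: (16 x more accurate - less accurate) / 15 on the first-iteration values.
b4 = (16 * s4 - s2) / 15

print(t4, s4, b4)  # roughly 0.2207, 0.2005, 0.2000 -- each value closer to the exact 0.2

Because the second-iteration value corresponds to Boole's rule, it is exact for this fourth-degree polynomial.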


Example

As an example, the Gaussian function \tfrac{2}{\sqrt{\pi}} e^{-x^2} is integrated from 0 to 1; the exact value is the error function erf(1) ≈ 0.842700792949715. The triangular array is calculated row by row and the calculation is terminated if the two last entries in the last row differ by less than 10^{-8}.

 0.77174333
 0.82526296  0.84310283
 0.83836778  0.84273605  0.84271160
 0.84161922  0.84270304  0.84270083  0.84270066
 0.84243051  0.84270093  0.84270079  0.84270079  0.84270079

The result in the lower right corner of the triangular array is accurate to the digits shown. It is remarkable that this result is derived from the less accurate approximations obtained by the trapezium rule in the first column of the triangular array.
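As a spot check on the extrapolation, the first entry of the second column can be reproduced from the first two trapezium values of the array (the numbers below are copied from the table above):

t1, t2 = 0.77174333, 0.82526296   # one-piece and two-piece trapezium estimates from the first column
print((4 * t2 - t1) / 3)          # prints 0.8431028..., the first entry of the second column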


Implementation

Here is an example of a computer implementation of the Romberg method (in the C programming language):

#include <stdio.h>
#include <math.h>

void print_row(size_t i, double *R) {
   printf("R[%2zu] = ", i);
   for (size_t j = 0; j <= i; ++j) {
      printf("%f ", R[j]);
   }
   printf("\n");
}

/*
INPUT:
(*f)      : pointer to the function to be integrated
a         : lower limit
b         : upper limit
max_steps : maximum steps of the procedure
acc       : desired accuracy
OUTPUT:
Rp[max_steps-1] : approximate value of the integral of the function f
                  for x in [a,b] with accuracy 'acc' and steps 'max_steps'.
*/
double romberg(double (*f)(double), double a, double b, size_t max_steps, double acc)
{
   double R1[max_steps], R2[max_steps];  /* buffers for the previous and current rows */
   double *Rp = &R1[0], *Rc = &R2[0];    /* Rp points to the previous row, Rc to the current row */
   double h = b - a;                     /* step size */
   Rp[0] = (f(a) + f(b)) * h * 0.5;      /* first trapezoidal step */

   print_row(0, Rp);

   for (size_t i = 1; i < max_steps; ++i) {
      h /= 2.0;
      double c = 0;
      size_t ep = 1 << (i - 1);          /* number of new midpoints, 2^(i-1) */
      for (size_t j = 1; j <= ep; ++j) {
         c += f(a + (2 * j - 1) * h);
      }
      Rc[0] = h * c + 0.5 * Rp[0];       /* R(i,0): refined trapezoidal estimate */

      for (size_t j = 1; j <= i; ++j) {
         double n_k = pow(4, j);
         Rc[j] = (n_k * Rc[j - 1] - Rp[j - 1]) / (n_k - 1);  /* compute R(i,j) */
      }

      /* Print the i-th row; R[i,i] is the best estimate so far */
      print_row(i, Rc);

      if (i > 1 && fabs(Rp[i - 1] - Rc[i]) < acc) {
         return Rc[i];
      }

      /* Swap Rp and Rc so only the last two rows are kept */
      double *tmp = Rp;
      Rp = Rc;
      Rc = tmp;
   }
   return Rp[max_steps - 1];             /* return our best guess */
}

Here is an implementation of the Romberg method (in the Python programming language):
def print_row(i, R):
    """Prints a row of the Romberg table."""
    print(f"R[{i:2d}] = ", end="")
    for j in range(i + 1):
        print(f"{R[j]:f} ", end="")
    print()


def romberg(f, a, b, max_steps, acc):
    """
    Calculates the integral of a function using Romberg integration.

    Args:
        f: The function to integrate.
        a: Lower limit of integration.
        b: Upper limit of integration.
        max_steps: Maximum number of steps.
        acc: Desired accuracy.

    Returns:
        The approximate value of the integral.
    """
    R1, R2 = [0.0] * max_steps, [0.0] * max_steps  # Buffers for storing rows
    Rp, Rc = R1, R2  # Rp is the previous row, Rc is the current row
    h = b - a  # Step size
    Rp[0] = 0.5 * h * (f(a) + f(b))  # First trapezoidal step

    print_row(0, Rp)

    for i in range(1, max_steps):
        h /= 2.0
        c = 0
        ep = 2 ** (i - 1)  # Number of new midpoints
        for j in range(1, ep + 1):
            c += f(a + (2 * j - 1) * h)
        Rc[0] = h * c + 0.5 * Rp[0]  # R(i,0): refined trapezoidal estimate

        for j in range(1, i + 1):
            n_k = 4 ** j
            Rc[j] = (n_k * Rc[j - 1] - Rp[j - 1]) / (n_k - 1)  # Compute R(i,j)

        # Print ith row of R; R[i,i] is the best estimate so far
        print_row(i, Rc)

        if i > 1 and abs(Rp[i - 1] - Rc[i]) < acc:
            return Rc[i]

        # Swap Rp and Rc for the next iteration
        Rp, Rc = Rc, Rp

    return Rp[max_steps - 1]  # Return our best guess
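For instance, the Python romberg function above can be applied to the integral from the example section (the Gaussian integrand and the 10^{-8} tolerance come from that section; max_steps = 10 is an arbitrary cap chosen here):

import math

def gauss(x):
    """Integrand whose integral over [0, 1] equals erf(1)."""
    return 2.0 / math.sqrt(math.pi) * math.exp(-x * x)

approx = romberg(gauss, 0.0, 1.0, max_steps=10, acc=1e-8)
print(approx, math.erf(1.0))  # the two values agree to roughly eight digits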




External links


* ROMBINT – code for MATLAB (author: Martin Kacenak)
* Free online integration tool using Romberg, Fox–Romberg, Gauss–Legendre and other numerical methods
* Romberg.jl – Julia implementation (supporting arbitrary factorizations, ''not'' just 2^n + 1 points)