Adaptive Simpson's Method

Adaptive Simpson's method, also called adaptive Simpson's rule, is a method of numerical integration proposed by G. F. Kuncir in 1962. It is probably the first recursive adaptive algorithm for numerical integration to appear in print (an earlier, non-recursive adaptive integrator, more reminiscent of ODE solvers, is discussed under Related algorithms below), although more modern adaptive methods based on Gauss–Kronrod quadrature and Clenshaw–Curtis quadrature are now generally preferred. Adaptive Simpson's method uses an estimate of the error obtained from calculating a definite integral using Simpson's rule. If the error exceeds a user-specified tolerance, the algorithm calls for subdividing the interval of integration in two and applying adaptive Simpson's method to each subinterval in a recursive manner. The technique is usually much more efficient than composite Simpson's rule, since it uses fewer function evaluations in places where the function is well approximated by a cubic function.
Simpson's rule is an interpolatory quadrature rule which is exact when the integrand is a polynomial of degree three or lower. Using Richardson extrapolation, the more accurate Simpson estimate S(a,m) + S(m,b) for six function values is combined with the less accurate estimate S(a,b) for three function values by applying the correction \frac{S(a,m) + S(m,b) - S(a,b)}{15}. The estimate so obtained is exact for polynomials of degree five or less.
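For reference, the quantities combined above are given by Simpson's rule on the whole interval and on its two halves (standard formulas, restated here for convenience, with m = \frac{a+b}{2}):

S(a,b) = \frac{b-a}{6}\left(f(a) + 4 f(m) + f(b)\right),

so the value actually returned for an accepted interval is the corrected estimate

S(a,m) + S(m,b) + \frac{S(a,m) + S(m,b) - S(a,b)}{15}.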


Mathematical Procedure


Defining Terms

A criterion for determining when to stop subdividing an interval, suggested by J. N. Lyness, is

|S(a,m) + S(m,b) - S(a,b)| < 15 \varepsilon,

where [a,b] is an interval with midpoint m = \frac{a+b}{2}, while S(a,b), S(a,m), and S(m,b) given by Simpson's rule are the estimates of \int_a^b f(x) \, dx, \int_a^m f(x) \, dx, and \int_m^b f(x) \, dx respectively, and \varepsilon is the desired maximum error tolerance for the interval. Note that the tolerance is halved along with the interval: \varepsilon_{i+1} = \varepsilon_i / 2, where i is the depth of the subdivision.
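As a concrete illustration, the following Python fragment (a minimal sketch written for this criterion; the helper simpson is introduced here and is not part of the samples below) evaluates the criterion once for f(x) = \sin x on [0, 1]:

from math import sin

def simpson(f, a, b):
    # plain Simpson's rule S(a, b) over a single interval
    m = (a + b) / 2
    return (b - a) / 6 * (f(a) + 4 * f(m) + f(b))

f, a, b, eps = sin, 0.0, 1.0, 1e-9
m = (a + b) / 2
delta = simpson(f, a, m) + simpson(f, m, b) - simpson(f, a, b)
# the interval is accepted only if |S(a,m) + S(m,b) - S(a,b)| < 15*eps
print(abs(delta) < 15 * eps)  # False: |delta| ~ 1.5e-4, so [0, 1] is subdivided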


Procedural Steps

To perform adaptive Simpson's method, do the following: if |S(a,m) + S(m,b) - S(a,b)| < 15 \varepsilon_i, add S(a,m) and S(m,b) to the sum of Simpson's rules which are used to approximate the integral; otherwise, perform the same operation on each half of the interval, testing |S(a,p) + S(p,m) - S(a,m)| < 15 \varepsilon_{i+1} and |S(m,q) + S(q,b) - S(m,b)| < 15 \varepsilon_{i+1} instead of |S(a,m) + S(m,b) - S(a,b)| < 15 \varepsilon_i, where p = \frac{a+m}{2} and q = \frac{m+b}{2} are the midpoints of [a,m] and [m,b].


Numerical consideration

Some inputs will fail to converge in adaptive Simpson's method quickly, resulting in the tolerance underflowing and producing an infinite loop. Simple methods of guarding against this problem include adding a depth limitation (like in the C sample and in McKeeman), verifying that \varepsilon/2 does not round to \varepsilon in floating-point arithmetic, or both (like Kuncir). The interval size may also approach the local machine epsilon, giving m = a.

Lyness's 1969 paper includes a "Modification 4" that addresses this problem in a more concrete way:

* Let the initial interval be [A, B]. Let the original tolerance be ε0.
* For each subinterval [a, b], define D(a,b), the error estimate, as \frac{|S(a,m) + S(m,b) - S(a,b)|}{b-a}. Define E = \frac{15 \varepsilon_0}{B-A}. The original termination criterion would then become D(a,b) \le E.
* If D(a,m) \ge D(a,b), either the round-off level has been reached or a zero of f^{(4)} is found in the interval. A change in the tolerance ''ε''0 to ''ε′''0 is necessary.
** The recursive routines now need to return a ''D'' level for the current interval. A routine-static variable is defined and initialized to ''E''.
** (Modification 4 i, ii) If further recursion is used on an interval:
**# If round-off appears to have been reached, raise ''E′'' to the ''D'' actually achieved.
**# Otherwise, lower ''E′'' back toward ''E''.
** Some control of the adjustments is necessary. Significant increases and minor decreases of the tolerances should be inhibited.
* To calculate the effective ''ε′''0 over the entire interval:
** Log each point at which ''E′'' is changed into an array of pairs. The first entry should be (A, ε0).
** The actual ''εeff'' is the arithmetic mean of all ''ε′''0, weighted by the width of the intervals.
* If the current ''E′'' for an interval is higher than ''E'', then the fifth-order acceleration/correction would not apply:
** The "15" factor in the termination criterion is disabled.
** The correction term should not be used.

The epsilon-raising maneuver allows the routine to be used in a "best effort" mode: given a zero initial tolerance, the routine will try to get the most precise answer and return an actual error level.
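The following Python fragment sketches the epsilon-raising maneuver in a much-simplified form; it is not Lyness's actual Modification 4. The round-off test is reduced to a plain depth limit, the D/E bookkeeping to a width-weighted log, and the names (integrate_best_effort, max_depth) are inventions of this sketch:

from math import sin

def simpson(f, a, b):
    # plain Simpson's rule S(a, b) over a single interval
    m = (a + b) / 2
    return (b - a) / 6 * (f(a) + 4 * f(m) + f(b))

def integrate_best_effort(f, A, B, eps0=0.0, max_depth=12):
    # Best-effort integration: intervals that cannot be resolved within
    # their share of eps0 have their local tolerance raised to whatever
    # was actually achieved, and the change is logged so a width-weighted
    # effective tolerance can be reported alongside the result.
    log = []  # (interval width, tolerance actually met on that interval)

    def asr(a, b, eps, depth):
        m = (a + b) / 2
        whole = simpson(f, a, b)
        parts = simpson(f, a, m) + simpson(f, m, b)
        delta = parts - whole
        if abs(delta) <= 15 * eps:       # accepted at the requested tolerance
            log.append((b - a, eps))
            return parts + delta / 15    # fifth-order corrected estimate
        if depth <= 0:                   # stand-in for the round-off test:
            log.append((b - a, abs(delta) / 15))  # raise epsilon locally
            return parts                 # correction term not used once raised
        return asr(a, m, eps / 2, depth - 1) + asr(m, b, eps / 2, depth - 1)

    result = asr(A, B, eps0, max_depth)
    # effective tolerance: mean of local tolerances, weighted by interval width
    eps_eff = sum(w * e for w, e in log) / (B - A)
    return result, eps_eff

# zero initial tolerance: return the most precise answer reachable at this
# depth, together with the error level actually achieved
val, eps_eff = integrate_best_effort(sin, 0.0, 1.0)
print(val, eps_eff)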


Sample code implementations

A common implementation technique shown below is passing down f(a), f(b), and f(m) along with the interval [a,b]. These values, already used for evaluating S(a,b) at the parent level, are used again for the subintervals. Doing so cuts the cost of each recursive call from 6 to 2 evaluations of the input function. The stack space used stays linear in the depth of recursion.


Python

Here is an implementation of adaptive Simpson's method in Python.
from __future__ import division  # python 2 compat

# "structured" adaptive version, translated from Racket
def _quad_simpsons_mem(f, a, fa, b, fb):
    """Evaluates the Simpson's Rule, also returning m and f(m) to reuse"""
    m = (a + b) / 2
    fm = f(m)
    return (m, fm, abs(b - a) / 6 * (fa + 4 * fm + fb))

def _quad_asr(f, a, fa, b, fb, eps, whole, m, fm):
    """
    Efficient recursive implementation of adaptive Simpson's rule.
    Function values at the start, middle, end of the intervals are retained.
    """
    lm, flm, left  = _quad_simpsons_mem(f, a, fa, m, fm)
    rm, frm, right = _quad_simpsons_mem(f, m, fm, b, fb)
    delta = left + right - whole
    if abs(delta) <= 15 * eps:
        return left + right + delta / 15
    return _quad_asr(f, a, fa, m, fm, eps/2, left , lm, flm) +\
           _quad_asr(f, m, fm, b, fb, eps/2, right, rm, frm)

def quad_asr(f, a, b, eps):
    """Integrate f from a to b using Adaptive Simpson's Rule with max error of eps."""
    fa, fb = f(a), f(b)
    m, fm, whole = _quad_simpsons_mem(f, a, fa, b, fb)
    return _quad_asr(f, a, fa, b, fb, eps, whole, m, fm)

from math import sin
print(quad_asr(sin, 0, 1, 1e-09))
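The final call integrates sin from 0 to 1; it prints a value close to 0.4596977, in agreement with the exact value 1 - cos 1.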


C

Here is an implementation of the adaptive Simpson's method in C99 that avoids redundant evaluations of f and redundant quadrature computations. It includes all three "simple" defenses against numerical problems.

#include <math.h>   // include file for fabs and sin
#include <stdio.h>  // include file for printf and perror
#include <errno.h>

/** Adaptive Simpson's Rule, Recursive Core */
float adaptiveSimpsonsAux(float (*f)(float), float a, float b, float eps,
                          float whole, float fa, float fb, float fm, int rec) {
    float m  = (a + b)/2, h  = (b - a)/2;
    float lm = (a + m)/2, rm = (m + b)/2;
    // serious numerical trouble: the interval or the tolerance
    // has shrunk below what the floating-point format can resolve
    if ((eps/2 == eps) || (a == lm)) { errno = EDOM; return whole; }
    float flm = f(lm), frm = f(rm);
    float left  = (h/6) * (fa + 4*flm + fm);
    float right = (h/6) * (fm + 4*frm + fb);
    float delta = left + right - whole;

    if (rec <= 0 && errno != EDOM) errno = ERANGE; // depth limit too shallow
    // Lyness criterion with the Richardson-extrapolation correction
    if (rec <= 0 || fabsf(delta) <= 15*eps)
        return left + right + delta/15;
    return adaptiveSimpsonsAux(f, a, m, eps/2, left,  fa, fm, flm, rec - 1) +
           adaptiveSimpsonsAux(f, m, b, eps/2, right, fm, fb, frm, rec - 1);
}

/** Adaptive Simpson's Rule Wrapper
 *  (fills in cached function evaluations) */
float adaptiveSimpsons(float (*f)(float), // function ptr to integrate
                       float a, float b,  // interval [a,b]
                       float epsilon,     // error tolerance
                       int maxRecDepth) { // recursion cap
    errno = 0;
    float h = b - a;
    if (h == 0) return 0;
    float fa = f(a), fb = f(b), fm = f((a + b)/2);
    float S = (h/6) * (fa + 4*fm + fb);
    return adaptiveSimpsonsAux(f, a, b, epsilon, S, fa, fb, fm, maxRecDepth);
}

/** usage example */
#include <stdlib.h> // for the hostile example (rand function)
static int callcnt = 0;
static float sinfc(float x)    { callcnt++; return sinf(x); }
static float frand48c(float x) { callcnt++; return (float)drand48(); }

int main(void) {
    // a well-behaved integrand: converges quickly
    float t = adaptiveSimpsons(sinfc, 0, 2, 1e-5f, 50);
    printf("integral: %f (%d evaluations)\n", t, callcnt); // ~1.416147
    if (errno) perror("adaptiveSimpsons");

    // a hostile integrand: random noise never converges; the depth limit
    // and the errno reporting keep the routine from looping forever
    callcnt = 0;
    t = adaptiveSimpsons(frand48c, 0, 0.25f, 1e-5f, 25);
    printf("integral: %f (%d evaluations)\n", t, callcnt);
    if (errno) perror("adaptiveSimpsons");
    return 0;
}

This implementation has been incorporated into a C++ ray tracer intended for X-ray laser simulation at Oak Ridge National Laboratory, among other projects. The ORNL version has been enhanced with a call counter, templates for different datatypes, and wrappers for integrating over multiple dimensions.


Racket

Here is an implementation of the adaptive Simpson method in Racket with a behavioral software contract. The exported function computes the definite integral of a given function ''f''.

;; -----------------------------------------------------------------------------
;; interface, with contract
(provide/contract
 [adaptive-simpson
  (->i ((f (-> real? real?)) (L real?) (R (L) (and/c real? (>/c L))))
       (#:epsilon (ε real?))
       (r real?))])

;; -----------------------------------------------------------------------------
;; implementation
(define (adaptive-simpson f L R #:epsilon [ε .000000001])
  (define f@L (f L))
  (define f@R (f R))
  (define-values (M f@M whole) (simpson-1call-to-f f L f@L R f@R))
  (asr f L f@L R f@R ε whole M f@M))

;; the "efficient" implementation
(define (asr f L f@L R f@R ε whole M f@M)
  (define-values (leftM f@leftM left*) (simpson-1call-to-f f L f@L M f@M))
  (define-values (rightM f@rightM right*) (simpson-1call-to-f f M f@M R f@R))
  (define delta* (- (+ left* right*) whole))
  (cond
    [(<= (abs delta*) (* 15 ε)) (+ left* right* (/ delta* 15))]
    [else (define epsilon1 (/ ε 2))
          (+ (asr f L f@L M f@M epsilon1 left* leftM f@leftM)
             (asr f M f@M R f@R epsilon1 right* rightM f@rightM))]))

;; evaluate half an interval (1 func eval)
(define (simpson-1call-to-f f L f@L R f@R)
  (define M (mid L R))
  (define f@M (f M))
  (values M f@M (* (/ (abs (- R L)) 6) (+ f@L (* 4 f@M) f@R))))

(define (mid L R) (/ (+ L R) 2.))


Related algorithms

* Henriksson (1961) is a non-recursive variant of Simpson's rule. It "adapts" by integrating from left to right and adjusting the interval width as needed.
* Kuncir's Algorithm 103 (1962) is the original recursive, bisecting, adaptive integrator. Algorithm 103 consists of a larger routine with a nested subroutine (loop AA), made recursive by the use of the goto statement. It guards against the underflowing of interval widths (loop BB), and aborts as soon as the user-specified eps is exceeded. The termination criterion is |S^{(2)}(a,b) - S(a,b)| < 2^{-n} \epsilon, where n is the current level of recursion and S^{(2)} is the more accurate estimate.
* McKeeman's Algorithm 145 (1962) is a similarly recursive integrator that splits the interval into three instead of two parts. The recursion is written in a more familiar manner. The 1962 algorithm, found to be over-cautious, uses |S^{(3)}(a,b) - S(a,b)| < 3^{-n} \epsilon for termination, so a 1963 improvement uses \sqrt{3}^{\,-n} \epsilon instead.
* Lyness (1969) is almost the current integrator. Created as a set of four modifications of McKeeman 1962, it replaces trisection with bisection to lower computational costs (Modifications 1+2, coinciding with the Kuncir integrator) and improves McKeeman's 1962/63 error estimates to the fifth order (Modification 3), in a way related to Boole's rule and Romberg's method. Modification 4, not implemented here, contains provisions for round-off error that allow raising ε to the minimum allowed by the current precision and returning the new error level.


