Adaptive step size

In mathematics and numerical analysis, an adaptive step size is used in some methods for the numerical solution of ordinary differential equations (including the special case of numerical integration) in order to control the errors of the method and to ensure stability properties such as A-stability. Using an adaptive stepsize is of particular importance when there is a large variation in the size of the derivative. For example, when modeling the motion of a satellite about the Earth as a standard Kepler orbit, a fixed time-stepping method such as the Euler method may be sufficient. However, things are more difficult if one wishes to model the motion of a spacecraft taking into account both the Earth and the Moon, as in the three-body problem. There, scenarios emerge where one can take large time steps when the spacecraft is far from the Earth and Moon, but if the spacecraft gets close to colliding with one of the planetary bodies, then small time steps are needed. Romberg's method and Runge–Kutta–Fehlberg are examples of numerical integration methods which use an adaptive stepsize.


Example

For simplicity, the following example uses the simplest integration method, the Euler method; in practice, higher-order methods such as Runge–Kutta methods are preferred due to their superior convergence and stability properties.

Consider the initial value problem

: y'(t) = f(t,y(t)), \qquad y(a) = y_a

where ''y'' and ''f'' may denote vectors (in which case this equation represents a system of coupled ODEs in several variables). We are given the function ''f''(''t'', ''y'') and the initial conditions (''a'', ''y_a''), and we are interested in finding the solution at ''t'' = ''b''. Let ''y''(''b'') denote the exact solution at ''b'', and let ''y_b'' denote the solution that we compute. We write y_b + \varepsilon = y(b), where \varepsilon is the error in the numerical solution.

For a sequence (''t_n'') of values of ''t'', with ''t_n'' = ''a'' + ''nh'', the Euler method gives approximations to the corresponding values of ''y''(''t_n'') as

: y_{n+1}^{(0)} = y_n + h f(t_n, y_n)

The local truncation error of this approximation is defined by

: \tau_{n+1}^{(0)} = y(t_{n+1}) - y_{n+1}^{(0)}

and by Taylor's theorem, it can be shown that (provided ''f'' is sufficiently smooth) the local truncation error is proportional to the square of the step size:

: \tau_{n+1}^{(0)} = c h^2

where ''c'' is some constant of proportionality. We have marked this solution and its error with a (0). The value of ''c'' is not known to us.

Let us now apply Euler's method again with a different step size to generate a second approximation to ''y''(''t''_{''n''+1}). We get a second solution, which we label with a (1). Take the new step size to be one half of the original step size, and apply two steps of Euler's method. This second solution is presumably more accurate. Since we have to apply Euler's method twice, the local error is (in the worst case) twice the original error.

: y_{n+\frac{1}{2}} = y_n + \frac{h}{2} f(t_n, y_n)

: y_{n+1}^{(1)} = y_{n+\frac{1}{2}} + \frac{h}{2} f\!\left(t_{n+\frac{1}{2}}, y_{n+\frac{1}{2}}\right)

: \tau_{n+1}^{(1)} = c \left(\frac{h}{2}\right)^2 + c \left(\frac{h}{2}\right)^2 = 2c \left(\frac{h}{2}\right)^2 = \frac{1}{2} c h^2 = \frac{1}{2} \tau_{n+1}^{(0)}

: y_{n+1}^{(1)} + \tau_{n+1}^{(1)} = y(t + h)

Here, we assume the error factor ''c'' is constant over the interval [''t'', ''t'' + ''h'']. In reality, its rate of change is proportional to y^{(3)}(t). Subtracting the two solutions gives the error estimate:

: y_{n+1}^{(1)} - y_{n+1}^{(0)} = \tau_{n+1}^{(1)}

This local error estimate is third order accurate.

The local error estimate can be used to decide how the stepsize ''h'' should be modified to achieve the desired accuracy. For example, if a local tolerance of \text{tol} is allowed, we could let ''h'' evolve like:

: h \rightarrow 0.9 \times h \times \min \left( \max \left( \left( \frac{\text{tol}}{\left| \tau_{n+1}^{(1)} \right|} \right)^{1/2}, 0.3 \right), 2 \right)

The 0.9 is a safety factor to ensure success on the next try. The minimum and maximum are to prevent extreme changes from the previous stepsize. This should, in principle, give an error of about 0.9 \times \text{tol} in the next try. If \left| \tau_{n+1}^{(1)} \right| < \text{tol}, we consider the step successful, and the error estimate is used to improve the solution:

: y_{n+1}^{(2)} = y_{n+1}^{(1)} + \tau_{n+1}^{(1)}

This solution is actually third order accurate in the local scope (second order in the global scope), but since there is no error estimate for it, this doesn't help in reducing the number of steps. This technique is called Richardson extrapolation.
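To make the procedure concrete, the following is a minimal Python sketch of a single step of this scheme for a scalar problem; the function names (`euler_double_step`, `next_stepsize`) are illustrative, not from any particular library.

```python
import math

def euler_double_step(f, t, y, h):
    """One full Euler step and two half steps; returns both approximations
    and the local error estimate tau1 = y1 - y0 derived in the text."""
    y0 = y + h * f(t, y)                          # y_{n+1}^{(0)}: one full step
    y_half = y + (h / 2) * f(t, y)                # first half step
    y1 = y_half + (h / 2) * f(t + h / 2, y_half)  # y_{n+1}^{(1)}: second half step
    return y0, y1, y1 - y0                        # error estimate tau_{n+1}^{(1)}

def next_stepsize(h, tau1, tol):
    """Step-size update with safety factor 0.9; the change relative to the
    previous stepsize is clamped to the interval [0.3, 2]."""
    factor = math.sqrt(tol / abs(tau1)) if tau1 != 0 else 2.0
    return 0.9 * h * min(max(factor, 0.3), 2.0)
```

The square root appears because the local error of the Euler method scales as h^2, so the ratio tol / |\tau| must be raised to the power 1/2 to predict the admissible stepsize.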
Beginning with an initial stepsize of h = b - a, this theory makes it possible to integrate the ODE from point ''a'' to ''b'' in a controlled way, using an optimal number of steps for a given local error tolerance. A drawback is that the step size may become prohibitively small, especially when using the low-order Euler method.
Similar strategies can be developed for higher-order methods, such as the fourth-order Runge–Kutta method. Also, a global error tolerance can be achieved by scaling the local error to the global scope.
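Putting the pieces together, the following self-contained Python sketch integrates a scalar ODE from ''a'' to ''b'' with the step-doubling controller described above; it is an illustrative toy under the assumptions of this section, not a robust production solver.

```python
import math

def integrate_adaptive_euler(f, a, b, ya, tol):
    """Adaptive Euler integration of y' = f(t, y), y(a) = ya, up to t = b,
    using step doubling for the local error estimate."""
    t, y = a, ya
    h = b - a                                   # initial stepsize, as in the text
    while t < b:
        h = min(h, b - t)                       # do not step past b
        y0 = y + h * f(t, y)                    # full step
        y_half = y + (h / 2) * f(t, y)          # two half steps
        y1 = y_half + (h / 2) * f(t + h / 2, y_half)
        tau1 = y1 - y0                          # local error estimate
        if abs(tau1) < tol:                     # step accepted
            y = y1 + tau1                       # local extrapolation: y_{n+1}^{(2)}
            t = t + h
        # whether accepted or rejected, adapt the stepsize for the next attempt
        factor = math.sqrt(tol / abs(tau1)) if tau1 != 0 else 2.0
        h = 0.9 * h * min(max(factor, 0.3), 2.0)
    return y

# Toy check: y' = -2ty with y(0) = 1 has exact solution exp(-t^2)
print(integrate_adaptive_euler(lambda t, y: -2 * t * y, 0.0, 1.0, 1.0, 1e-6),
      math.exp(-1.0))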


Embedded error estimates

Adaptive stepsize methods that use a so-called 'embedded' error estimate include the Bogacki–Shampine, Runge–Kutta–Fehlberg, Cash–Karp and Dormand–Prince methods. These methods are considered to be more computationally efficient, but have lower accuracy in their error estimates.

To illustrate the ideas of an embedded method, consider the following scheme which updates y_n:

: y_{n+1} = y_n + h_n \psi(t_n, y_n, h_n)

: t_{n+1} = t_n + h_n

The next step h_n is predicted from the previous information, h_n = g(t_n, y_n, h_{n-1}). For an embedded RK method, the computation of \psi includes a lower-order RK method \tilde{\psi}. The error can then be simply written as

: \mathrm{err}_n = \tilde{y}_{n+1} - y_{n+1} = h_n \left( \tilde{\psi}(t_n, y_n, h_n) - \psi(t_n, y_n, h_n) \right)

\mathrm{err}_n is the unnormalized error. To normalize it, we compare it against a user-defined tolerance, which consists of an absolute tolerance and a relative tolerance:

: \mathrm{sc}_n = \mathrm{atol} + \mathrm{rtol} \cdot \max\left( |y_n|, |y_{n+1}| \right)

: E_n = \mathrm{norm}\left( \mathrm{err}_n / \mathrm{sc}_n \right)

Then we compare the normalized error E_n against 1 to obtain the predicted h_n:

: h_n = h_{n-1} \left( 1 / E_n \right)^{1/(q+1)}

The parameter ''q'' is the order of the lower-order RK method \tilde{\psi}. The above prediction formula is plausible in the sense that it enlarges the step if the estimated local error is smaller than the tolerance and shrinks the step otherwise.

The description given above is a simplified procedure used in the stepsize control for explicit RK solvers. A more detailed treatment can be found in the textbook of Hairer, Nørsett and Wanner. The ODE solvers in many programming languages use this procedure as the default strategy for adaptive stepsize control, adding other engineering parameters to make the system more stable.
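As a minimal illustration of such an embedded pair, consider Heun's method (order 2) with a first-order Euler solution embedded, i.e. a 2(1) pair with q = 1. The Python sketch below uses illustrative names and a plain scalar norm; real solvers layer a safety factor and step-change limits on top of the bare prediction formula.

```python
def heun_euler_step(f, t, y, h, atol, rtol):
    """One step of an embedded Heun(2)/Euler(1) pair: returns the order-2
    update, the normalized error E_n, and the predicted next stepsize."""
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)
    y_high = y + (h / 2) * (k1 + k2)     # order-2 (Heun) update, used to advance
    y_low = y + h * k1                   # embedded order-1 (Euler) update
    err = y_low - y_high                 # unnormalized error err_n
    sc = atol + rtol * max(abs(y), abs(y_high))   # tolerance scale sc_n
    E = abs(err) / sc                    # normalized error E_n (scalar norm)
    q = 1                                # order of the lower-order method
    h_next = h * (1.0 / E) ** (1.0 / (q + 1)) if E > 0 else 2.0 * h
    return y_high, E, h_next
```

A step would be accepted when E_n ≤ 1 and retried with the smaller predicted stepsize otherwise. This is, for example, the general shape of the control used by SciPy's `solve_ivp` with its default Dormand–Prince pair (`method='RK45'`), which exposes the same `atol` and `rtol` parameters.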


See also

* Adaptive quadrature
* Adaptive numerical differentiation


References

* E. Hairer, S. P. Nørsett, G. Wanner, ''Solving Ordinary Differential Equations I: Nonstiff Problems'', Springer, Sec. II.

Further reading

* William H. Press, Saul A. Teukolsky, William T. Vetterling, Brian P. Flannery, ''Numerical Recipes in C'', Second Edition, Cambridge University Press, 1992.
* Kendall E. Atkinson, ''Numerical Analysis'', Second Edition, John Wiley & Sons, 1989.