Linear multistep method

Linear multistep methods are used for the numerical solution of ordinary differential equations. Conceptually, a numerical method starts from an initial point and then takes a short step forward in time to find the next solution point. The process continues with subsequent steps to map out the solution. Single-step methods (such as Euler's method) refer to only one previous point and its derivative to determine the current value. Methods such as Runge–Kutta take some intermediate steps (for example, a half-step) to obtain a higher-order method, but then discard all previous information before taking a second step. Multistep methods attempt to gain efficiency by keeping and using the information from previous steps rather than discarding it. Consequently, multistep methods refer to several previous points and derivative values. In the case of ''linear'' multistep methods, a linear combination of the previous points and derivative values is used.


Definitions

Numerical methods for ordinary differential equations approximate solutions to initial value problems of the form
y' = f(t,y), \quad y(t_0) = y_0.
The result is approximations for the value of y(t) at discrete times t_i:
y_i \approx y(t_i) \quad\text{for}\quad t_i = t_0 + ih,
where h is the time step (sometimes referred to as \Delta t) and i is an integer. Multistep methods use information from the previous s steps to calculate the next value. In particular, a ''linear'' multistep method uses a linear combination of y_i and f(t_i,y_i) to calculate the value of y for the desired current step. Thus, a linear multistep method is a method of the form
\begin{align}
& y_{n+s} + a_{s-1} \cdot y_{n+s-1} + a_{s-2} \cdot y_{n+s-2} + \cdots + a_0 \cdot y_n \\
& \qquad = h \cdot \left( b_s \cdot f(t_{n+s},y_{n+s}) + b_{s-1} \cdot f(t_{n+s-1},y_{n+s-1}) + \cdots + b_0 \cdot f(t_n,y_n) \right) \\
& \Leftrightarrow \sum_{j=0}^s a_j y_{n+j} = h \sum_{j=0}^s b_j f(t_{n+j},y_{n+j}),
\end{align}
with a_s = 1. The coefficients a_0, \dotsc, a_{s-1} and b_0, \dotsc, b_s determine the method. The designer of the method chooses the coefficients, balancing the need to get a good approximation to the true solution against the desire to get a method that is easy to apply. Often, many coefficients are zero to simplify the method.

One can distinguish between explicit and implicit methods. If b_s = 0, then the method is called "explicit", since the formula can directly compute y_{n+s}. If b_s \ne 0, then the method is called "implicit", since the value of y_{n+s} depends on the value of f(t_{n+s}, y_{n+s}), and the equation must be solved for y_{n+s}. Iterative methods such as Newton's method are often used to solve the implicit formula. Sometimes an explicit multistep method is used to "predict" the value of y_{n+s}. That value is then used in an implicit formula to "correct" the value. The result is a predictor–corrector method.
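The following Python sketch illustrates the idea (the function name and calling convention are illustrative, not from any particular library): the explicit two-step Adams–Bashforth method predicts the new value, and a single evaluation of the implicit trapezoidal rule corrects it.

def pec_step(f, t_n, y_n, y_n1, h):
    """One predict-evaluate-correct step: returns y_{n+2} given y_n and y_{n+1}."""
    f_n = f(t_n, y_n)
    f_n1 = f(t_n + h, y_n1)
    # Predict with the explicit two-step Adams-Bashforth method.
    y_pred = y_n1 + h * (1.5 * f_n1 - 0.5 * f_n)
    # Correct with one evaluation of the implicit trapezoidal rule,
    # using the predicted value instead of solving the implicit equation.
    return y_n1 + 0.5 * h * (f_n1 + f(t_n + 2 * h, y_pred))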


Examples

Consider, as an example, the problem y' = f(t,y) = y, \quad y(0) = 1. The exact solution is y(t) = e^t.


One-step Euler

A simple numerical method is Euler's method:
y_{n+1} = y_n + hf(t_n, y_n).
Euler's method can be viewed as an explicit multistep method for the degenerate case of one step. This method, applied with step size h = \tfrac{1}{2} to the problem y' = y, gives the following results:
\begin{align}
y_1 &= y_0 + hf(t_0, y_0) = 1 + \tfrac{1}{2} \cdot 1 = 1.5, \\
y_2 &= y_1 + hf(t_1, y_1) = 1.5 + \tfrac{1}{2} \cdot 1.5 = 2.25, \\
y_3 &= y_2 + hf(t_2, y_2) = 2.25 + \tfrac{1}{2} \cdot 2.25 = 3.375, \\
y_4 &= y_3 + hf(t_3, y_3) = 3.375 + \tfrac{1}{2} \cdot 3.375 = 5.0625.
\end{align}
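A minimal Python version of this computation (a sketch with illustrative names) reproduces the table above:

def euler(f, t0, y0, h, steps):
    """Explicit Euler: y_{n+1} = y_n + h*f(t_n, y_n)."""
    t, y, ys = t0, y0, [y0]
    for _ in range(steps):
        y = y + h * f(t, y)
        t = t + h
        ys.append(y)
    return ys

print(euler(lambda t, y: y, 0.0, 1.0, 0.5, 4))
# prints [1.0, 1.5, 2.25, 3.375, 5.0625]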


Two-step Adams–Bashforth

Euler's method is a one-step method. A simple multistep method is the two-step Adams–Bashforth method
y_{n+2} = y_{n+1} + \tfrac{3}{2} hf(t_{n+1},y_{n+1}) - \tfrac{1}{2} hf(t_n,y_n).
This method needs two values, y_{n+1} and y_n, to compute the next value, y_{n+2}. However, the initial value problem provides only one value, y_0 = 1. One possibility to resolve this issue is to use the y_1 computed by Euler's method as the second value. With this choice, the Adams–Bashforth method yields (rounded to four digits):
\begin{align}
y_2 &= y_1 + \tfrac{3}{2} hf(t_1, y_1) - \tfrac{1}{2} hf(t_0, y_0) = 1.5 + \tfrac{3}{2} \cdot \tfrac{1}{2} \cdot 1.5 - \tfrac{1}{2} \cdot \tfrac{1}{2} \cdot 1 = 2.375, \\
y_3 &= y_2 + \tfrac{3}{2} hf(t_2, y_2) - \tfrac{1}{2} hf(t_1, y_1) = 2.375 + \tfrac{3}{2} \cdot \tfrac{1}{2} \cdot 2.375 - \tfrac{1}{2} \cdot \tfrac{1}{2} \cdot 1.5 = 3.7812, \\
y_4 &= y_3 + \tfrac{3}{2} hf(t_3, y_3) - \tfrac{1}{2} hf(t_2, y_2) = 3.7812 + \tfrac{3}{2} \cdot \tfrac{1}{2} \cdot 3.7812 - \tfrac{1}{2} \cdot \tfrac{1}{2} \cdot 2.375 = 6.0234.
\end{align}
The exact solution at t = t_4 = 2 is e^2 = 7.3891\ldots, so the two-step Adams–Bashforth method is more accurate than Euler's method. This is always the case if the step size is small enough.
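The same computation in Python (a sketch along the lines of the Euler code above, bootstrapped with one Euler step):

def adams_bashforth2(f, t0, y0, h, steps):
    """Two-step Adams-Bashforth; the second starting value comes from Euler."""
    ts = [t0, t0 + h]
    ys = [y0, y0 + h * f(t0, y0)]
    for k in range(1, steps):
        ys.append(ys[k] + h * (1.5 * f(ts[k], ys[k]) - 0.5 * f(ts[k-1], ys[k-1])))
        ts.append(ts[k] + h)
    return ys

print(adams_bashforth2(lambda t, y: y, 0.0, 1.0, 0.5, 4))
# prints [1.0, 1.5, 2.375, 3.78125, 6.0234375]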


Families of multistep methods

Three families of linear multistep methods are commonly used: Adams–Bashforth methods, Adams–Moulton methods, and the backward differentiation formulas (BDFs).


Adams–Bashforth methods

The Adams–Bashforth methods are explicit methods. The coefficients are a_{s-1} = -1 and a_{s-2} = \cdots = a_0 = 0, while the b_j are chosen such that the methods have order ''s'' (this determines the methods uniquely). The Adams–Bashforth methods with ''s'' = 1, 2, 3, 4, 5 are:
\begin{align}
y_{n+1} &= y_n + hf(t_n, y_n), \qquad \text{(this is the Euler method)} \\
y_{n+2} &= y_{n+1} + h\left( \tfrac{3}{2} f(t_{n+1}, y_{n+1}) - \tfrac{1}{2} f(t_n, y_n) \right), \\
y_{n+3} &= y_{n+2} + h\left( \tfrac{23}{12} f(t_{n+2}, y_{n+2}) - \tfrac{16}{12} f(t_{n+1}, y_{n+1}) + \tfrac{5}{12} f(t_n, y_n) \right), \\
y_{n+4} &= y_{n+3} + h\left( \tfrac{55}{24} f(t_{n+3}, y_{n+3}) - \tfrac{59}{24} f(t_{n+2}, y_{n+2}) + \tfrac{37}{24} f(t_{n+1}, y_{n+1}) - \tfrac{9}{24} f(t_n, y_n) \right), \\
y_{n+5} &= y_{n+4} + h\left( \tfrac{1901}{720} f(t_{n+4}, y_{n+4}) - \tfrac{2774}{720} f(t_{n+3}, y_{n+3}) + \tfrac{2616}{720} f(t_{n+2}, y_{n+2}) - \tfrac{1274}{720} f(t_{n+1}, y_{n+1}) + \tfrac{251}{720} f(t_n, y_n) \right).
\end{align}
The coefficients b_j can be determined as follows. Use polynomial interpolation to find the polynomial ''p'' of degree s-1 such that
p(t_{n+i}) = f(t_{n+i}, y_{n+i}), \qquad \text{for } i = 0, \ldots, s-1.
The Lagrange formula for polynomial interpolation yields
p(t) = \sum_{j=0}^{s-1} \frac{(-1)^{s-j-1} f(t_{n+j}, y_{n+j})}{j! (s-j-1)! \, h^{s-1}} \prod_{i=0, \, i \ne j}^{s-1} (t - t_{n+i}).
The polynomial ''p'' is locally a good approximation of the right-hand side of the differential equation y' = f(t,y) that is to be solved, so consider the equation y' = p(t) instead. This equation can be solved exactly; the solution is simply the integral of ''p''. This suggests taking
y_{n+s} = y_{n+s-1} + \int_{t_{n+s-1}}^{t_{n+s}} p(t)\,\mathrm dt.
The Adams–Bashforth method arises when the formula for ''p'' is substituted. The coefficients b_j turn out to be given by
b_{s-j-1} = \frac{(-1)^j}{j!(s-j-1)!} \int_0^1 \prod_{i=0, \, i \ne j}^{s-1} (u+i) \,\mathrm du, \qquad \text{for } j = 0, \ldots, s-1.
Replacing f(t,y) by its interpolant ''p'' incurs an error of order ''h''^''s'', and it follows that the ''s''-step Adams–Bashforth method indeed has order ''s''.

The Adams–Bashforth methods were designed by John Couch Adams to solve a differential equation, posed by Francis Bashforth, modelling capillary action. Bashforth published his theory, together with Adams' numerical method, in 1883.
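The integral formula for the coefficients can be evaluated directly with a computer algebra system. The following sketch uses SymPy (assumed to be installed) and exact rational arithmetic:

from math import factorial
import sympy

def adams_bashforth_coefficients(s):
    """Return [b_0, ..., b_{s-1}] for the s-step Adams-Bashforth method."""
    u = sympy.symbols('u')
    b = [None] * s
    for j in range(s):
        # Integrand: product of (u + i) over i = 0, ..., s-1 with i != j.
        poly = sympy.prod([u + i for i in range(s) if i != j])
        integral = sympy.integrate(poly, (u, 0, 1))
        b[s - j - 1] = (-1)**j * integral / (factorial(j) * factorial(s - j - 1))
    return b

print(adams_bashforth_coefficients(3))
# prints [5/12, -4/3, 23/12], matching the three-step method above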


Adams–Moulton methods

The Adams–Moulton methods are similar to the Adams–Bashforth methods in that they also have a_{s-1} = -1 and a_{s-2} = \cdots = a_0 = 0. Again the ''b'' coefficients are chosen to obtain the highest order possible. However, the Adams–Moulton methods are implicit methods. By removing the restriction that b_s = 0, an ''s''-step Adams–Moulton method can reach order s+1, while an ''s''-step Adams–Bashforth method has only order ''s''. The Adams–Moulton methods with ''s'' = 0, 1, 2, 3, 4 are listed below, where the first two methods are the backward Euler method and the trapezoidal rule respectively:
\begin{align}
y_n &= y_{n-1} + h f(t_n, y_n), \\
y_{n+1} &= y_n + \tfrac{1}{2} h \left( f(t_{n+1}, y_{n+1}) + f(t_n, y_n) \right), \\
y_{n+2} &= y_{n+1} + h \left( \tfrac{5}{12} f(t_{n+2}, y_{n+2}) + \tfrac{8}{12} f(t_{n+1}, y_{n+1}) - \tfrac{1}{12} f(t_n, y_n) \right), \\
y_{n+3} &= y_{n+2} + h \left( \tfrac{9}{24} f(t_{n+3}, y_{n+3}) + \tfrac{19}{24} f(t_{n+2}, y_{n+2}) - \tfrac{5}{24} f(t_{n+1}, y_{n+1}) + \tfrac{1}{24} f(t_n, y_n) \right), \\
y_{n+4} &= y_{n+3} + h \left( \tfrac{251}{720} f(t_{n+4}, y_{n+4}) + \tfrac{646}{720} f(t_{n+3}, y_{n+3}) - \tfrac{264}{720} f(t_{n+2}, y_{n+2}) + \tfrac{106}{720} f(t_{n+1}, y_{n+1}) - \tfrac{19}{720} f(t_n, y_n) \right).
\end{align}
The derivation of the Adams–Moulton methods is similar to that of the Adams–Bashforth methods; however, the interpolating polynomial uses not only the points t_n, \ldots, t_{n+s-1}, as above, but also t_{n+s}. The coefficients are given by
b_{s-j} = \frac{(-1)^j}{j!(s-j)!} \int_0^1 \prod_{i=0, \, i \ne j}^{s} (u+i-1) \,\mathrm du, \qquad \text{for } j = 0, \ldots, s.

The Adams–Moulton methods are solely due to John Couch Adams, like the Adams–Bashforth methods. The name of Forest Ray Moulton became associated with these methods because he realized that they could be used in tandem with the Adams–Bashforth methods as a predictor–corrector pair, an observation he published in 1926; W. E. Milne had the same idea at about the same time. Adams used Newton's method to solve the implicit equation.
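For a scalar equation, a Newton iteration for the implicit trapezoidal step (''s'' = 1 above) can look as follows; here df denotes ∂f/∂y, and the tolerance and iteration cap are illustrative choices:

def trapezoidal_step(f, df, t, y, h, tol=1e-12, max_iter=20):
    """Solve y_new = y + h/2*(f(t+h, y_new) + f(t, y)) for y_new by Newton's method."""
    c = y + 0.5 * h * f(t, y)       # the part of the equation not involving y_new
    x = y + h * f(t, y)             # explicit Euler step as initial guess
    for _ in range(max_iter):
        residual = x - 0.5 * h * f(t + h, x) - c
        x_new = x - residual / (1.0 - 0.5 * h * df(t + h, x))
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

print(trapezoidal_step(lambda t, y: y, lambda t, y: 1.0, 0.0, 1.0, 0.5))
# prints 1.666..., the trapezoidal value (1 + h/2)/(1 - h/2) for y' = y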


Backward differentiation formulas (BDF)

The BDF methods are implicit methods with b_{s-1} = \cdots = b_0 = 0 and the other coefficients chosen such that the method attains order ''s'' (the maximum possible). These methods are especially used for the solution of stiff differential equations.
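For the linear test equation y' = \lambda y, the implicit equation of the two-step BDF (BDF2), y_{n+2} - \tfrac{4}{3} y_{n+1} + \tfrac{1}{3} y_n = \tfrac{2}{3} h f(t_{n+2}, y_{n+2}), can be solved in closed form, which gives a compact sketch of why BDF methods suit stiff problems (for a general f, a Newton iteration would replace the closed-form solve):

def bdf2_linear(lam, y0, h, steps):
    """Two-step BDF on y' = lam*y, started with one backward-Euler step."""
    ys = [y0, y0 / (1.0 - h * lam)]
    for k in range(1, steps):
        ys.append((4.0/3.0 * ys[k] - 1.0/3.0 * ys[k-1]) / (1.0 - 2.0/3.0 * h * lam))
    return ys

# With h*lam = -5 the explicit Euler method would diverge (|1 + h*lam| = 4 > 1),
# while the BDF2 iterates decay towards zero like the exact solution e^(lam*t):
print(bdf2_linear(lam=-50.0, y0=1.0, h=0.1, steps=5))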


Analysis

The central concepts in the analysis of linear multistep methods, and indeed any numerical method for differential equations, are convergence, order, and stability.


Consistency and order

The first question is whether the method is consistent: is the difference equation
\begin{align}
& a_s y_{n+s} + a_{s-1} y_{n+s-1} + a_{s-2} y_{n+s-2} + \cdots + a_0 y_n \\
& \qquad = h \bigl( b_s f(t_{n+s},y_{n+s}) + b_{s-1} f(t_{n+s-1},y_{n+s-1}) + \cdots + b_0 f(t_n,y_n) \bigr),
\end{align}
a good approximation of the differential equation y' = f(t,y)? More precisely, a multistep method is ''consistent'' if the local truncation error goes to zero faster than the step size ''h'' as ''h'' goes to zero, where the ''local truncation error'' is defined to be the difference between the result y_{n+s} of the method, assuming that all the previous values y_{n+s-1}, \ldots, y_n are exact, and the exact solution of the equation at time t_{n+s}. A computation using Taylor series shows that a linear multistep method is consistent if and only if
\sum_{k=0}^{s-1} a_k = -1 \quad\text{and}\quad \sum_{k=0}^s b_k = s + \sum_{k=0}^{s-1} k a_k.
All the methods mentioned above are consistent.

If the method is consistent, then the next question is how well the difference equation defining the numerical method approximates the differential equation. A multistep method is said to have ''order'' ''p'' if the local error is of order O(h^{p+1}) as ''h'' goes to zero. This is equivalent to the following condition on the coefficients of the methods:
\sum_{k=0}^{s-1} a_k = -1 \quad\text{and}\quad q \sum_{k=0}^s k^{q-1} b_k = s^q + \sum_{k=0}^{s-1} k^q a_k \quad \text{for } q = 1, \ldots, p.
The ''s''-step Adams–Bashforth method has order ''s'', while the ''s''-step Adams–Moulton method has order s+1.

These conditions are often formulated using the ''characteristic polynomials''
\rho(z) = z^s + \sum_{k=0}^{s-1} a_k z^k \quad\text{and}\quad \sigma(z) = \sum_{k=0}^s b_k z^k.
In terms of these polynomials, the above condition for the method to have order ''p'' becomes
\rho(e^h) - h\sigma(e^h) = O(h^{p+1}) \quad \text{as } h \to 0.
In particular, the method is consistent if it has order at least one, which is the case if \rho(1) = 0 and \rho'(1) = \sigma(1).
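These order conditions are easy to check mechanically. The sketch below (an illustrative helper, using exact rational arithmetic) returns the largest ''p'' for which the conditions hold, with the convention a_s = 1, a = [a_0, ..., a_{s-1}] and b = [b_0, ..., b_s]:

from fractions import Fraction

def method_order(a, b, max_p=10):
    """Largest p such that the order conditions hold (0 means inconsistent)."""
    s = len(a)
    if sum(a) != -1:
        return 0
    p = 0
    for q in range(1, max_p + 1):
        lhs = q * sum(Fraction(k)**(q - 1) * bk for k, bk in enumerate(b))
        rhs = Fraction(s)**q + sum(Fraction(k)**q * ak for k, ak in enumerate(a))
        if lhs != rhs:
            break
        p = q
    return p

# Two-step Adams-Bashforth: a_0 = 0, a_1 = -1, b = [-1/2, 3/2, 0].
print(method_order([Fraction(0), Fraction(-1)],
                   [Fraction(-1, 2), Fraction(3, 2), Fraction(0)]))   # prints 2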


Stability and convergence

The numerical solution of a one-step method depends on the initial condition y_0, but the numerical solution of an ''s''-step method depends on all the ''s'' starting values, y_0, y_1, \ldots, y_{s-1}. It is thus of interest whether the numerical solution is stable with respect to perturbations in the starting values. A linear multistep method is ''zero-stable'' for a certain differential equation on a given time interval if a perturbation in the starting values of size ε causes the numerical solution over that time interval to change by no more than ''K''ε for some value of ''K'' which does not depend on the step size ''h''. This is called "zero-stability" because it is enough to check the condition for the differential equation y' = 0.

If the roots of the characteristic polynomial ρ all have modulus less than or equal to 1 and the roots of modulus 1 are of multiplicity 1, we say that the ''root condition'' is satisfied. A linear multistep method is zero-stable if and only if the root condition is satisfied.

Now suppose that a consistent linear multistep method is applied to a sufficiently smooth differential equation and that the starting values y_1, \ldots, y_{s-1} all converge to the initial value y_0 as h \to 0. Then, the numerical solution converges to the exact solution as h \to 0 if and only if the method is zero-stable. This result is known as the ''Dahlquist equivalence theorem'', named after Germund Dahlquist; it is similar in spirit to the Lax equivalence theorem for finite difference methods. Furthermore, if the method has order ''p'', then the global error (the difference between the numerical solution and the exact solution at a fixed time) is O(h^p).

If the method is convergent, it is said to be ''strongly stable'' if z = 1 is the only root of modulus 1. If it is convergent, all roots of modulus 1 are simple, but there is more than one such root, it is said to be ''relatively stable''. Note that 1 must be a root of ρ for the method to be convergent; thus convergent methods are always one of these two.

To assess the performance of linear multistep methods on stiff equations, consider the linear test equation y' = \lambda y. A multistep method applied to this differential equation with step size ''h'' yields a linear recurrence relation with characteristic polynomial
\pi(z; h\lambda) = (1 - h\lambda b_s) z^s + \sum_{k=0}^{s-1} (a_k - h\lambda b_k) z^k = \rho(z) - h\lambda \sigma(z).
This polynomial is called the ''stability polynomial'' of the multistep method. If all of its roots have modulus less than one, then the numerical solution of the multistep method converges to zero and the multistep method is said to be ''absolutely stable'' for that value of ''h''λ. The method is said to be ''A-stable'' if it is absolutely stable for all ''h''λ with negative real part. The ''region of absolute stability'' is the set of all ''h''λ for which the multistep method is absolutely stable. For more details, see the section on stiff equations and multistep methods.
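Absolute stability for a given value of ''h''λ can thus be tested numerically by computing the roots of the stability polynomial. A sketch using NumPy, with a = [a_0, ..., a_{s-1}] and b = [b_0, ..., b_s] as before:

import numpy as np

def absolutely_stable(a, b, hlam):
    """True if all roots of pi(z; h*lambda) = rho(z) - h*lambda*sigma(z) have |z| < 1."""
    rho = np.array([1.0] + list(reversed(a)))            # coefficients of z^s, ..., z^0
    sigma = np.array(list(reversed(b)), dtype=complex)
    return bool(np.all(np.abs(np.roots(rho - hlam * sigma)) < 1.0))

# Two-step Adams-Bashforth; its interval of absolute stability is (-1, 0):
a, b = [0.0, -1.0], [-0.5, 1.5, 0.0]
print(absolutely_stable(a, b, -0.5))   # True
print(absolutely_stable(a, b, -1.5))   # False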


Example

Consider the Adams–Bashforth three-step method
y_{n+3} = y_{n+2} + h\left( \tfrac{23}{12} f(t_{n+2}, y_{n+2}) - \tfrac{16}{12} f(t_{n+1}, y_{n+1}) + \tfrac{5}{12} f(t_n, y_n) \right).
One characteristic polynomial is thus
\rho(z) = z^3 - z^2 = z^2(z-1),
which has roots z = 0, 1, and the conditions above are satisfied. As z = 1 is the only root of modulus 1, the method is strongly stable. The other characteristic polynomial is
\sigma(z) = \tfrac{23}{12} z^2 - \tfrac{16}{12} z + \tfrac{5}{12}.
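The root condition can also be verified numerically; a short sketch for this method (NumPy assumed):

import numpy as np

# rho(z) = z^3 - z^2; coefficients are listed from z^3 down to the constant term.
roots = np.roots([1.0, -1.0, 0.0, 0.0])
print(sorted(np.abs(roots)))          # [0.0, 0.0, 1.0]

# All roots have modulus <= 1 and the only root of modulus 1 (z = 1) is simple,
# so the root condition holds and the method is zero-stable.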


First and second Dahlquist barriers

These two results were proved by Germund Dahlquist and represent an important bound for the order of convergence and for the A-stability of a linear multistep method. The first Dahlquist barrier was proved by Dahlquist in 1956 and the second in 1963.


First Dahlquist barrier

The first Dahlquist barrier states that a zero-stable linear ''q''-step multistep method cannot attain an order of convergence greater than ''q'' + 1 if ''q'' is odd and greater than ''q'' + 2 if ''q'' is even. If the method is also explicit, then it cannot attain an order greater than ''q''.


Second Dahlquist barrier

The second Dahlquist barrier states that no explicit linear multistep method is A-stable. Furthermore, the maximal order of an (implicit) A-stable linear multistep method is 2. Among the A-stable linear multistep methods of order 2, the trapezoidal rule has the smallest error constant.


See also

* Digital energy gain



