
Numerical methods for ordinary differential equations are methods used to find
numerical approximations to the solutions of
ordinary differential equations (ODEs). Their use is also known as "
numerical integration", although this term can also refer to the computation of
integrals.
Many differential equations cannot be solved exactly. For practical purposes, however – such as in engineering – a numeric approximation to the solution is often sufficient. The
algorithms studied here can be used to compute such an approximation. An alternative method is to use techniques from
calculus to obtain a
series expansion of the solution.
Ordinary differential equations occur in many scientific disciplines, including
physics,
chemistry, biology, and
economics. In addition, some methods in
numerical partial differential equations convert the
partial differential equation
into an ordinary differential equation, which must then be solved.
The problem
A first-order ''initial value problem'' (IVP) has the form

$$y'(t) = f(t, y(t)), \qquad y(t_0) = y_0, \qquad (1)$$

where $f$ is a function $f : [t_0, \infty) \times \mathbb{R}^d \to \mathbb{R}^d$ and the initial condition $y_0 \in \mathbb{R}^d$ is a given vector. ''First-order'' means that only the first derivative of ''y'' appears in the equation, and higher derivatives are absent.

Without loss of generality to higher-order systems, we restrict ourselves to ''first-order'' differential equations, because a higher-order ODE can be converted into a larger system of first-order equations by introducing extra variables. For example, the second-order equation $y'' = -y$ can be rewritten as two first-order equations: $y' = z$ and $z' = -y$.
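As an illustration, here is a minimal Python sketch of this order reduction for $y'' = -y$; the function name `second_order_as_system` and the state ordering $(y, z)$ are our own illustrative choices, not notation from the text.

```python
import numpy as np

def second_order_as_system(t, u):
    """Right-hand side of the first-order system equivalent to y'' = -y.

    The state is u = (y, z) with z = y', so u' = (y', z') = (z, -y).
    Any first-order IVP solver (such as the Euler methods below) can be
    applied directly to this function.
    """
    y, z = u
    return np.array([z, -y])
```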
In this section, we describe numerical methods for IVPs, and remark that ''boundary value problems'' (BVPs) require a different set of tools. In a BVP, one specifies values, or components, of the solution ''y'' at more than one point, so different methods are needed. For example, the
shooting method
(and its variants) or global methods like
finite differences,
[LeVeque, R. J. (2007). Finite difference methods for ordinary and partial differential equations: steady-state and time-dependent problems (Vol. 98). SIAM.] Galerkin methods, or
collocation methods are appropriate for that class of problems.
The
Picard–Lindelöf theorem states that there is a unique solution, provided ''f'' is
Lipschitz-continuous.
Methods
Numerical methods for solving first-order IVPs often fall into one of two large categories:
linear multistep methods, or
Runge–Kutta methods. A further division can be made between methods that are explicit and those that are implicit. For example, implicit
linear multistep methods include
Adams–Moulton methods, and
backward differentiation formulas (BDF), whereas
implicit Runge–Kutta methods include diagonally implicit Runge–Kutta (DIRK), singly diagonally implicit Runge–Kutta (SDIRK), and Gauss–Radau (based on
Gaussian quadrature) numerical methods. Explicit examples from the
linear multistep family include the
Adams–Bashforth methods, and any Runge–Kutta method whose
Butcher tableau has a strictly lower triangular coefficient matrix is
explicit. A loose rule of thumb dictates that
stiff differential equations require the use of implicit schemes, whereas non-stiff problems can be solved more efficiently with explicit schemes.
The so-called
general linear methods (GLMs) are a generalization of the above two large classes of methods.
Euler method
From any point on a curve, you can find an approximation of a nearby point on the curve by moving a short distance along a line
tangent to the curve.
Starting with the differential equation (1), we replace the derivative ''y''′ by the finite difference approximation

$$y'(t) \approx \frac{y(t+h) - y(t)}{h}, \qquad (2)$$

which, when re-arranged, yields the formula

$$y(t+h) \approx y(t) + h\,y'(t),$$

and using (1) gives:

$$y(t+h) \approx y(t) + h\,f(t, y(t)). \qquad (3)$$

This formula is usually applied in the following way. We choose a step size ''h'', and we construct the sequence of times $t_0,\ t_1 = t_0 + h,\ t_2 = t_0 + 2h,\ \dots$ We denote by $y_n$ a numerical estimate of the exact solution $y(t_n)$. Motivated by (3), we compute these estimates by the following recursive scheme

$$y_{n+1} = y_n + h\,f(t_n, y_n). \qquad (4)$$
This is the ''
Euler method'' (or ''
forward Euler method
'', in contrast with the ''backward Euler method'', to be described below). The method is named after
Leonhard Euler, who described it in 1768.
The Euler method is an example of an
''explicit'' method. This means that the new value $y_{n+1}$ is defined in terms of things that are already known, like $y_n$.
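A minimal Python sketch of the explicit scheme (4) is given below; the helper name `euler`, the returned arrays, and the test problem $y' = -y$ are illustrative choices rather than anything prescribed by the method itself.

```python
import numpy as np

def euler(f, t0, y0, h, n_steps):
    """Forward (explicit) Euler: y_{n+1} = y_n + h * f(t_n, y_n)."""
    t = t0
    y = np.asarray(y0, dtype=float)
    ts, ys = [t], [y]
    for _ in range(n_steps):
        y = y + h * f(t, y)   # only already-known quantities appear on the right
        t = t + h
        ts.append(t)
        ys.append(y)
    return np.array(ts), np.array(ys)

# Example: y' = -y with y(0) = 1, whose exact solution is exp(-t).
ts, ys = euler(lambda t, y: -y, t0=0.0, y0=1.0, h=0.1, n_steps=50)
```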
Backward Euler method
If, instead of (2), we use the approximation

$$y'(t) \approx \frac{y(t) - y(t-h)}{h},$$

we get the ''backward Euler method'':

$$y_{n+1} = y_n + h\,f(t_{n+1}, y_{n+1}). \qquad (5)$$
The backward Euler method is an
''implicit'' method, meaning that we have to solve an equation to find $y_{n+1}$. One often uses
fixed-point iteration
or (some modification of) the
Newton–Raphson method to achieve this.
Solving this equation costs more time per step than an explicit update does; this cost must be taken into consideration when one selects the method to use. The advantage of implicit methods such as (5) is that they are usually more stable for solving a
stiff equation, meaning that a larger step size ''h'' can be used.
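The sketch below implements the backward Euler step (5), resolving the implicit equation by fixed-point iteration seeded with a forward Euler predictor; the function name, iteration limit, and tolerance are illustrative assumptions, and a Newton-type iteration would normally replace the fixed-point loop for strongly stiff problems.

```python
import numpy as np

def backward_euler(f, t0, y0, h, n_steps, max_iters=50, tol=1e-12):
    """Backward (implicit) Euler: solve y_{n+1} = y_n + h * f(t_{n+1}, y_{n+1})."""
    t = t0
    y = np.asarray(y0, dtype=float)
    ts, ys = [t], [y]
    for _ in range(n_steps):
        t_next = t + h
        y_next = y + h * f(t, y)            # predictor: one explicit Euler step
        for _ in range(max_iters):          # fixed-point iteration on the implicit equation
            y_new = y + h * f(t_next, y_next)
            if np.max(np.abs(y_new - y_next)) < tol:
                y_next = y_new
                break
            y_next = y_new
        t, y = t_next, y_next
        ts.append(t)
        ys.append(y)
    return np.array(ts), np.array(ys)
```

Note that plain fixed-point iteration only converges when ''h'' times the Lipschitz constant of ''f'' is small enough, which is one reason Newton-type iterations are preferred for stiff problems.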
First-order exponential integrator method
Exponential integrators form a large class of integrators that have recently seen a lot of development.
[This is a modern and extensive review paper for exponential integrators.] They date back to at least the 1960s.
In place of (1), we assume the differential equation is either of the form

$$y' = -A\,y + \mathcal{N}(y), \qquad (6)$$

or it has been locally linearized about a background state to produce a linear term $-A\,y$ and a nonlinear term $\mathcal{N}(y)$.
Exponential integrators are constructed by multiplying (6) by $e^{A t}$, and exactly integrating the result over
a time interval $[t_n, t_{n+1} = t_n + h]$.
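As a sketch of where this construction leads, freezing the nonlinear term at $\mathcal{N}(y_n)$ over one step and integrating the linear part exactly gives the simplest member of the family, often called exponential Euler: $y_{n+1} = e^{-Ah}\,y_n + A^{-1}\bigl(I - e^{-Ah}\bigr)\,\mathcal{N}(y_n)$. The Python code below is an illustrative implementation of that one-step formula (not taken from the text); it assumes $A$ is an invertible matrix and uses SciPy's matrix exponential.

```python
import numpy as np
from scipy.linalg import expm, solve

def exponential_euler(A, N, y0, h, n_steps):
    """First-order exponential integrator for y' = -A y + N(y).

    Precomputes E = e^{-Ah} and phi = A^{-1} (I - e^{-Ah}) once, then
    applies y_{n+1} = E y_n + phi N(y_n) at every step; the linear part
    is treated exactly, and only the nonlinear term is approximated.
    """
    A = np.asarray(A, dtype=float)
    E = expm(-h * A)                            # matrix exponential e^{-Ah}
    phi = solve(A, np.eye(A.shape[0]) - E)      # A^{-1}(I - e^{-Ah}); assumes A invertible
    y = np.asarray(y0, dtype=float)
    ys = [y]
    for _ in range(n_steps):
        y = E @ y + phi @ N(y)
        ys.append(y)
    return np.array(ys)
```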