Numerical methods for ordinary differential equations are methods used to find numerical approximations to the solutions of ordinary differential equations (ODEs). Their use is also known as "numerical integration", although this term can also refer to the computation of integrals.
Many differential equations cannot be solved exactly. For practical purposes, however – such as in engineering – a numeric approximation to the solution is often sufficient. The algorithms studied here can be used to compute such an approximation. An alternative method is to use techniques from calculus to obtain a series expansion of the solution.
Ordinary differential equations occur in many scientific disciplines, including physics, chemistry, biology, and economics. In addition, some methods in numerical partial differential equations convert the partial differential equation into an ordinary differential equation, which must then be solved.
The problem
A first-order differential equation is an initial value problem (IVP) of the form

: y′(t) = f(t, y(t)),    y(t_0) = y_0,    (1)

where f is a function f : [t_0, ∞) × R^d → R^d, and the initial condition y_0 ∈ R^d is a given vector. ''First-order'' means that only the first derivative of ''y'' appears in the equation, and higher derivatives are absent.
Without loss of generality for higher-order systems, we restrict ourselves to ''first-order'' differential equations, because a higher-order ODE can be converted into a larger system of first-order equations by introducing extra variables. For example, the second-order equation y″ = −y can be rewritten as two first-order equations: y′ = z and z′ = −y.
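As a concrete sketch of this reduction (written in Python purely for illustration; the function and variable names are ours, not from any standard library), the system for y″ = −y can be packed into a single vector-valued right-hand side:

```python
import numpy as np

def f(t, u):
    """Right-hand side of the first-order system equivalent to y'' = -y.

    The state vector is u = (y, z) with z = y', so the system reads
    y' = z and z' = -y.
    """
    y, z = u
    return np.array([z, -y])

# Initial conditions y(0) = 1, y'(0) = 0 packed into a single vector.
u0 = np.array([1.0, 0.0])
print(f(0.0, u0))  # [0., -1.]
```

Any of the methods described below can then be applied to this vector-valued f unchanged.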
In this section, we describe numerical methods for IVPs, and remark that ''boundary value problems'' (BVPs) require a different set of tools. In a BVP, one defines values, or components of the solution ''y'', at more than one point. Because of this, different methods need to be used to solve BVPs. For example, the shooting method (and its variants) or global methods like finite differences,[LeVeque, R. J. (2007). Finite difference methods for ordinary and partial differential equations: steady-state and time-dependent problems (Vol. 98). SIAM.] Galerkin methods, or collocation methods are appropriate for that class of problems.
The Picard–Lindelöf theorem states that there is a unique solution, provided ''f'' is Lipschitz-continuous.
Methods
Numerical methods for solving first-order IVPs often fall into one of two large categories: linear multistep methods, or Runge–Kutta methods. A further division can be realized by dividing methods into those that are explicit and those that are implicit. For example, implicit linear multistep methods include the Adams–Moulton methods and the backward differentiation formulas (BDF), whereas implicit Runge–Kutta methods include diagonally implicit Runge–Kutta (DIRK), singly diagonally implicit Runge–Kutta (SDIRK), and Gauss–Radau (based on Gaussian quadrature) methods. Explicit examples from the linear multistep family include the Adams–Bashforth methods, and any Runge–Kutta method with a strictly lower-triangular Butcher tableau is explicit (see the illustrative tableau below). A loose rule of thumb dictates that stiff differential equations require the use of implicit schemes, whereas non-stiff problems can be solved more efficiently with explicit schemes.
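For instance, the classical fourth-order Runge–Kutta method has the Butcher tableau shown below (a standard textbook example, reproduced here for illustration); its stage-coefficient matrix is strictly lower triangular, so each stage uses only stages already computed, which is exactly what makes the method explicit.

```latex
% Butcher tableau of the classical fourth-order Runge-Kutta method (RK4).
% The stage-coefficient matrix (above the horizontal line, right of the bar)
% is strictly lower triangular, so the method is explicit.
\[
\begin{array}{c|cccc}
0     &      &      &      &      \\
1/2   & 1/2  &      &      &      \\
1/2   & 0    & 1/2  &      &      \\
1     & 0    & 0    & 1    &      \\ \hline
      & 1/6  & 1/3  & 1/3  & 1/6
\end{array}
\]
```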
The so-called general linear methods (GLMs) are a generalization of the above two large classes of methods.
Euler method
From any point on a curve, you can find an approximation of a nearby point on the curve by moving a short distance along a line tangent to the curve.
Starting with the differential equation (1), we replace the derivative ''y''′ by the finite difference approximation

: y′(t) ≈ (y(t + h) − y(t)) / h,    (2)

which when re-arranged yields the following formula

: y(t + h) ≈ y(t) + h y′(t),

and using (1) gives:

: y(t + h) ≈ y(t) + h f(t, y(t)).    (3)

This formula is usually applied in the following way. We choose a step size h, and we construct the sequence of times t_n = t_0 + n h. We denote by y_n a numerical estimate of the exact solution y(t_n). Motivated by (3), we compute these estimates by the following recursive scheme

: y_{n+1} = y_n + h f(t_n, y_n).    (4)

This is the ''Euler method'' (or ''forward Euler method'', in contrast with the ''backward Euler method'', to be described below). The method is named after Leonhard Euler, who described it in 1768.

The Euler method is an example of an ''explicit'' method. This means that the new value y_{n+1} is defined in terms of things that are already known, like y_n.
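As a minimal illustration (a sketch only, with the function name and the test problem y′ = −2y, y(0) = 1 chosen by us rather than taken from any library), the recursion (4) can be coded as follows:

```python
import numpy as np

def forward_euler(f, t0, y0, h, n_steps):
    """Forward Euler: y_{n+1} = y_n + h * f(t_n, y_n)."""
    t, y = t0, np.asarray(y0, dtype=float)
    ts, ys = [t], [y]
    for _ in range(n_steps):
        y = y + h * f(t, y)  # explicit update: only already-known values on the right
        t = t + h
        ts.append(t)
        ys.append(y)
    return np.array(ts), np.array(ys)

# Test problem y' = -2y, y(0) = 1, whose exact solution is exp(-2t).
ts, ys = forward_euler(lambda t, y: -2.0 * y, 0.0, 1.0, 0.1, 10)
print(ys[-1], np.exp(-2.0 * ts[-1]))  # Euler estimate vs. exact value at t = 1
```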
Backward Euler method
If, instead of (2), we use the approximation

: y′(t) ≈ (y(t) − y(t − h)) / h,    (5)

we get the ''backward Euler method'':

: y_{n+1} = y_n + h f(t_{n+1}, y_{n+1}).    (6)

The backward Euler method is an ''implicit'' method, meaning that we have to solve an equation to find y_{n+1}. One often uses fixed-point iteration or (some modification of) the Newton–Raphson method to achieve this.

Solving this equation costs more time than evaluating an explicit formula; this cost must be taken into consideration when one selects the method to use. The advantage of implicit methods such as (6) is that they are usually more stable for solving a stiff equation, meaning that a larger step size ''h'' can be used.
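A minimal sketch of the backward Euler step (6) is shown below; it uses scipy's general-purpose root finder fsolve as a stand-in for the Newton-type solve mentioned above, and the stiff test problem y′ = −50(y − cos t) is our own choice for illustration:

```python
import numpy as np
from scipy.optimize import fsolve

def backward_euler(f, t0, y0, h, n_steps):
    """Backward Euler: at each step solve y_{n+1} = y_n + h * f(t_{n+1}, y_{n+1})."""
    t = t0
    y = np.atleast_1d(np.asarray(y0, dtype=float))
    ts, ys = [t], [y]
    for _ in range(n_steps):
        t_next = t + h

        def g(u, y=y, t_next=t_next):
            # Residual of the implicit equation: g(y_{n+1}) = 0.
            return u - y - h * f(t_next, u)

        y = fsolve(g, y + h * f(t, y))  # forward Euler predictor as the initial guess
        t = t_next
        ts.append(t)
        ys.append(y)
    return np.array(ts), np.array(ys)

# Stiff test problem y' = -50 (y - cos t); the implicit scheme stays stable at h = 0.1.
ts, ys = backward_euler(lambda t, y: -50.0 * (y - np.cos(t)), 0.0, [1.0], 0.1, 10)
print(ys[-1])
```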
First-order exponential integrator method
Exponential integrators describe a large class of integrators that have recently seen a lot of development. They date back to at least the 1960s.
In place of (1), we assume the differential equation is either of the form

: y′(t) = −A y(t) + N(y(t)),    (7)

or it has been locally linearized about a background state to produce a linear term −A y and a nonlinear term N(y(t)). Exponential integrators are constructed by multiplying (7) by e^{At} and exactly integrating the result over a time interval [t_n, t_{n+1} = t_n + h].
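Although the derivation is only sketched here, the simplest first-order member of this family (often called the exponential Euler method) freezes N at y_n and propagates the linear part exactly, giving y_{n+1} = e^{−hA} y_n + A^{−1}(I − e^{−hA}) N(y_n) for constant, invertible A. The snippet below is our own illustration of that one-step formula, not a library routine; it assumes A is a constant matrix and uses scipy's matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

def exponential_euler(A, N, y0, h, n_steps):
    """First-order exponential integrator for y' = -A y + N(y), A constant and invertible.

    One step: y_{n+1} = e^{-hA} y_n + A^{-1} (I - e^{-hA}) N(y_n);
    the linear part is integrated exactly, and N is frozen at y_n.
    """
    E = expm(-h * A)                                  # e^{-hA}
    phi = np.linalg.solve(A, np.eye(A.shape[0]) - E)  # A^{-1} (I - e^{-hA})
    y = np.asarray(y0, dtype=float)
    ys = [y]
    for _ in range(n_steps):
        y = E @ y + phi @ N(y)
        ys.append(y)
    return np.array(ys)

# Illustrative semilinear problem: a stiff linear part and a mild nonlinearity.
A = np.array([[50.0, 0.0], [0.0, 1.0]])
N = lambda y: np.array([np.sin(y[1]), 0.1 * y[0]])
ys = exponential_euler(A, N, np.array([1.0, 1.0]), 0.1, 10)
print(ys[-1])
```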