Exponential Integrator

Exponential integrators are a class of numerical methods for the solution of ordinary differential equations, specifically initial value problems. This large class of methods from numerical analysis is based on the exact integration of the linear part of the initial value problem. Because the linear part is integrated exactly, this can help to mitigate the stiffness of a differential equation. Exponential integrators can be constructed to be explicit or implicit for numerical ordinary differential equations or serve as the time integrator for numerical partial differential equations.


Background

Dating back to at least the 1960s, these methods were recognized by Certaine and Pope. More recently, exponential integrators have become an active area of research; see Hochbruck and Ostermann (2010). Originally developed for solving stiff differential equations, the methods have been used to solve partial differential equations, including hyperbolic as well as parabolic problems such as the heat equation.


Introduction

We consider initial value problems of the form

: u'(t) = L u(t) + N(u(t)), \qquad u(t_0) = u_0, \qquad \qquad (1)

where L is composed of linear terms, and N is composed of the non-linear terms. These problems can come from a more typical initial value problem

: u'(t) = f(u(t)), \qquad u(t_0) = u_0,

after linearizing locally about a fixed or local state u^*:

: L = \frac{\partial f}{\partial u}(u^*); \qquad N = f(u) - L u.

Here, \frac{\partial f}{\partial u} refers to the partial derivative of f with respect to u (the Jacobian of f).

Exact integration of this problem from time 0 to a later time t can be performed using matrix exponentials to define an integral equation for the exact solution:

: u(t) = e^{L t} u_0 + \int_{0}^{t} e^{L (t - \tau)} N\left( u(\tau) \right)\, d\tau. \qquad (2)

This is similar to the exact integral used in the Picard–Lindelöf theorem. In the case of N \equiv 0, this formulation is the exact solution of the linear differential equation.

Numerical methods require a discretization of equation (2). They can be based on Runge–Kutta discretizations, linear multistep methods or a variety of other options.
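To make the splitting concrete, the following is a minimal sketch (Python with NumPy/SciPy; the dense-matrix setting and the helper names phi1 and exponential_euler_step are illustrative assumptions, not part of any particular library) of the simplest discretization of equation (2): freezing N(u(\tau)) \approx N(u_n) over one step gives the exponential Euler scheme u_{n+1} = e^{h L} u_n + h \varphi_1(h L) N(u_n).

```python
import numpy as np
from scipy.linalg import expm

def phi1(Z):
    # phi_1(Z) = Z^{-1} (e^Z - I).  Assumes Z is invertible and not too
    # small in norm; see the ETDRK4 section for the cancellation issue
    # that a naive evaluation suffers for small arguments.
    return np.linalg.solve(Z, expm(Z) - np.eye(Z.shape[0]))

def exponential_euler_step(u, h, L, N):
    # One step of the exponential Euler method: the linear part L u is
    # propagated exactly via the matrix exponential, while the
    # nonlinearity N(u) is frozen at its value at the start of the step.
    Z = h * L
    return expm(Z) @ u + h * (phi1(Z) @ N(u))
```

Because the stiff linear part is treated exactly, the admissible step size of such a scheme is restricted by the nonlinearity rather than by the stiff eigenvalues of L.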


Exponential Rosenbrock methods

Exponential Rosenbrock methods were shown to be very efficient in solving large systems of stiff ordinary differential equations, usually resulting from the spatial discretization of time dependent (parabolic) PDEs. These integrators are constructed based on a continuous linearization of (1) along the numerical solution u_n:

: u'(t) = L_n u(t) + N_n(u(t)), \qquad u(t_0) = u_0, \qquad (3)

where L_n = \frac{\partial f}{\partial u}(u_n) and N_n(u) = f(u) - L_n u. This procedure enjoys the advantage, in each step, that \frac{\partial N_n}{\partial u}(u_n) = 0. This considerably simplifies the derivation of the order conditions and improves the stability when integrating the nonlinearity N(u(t)). Again, applying the variation-of-constants formula (2) gives the exact solution at time t_{n+1} as

: u(t_{n+1}) = e^{h_n L_n} u(t_n) + \int_{0}^{h_n} e^{(h_n - \tau) L_n} N_n\left( u(t_n + \tau) \right)\, d\tau. \qquad (4)

The idea now is to approximate the integral in (4) by some quadrature rule with nodes c_i and weights b_i(h_n L_n) (1 \leq i \leq s). This yields the following class of s-stage explicit exponential Rosenbrock methods, see Hochbruck and Ostermann (2006), Hochbruck, Ostermann and Schweitzer (2009):

: U_{ni} = e^{c_i h_n L_n} u_n + h_n \sum_{j=1}^{i-1} a_{ij}(h_n L_n)\, N_n(U_{nj}),

: u_{n+1} = e^{h_n L_n} u_n + h_n \sum_{i=1}^{s} b_i(h_n L_n)\, N_n(U_{ni})

with u_n \approx u(t_n), U_{ni} \approx u(t_n + c_i h_n) and h_n = t_{n+1} - t_n. The coefficients a_{ij}(z), b_i(z) are usually chosen as linear combinations of the entire functions \varphi_k(c_i z), \varphi_k(z), respectively, where

: \varphi_0(z) = e^z, \quad \varphi_k(z) = \int_{0}^{1} e^{(1-\theta) z} \frac{\theta^{k-1}}{(k-1)!}\, d\theta, \quad k \geq 1.

These functions satisfy the recursion relation

: \varphi_{k+1}(z) = \frac{\varphi_k(z) - \frac{1}{k!}}{z}, \quad k \geq 0.

By introducing the difference D_{ni} = N_n(U_{ni}) - N_n(u_n), the methods can be reformulated in a more efficient way for implementation as

: U_{ni} = u_n + c_i h_n \varphi_1(c_i h_n L_n) f(u_n) + h_n \sum_{j=2}^{i-1} a_{ij}(h_n L_n)\, D_{nj},

: u_{n+1} = u_n + h_n \varphi_1(h_n L_n) f(u_n) + h_n \sum_{i=2}^{s} b_i(h_n L_n)\, D_{ni}.

In order to implement this scheme with adaptive step sizes, one can consider, for the purpose of local error estimation, the following embedded methods

: \bar{u}_{n+1} = u_n + h_n \varphi_1(h_n L_n) f(u_n) + h_n \sum_{i=2}^{s} \bar{b}_i(h_n L_n)\, D_{ni},

which use the same stages U_{ni} but with different weights \bar{b}_i.

For convenience, the coefficients of the explicit exponential Rosenbrock methods together with their embedded methods can be represented by the so-called reduced Butcher tableau:

: \begin{array}{c|ccccc}
c_2 & & & & & \\
c_3 & a_{32} & & & & \\
\vdots & \vdots & & \ddots & & \\
c_s & a_{s2} & a_{s3} & \cdots & a_{s,s-1} & \\
\hline
 & b_2 & b_3 & \cdots & b_{s-1} & b_s \\
 & \bar{b}_2 & \bar{b}_3 & \cdots & \bar{b}_{s-1} & \bar{b}_s
\end{array}
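The coefficients in such a tableau are built from the \varphi functions introduced above. As a sketch of how they can be evaluated for a matrix argument, the recursion can be applied directly (Python/NumPy; the helper name phi_functions is illustrative, and a naive dense implementation is assumed — production codes use more robust algorithms, since the recursion loses accuracy for arguments of small norm):

```python
import numpy as np
from math import factorial
from scipy.linalg import expm

def phi_functions(Z, p):
    # Returns [phi_0(Z), phi_1(Z), ..., phi_p(Z)] using
    #   phi_0(Z) = e^Z,   phi_{k+1}(Z) = Z^{-1} (phi_k(Z) - I / k!).
    # Assumes Z is invertible; accuracy degrades when ||Z|| is small.
    I = np.eye(Z.shape[0])
    phis = [expm(Z)]
    for k in range(p):
        phis.append(np.linalg.solve(Z, phis[k] - I / factorial(k)))
    return phis
```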


Stiff order conditions

Moreover, it is shown in Luan and Ostermann (2014a) that the reformulation approach offers a new and simple way to analyze the local errors and thus to derive the stiff order conditions for exponential Rosenbrock methods up to order 5. With the help of this new technique together with an extension of the B-series concept, a theory for deriving the stiff order conditions for exponential Rosenbrock integrators of arbitrary order was finally given in Luan and Ostermann (2013). As an example, in that work the stiff order conditions for exponential Rosenbrock methods up to order 6 have been derived, which are stated in the following table:

: \begin{array}{|c|c|c|}
\hline
\text{No.} & \text{Stiff order condition} & \text{Order} \\
\hline
1 & \displaystyle\sum_{i=2}^{s} b_i(Z)\, c_i^2 = 2\varphi_3(Z) & 3 \\
\hline
2 & \displaystyle\sum_{i=2}^{s} b_i(Z)\, c_i^3 = 6\varphi_4(Z) & 4 \\
\hline
3 & \displaystyle\sum_{i=2}^{s} b_i(Z)\, c_i^4 = 24\varphi_5(Z) & 5 \\
4 & \displaystyle\sum_{i=2}^{s} b_i(Z)\, c_i K\, \psi_{3,i}(Z) = 0 & 5 \\
\hline
5 & \displaystyle\sum_{i=2}^{s} b_i(Z)\, c_i^5 = 120\varphi_6(Z) & 6 \\
6 & \displaystyle\sum_{i=2}^{s} b_i(Z)\, c_i^2 M\, \psi_{3,i}(Z) = 0 & 6 \\
7 & \displaystyle\sum_{i=2}^{s} b_i(Z)\, c_i K\, \psi_{4,i}(Z) = 0 & 6 \\
\hline
\end{array}

Here Z, K, M denote arbitrary square matrices.


Convergence analysis

The stability and convergence results for exponential Rosenbrock methods are proved in the framework of strongly continuous semigroups in some Banach space.


Examples

All the schemes presented below fulfill the stiff order conditions and thus are also suitable for solving stiff problems.


Second-order method

The simplest exponential Rosenbrock method is the exponential Rosenbrock–Euler scheme, which has order 2, see, for example, Hochbruck et al. (2009):

: u_{n+1} = u_n + h_n \varphi_1(h_n L_n) f(u_n).
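A minimal sketch of one exponential Rosenbrock–Euler step (Python/NumPy; f and jac are user-supplied callables for the right-hand side and its Jacobian, and phi1 is the illustrative helper from the introduction):

```python
def rosenbrock_euler_step(u, h, f, jac):
    # u_{n+1} = u_n + h * phi_1(h L_n) f(u_n) with L_n = df/du(u_n):
    # the problem is re-linearized about the current numerical solution
    # at every step.
    Ln = jac(u)
    return u + h * (phi1(h * Ln) @ f(u))
```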


Third-order methods

A class of third-order exponential Rosenbrock methods was derived in Hochbruck et al. (2009). One such scheme, called exprb32, has the reduced Butcher tableau

: \begin{array}{c|c}
1 & \\
\hline
 & 2\varphi_3 \\
 & 0
\end{array}

which reads as

: U_{n2} = u_n + h_n \varphi_1(h_n L_n) f(u_n),

: u_{n+1} = u_n + h_n \varphi_1(h_n L_n) f(u_n) + h_n\, 2\varphi_3(h_n L_n) D_{n2},

where D_{n2} = N_n(U_{n2}) - N_n(u_n).

For a variable step size implementation of this scheme, one can embed it with the exponential Rosenbrock–Euler scheme:

: \hat{u}_{n+1} = u_n + h_n \varphi_1(h_n L_n) f(u_n).
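A sketch of one exprb32 step with the embedded exponential Rosenbrock–Euler solution used as a local error estimate (Python/NumPy; phi_functions is the illustrative helper sketched earlier, and the function name exprb32_step is an assumption, not a library routine):

```python
import numpy as np

def exprb32_step(u, h, f, jac):
    # Continuous linearization about the current solution u_n.
    Ln = jac(u)
    Nn = lambda v: f(v) - Ln @ v
    Z = h * Ln
    _, phi1Z, _, phi3Z = phi_functions(Z, 3)

    # Since c_2 = 1, the single stage coincides with the second-order
    # embedded (Rosenbrock-Euler) solution.
    u_hat = u + h * (phi1Z @ f(u))            # order 2 (embedded)
    D2 = Nn(u_hat) - Nn(u)
    u_new = u_hat + h * (2.0 * (phi3Z @ D2))  # order 3
    err_est = np.linalg.norm(u_new - u_hat)   # drives step size control
    return u_new, err_est
```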


Fourth-order ETDRK4 method of Cox and Matthews

Cox and Matthews describe a fourth-order exponential time differencing (ETD) method that they used Maple to derive.

We use their notation, and assume that the unknown function is u, and that we have a known solution u_n at time t_n. Furthermore, we will make explicit use of a possibly time dependent right hand side: \mathcal{N} = \mathcal{N}(u, t).

Three stage values are first constructed:

: a_n = e^{L h/2} u_n + L^{-1} \left( e^{L h/2} - I \right) \mathcal{N}( u_n, t_n )

: b_n = e^{L h/2} u_n + L^{-1} \left( e^{L h/2} - I \right) \mathcal{N}( a_n, t_n + h/2 )

: c_n = e^{L h/2} a_n + L^{-1} \left( e^{L h/2} - I \right) \left( 2 \mathcal{N}( b_n, t_n + h/2 ) - \mathcal{N}( u_n, t_n ) \right)

The final update is given by

: u_{n+1} = e^{L h} u_n + h^{-2} L^{-3} \left\{ \left[ -4 - Lh + e^{Lh} \left( 4 - 3Lh + (Lh)^2 \right) \right] \mathcal{N}( u_n, t_n ) + 2 \left[ 2 + Lh + e^{Lh} \left( -2 + Lh \right) \right] \left( \mathcal{N}( a_n, t_n + h/2 ) + \mathcal{N}( b_n, t_n + h/2 ) \right) + \left[ -4 - 3Lh - (Lh)^2 + e^{Lh} \left( 4 - Lh \right) \right] \mathcal{N}( c_n, t_n + h ) \right\}.

If implemented naively, the above algorithm suffers from numerical instabilities due to floating point round-off errors. To see why, consider the first function,

: \varphi_1( z ) = \frac{e^z - 1}{z},

which is present in the first-order Euler method, as well as all three stages of ETDRK4. For small values of z, this function suffers from numerical cancellation errors. However, these numerical issues can be avoided by evaluating the \varphi_1 function via a contour integral approach or by a Padé approximant.
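As an illustration of the contour integral remedy (in the spirit of Kassam and Trefethen's treatment of ETDRK4), the sketch below evaluates \varphi_1 for scalar or elementwise array arguments, as they arise for a diagonal L after a Fourier transform, by averaging over points on a circle enclosing the argument; the function name, number of quadrature points and contour radius are illustrative choices:

```python
import numpy as np

def phi1_contour(z, M=32, radius=1.0):
    # Evaluate phi_1(z) = (e^z - 1)/z by the mean of phi_1 over a circular
    # contour centred at z.  Since phi_1 is entire, the mean value property
    # gives the exact value, but every evaluation point is O(1) away from
    # the origin, so the cancellation for small |z| is avoided.
    theta = np.exp(2j * np.pi * (np.arange(M) + 0.5) / M)  # contour points
    zc = np.asarray(z)[..., None] + radius * theta          # shifted contour
    return np.real(np.mean((np.exp(zc) - 1.0) / zc, axis=-1))
```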


Applications

Exponential integrators are used for the simulation of stiff scenarios in scientific and visual computing, for example in molecular dynamics, for VLSI circuit simulation, and in computer graphics. They are also applied in the context of hybrid Monte Carlo methods. In these applications, exponential integrators show the advantage of large time stepping capability and high accuracy. To accelerate the evaluation of matrix functions in such complex scenarios, exponential integrators are often combined with Krylov subspace projection methods.
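A sketch of the Krylov idea (Python/NumPy; a plain Arnoldi iteration with a fixed subspace dimension, reusing the illustrative dense phi1 helper from the introduction): the action \varphi_1(hA)b is approximated from a small Krylov subspace, so only matrix-vector products with A are needed and \varphi_1 itself is evaluated only for a small m-by-m matrix.

```python
import numpy as np

def phi1_krylov(A_mv, b, h, m=30):
    # Approximate phi_1(h A) b.  Build the Arnoldi decomposition
    #   A V_m = V_m H_m + h_{m+1,m} v_{m+1} e_m^T,
    # then use  phi_1(h A) b  ~=  ||b|| * V_m phi_1(h H_m) e_1.
    # A_mv(v) must return A @ v; no explicit matrix A is required.
    n = b.shape[0]
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    V[:, 0] = b / beta
    k = m
    for j in range(m):
        w = A_mv(V[:, j])
        for i in range(j + 1):                  # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:                 # happy breakdown
            k = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(k)
    e1[0] = 1.0
    return beta * (V[:, :k] @ (phi1(h * H[:k, :k]) @ e1))
```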


See also

* General linear methods




External links


* Exponential integrators on GPGPUs
* Code for a meshfree exponential integrator