Runge–Kutta Methods

In numerical analysis, the Runge–Kutta methods are a family of implicit and explicit iterative methods, which include the Euler method, used in temporal discretization for the approximate solutions of ordinary differential equations. These methods were developed around 1900 by the German mathematicians Carl Runge and Wilhelm Kutta.


The Runge–Kutta method

The most widely known member of the Runge–Kutta family is generally referred to as "RK4", the "classic Runge–Kutta method" or simply as "the Runge–Kutta method". Let an initial value problem be specified as follows:

: \frac{dy}{dt} = f(t, y), \quad y(t_0) = y_0.

Here y is an unknown function (scalar or vector) of time t, which we would like to approximate; we are told that \frac{dy}{dt}, the rate at which y changes, is a function of t and of y itself. At the initial time t_0 the corresponding y value is y_0. The function f and the initial conditions t_0, y_0 are given. Now we pick a step-size h > 0 and define

: \begin{align}
y_{n+1} &= y_n + \frac{1}{6}\left(k_1 + 2k_2 + 2k_3 + k_4\right)h, \\
t_{n+1} &= t_n + h
\end{align}

for n = 0, 1, 2, 3, ..., using

: \begin{align}
k_1 &= f(t_n, y_n), \\
k_2 &= f\!\left(t_n + \frac{h}{2},\ y_n + h\frac{k_1}{2}\right), \\
k_3 &= f\!\left(t_n + \frac{h}{2},\ y_n + h\frac{k_2}{2}\right), \\
k_4 &= f\!\left(t_n + h,\ y_n + h k_3\right).
\end{align}

: ''(Note: the above equations have different but equivalent definitions in different texts; some texts leave out the factor h in the definition of the stages, and some use the y values themselves as stages.)''

Here y_{n+1} is the RK4 approximation of y(t_{n+1}), and the next value (y_{n+1}) is determined by the present value (y_n) plus the weighted average of four increments, where each increment is the product of the size of the interval, h, and an estimated slope specified by the function f on the right-hand side of the differential equation.

* k_1 is the slope at the beginning of the interval, using y (Euler's method);
* k_2 is the slope at the midpoint of the interval, using y and k_1;
* k_3 is again the slope at the midpoint, but now using y and k_2;
* k_4 is the slope at the end of the interval, using y and k_3.

In averaging the four slopes, greater weight is given to the slopes at the midpoint. If f is independent of y, so that the differential equation is equivalent to a simple integral, then RK4 is Simpson's rule.

The RK4 method is a fourth-order method, meaning that the local truncation error is on the order of O(h^5), while the total accumulated error is on the order of O(h^4).

In many practical applications the function f is independent of t (a so-called autonomous or time-invariant system, common in physics); in that case the time arguments need not be passed to f, and only the update formula for t_{n+1} is retained.
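The update rule translates directly into code. The following Python sketch is an illustration only; the names rk4_step and rk4_solve are arbitrary choices, not a standard API.

    def rk4_step(f, t, y, h):
        """One step of the classic fourth-order Runge-Kutta method."""
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        # Weighted average of the four slopes; the midpoint slopes count double.
        return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

    def rk4_solve(f, t0, y0, h, n_steps):
        """Apply rk4_step repeatedly, starting from (t0, y0)."""
        t, y = t0, y0
        for _ in range(n_steps):
            y = rk4_step(f, t, y, h)
            t = t + h
        return t, y

    # Example: dy/dt = y with y(0) = 1, whose exact solution is e^t.
    t, y = rk4_solve(lambda t, y: y, 0.0, 1.0, 0.1, 10)
    print(t, y)  # y should be close to e = 2.71828...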


Explicit Runge–Kutta methods

The family of explicit Runge–Kutta methods is a generalization of the RK4 method mentioned above. It is given by

: y_{n+1} = y_n + h \sum_{i=1}^s b_i k_i,

where

: \begin{align}
k_1 & = f(t_n, y_n), \\
k_2 & = f(t_n + c_2 h,\ y_n + (a_{21} k_1) h), \\
k_3 & = f(t_n + c_3 h,\ y_n + (a_{31} k_1 + a_{32} k_2) h), \\
& \ \ \vdots \\
k_s & = f(t_n + c_s h,\ y_n + (a_{s1} k_1 + a_{s2} k_2 + \cdots + a_{s,s-1} k_{s-1}) h).
\end{align}

: ''(Note: the above equations may have different but equivalent definitions in some texts.)''

To specify a particular method, one needs to provide the integer s (the number of stages), and the coefficients a_{ij} (for 1 ≤ j < i ≤ s), b_i (for i = 1, 2, ..., s) and c_i (for i = 2, 3, ..., s). The matrix [a_{ij}] is called the ''Runge–Kutta matrix'', while the b_i and c_i are known as the ''weights'' and the ''nodes''. These data are usually arranged in a mnemonic device, known as a ''Butcher tableau'' (after John C. Butcher):

: \begin{array}{c|ccccc}
0 & & & & & \\
c_2 & a_{21} & & & & \\
c_3 & a_{31} & a_{32} & & & \\
\vdots & \vdots & & \ddots & & \\
c_s & a_{s1} & a_{s2} & \cdots & a_{s,s-1} & \\
\hline
& b_1 & b_2 & \cdots & b_{s-1} & b_s
\end{array}

A Taylor series expansion shows that the Runge–Kutta method is consistent if and only if

: \sum_{i=1}^{s} b_i = 1.

There are also accompanying requirements if one requires the method to have a certain order p, meaning that the local truncation error is O(h^{p+1}). These can be derived from the definition of the truncation error itself. For example, a two-stage method has order 2 if b_1 + b_2 = 1, b_2 c_2 = 1/2, and b_2 a_{21} = 1/2. Note that a popular condition for determining coefficients is

: \sum_{j=1}^{i-1} a_{ij} = c_i \text{ for } i = 2, \ldots, s.

This condition alone, however, is neither sufficient nor necessary for consistency.

In general, if an explicit s-stage Runge–Kutta method has order p, then it can be proven that the number of stages must satisfy s \ge p, and if p \ge 5, then s \ge p + 1. However, it is not known whether these bounds are ''sharp'' in all cases; for example, all known methods of order 8 have at least 11 stages, though it is possible that there are methods with fewer stages. (The bound above suggests that there could be a method with 9 stages; but it could also be that the bound is simply not sharp.) Indeed, it is an open problem what the precise minimum number of stages s is for an explicit Runge–Kutta method to have order p in those cases where no methods have yet been discovered that satisfy the bounds above with equality. Some values which are known are:

: \begin{array}{c|cccccccc}
p & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\
\hline
\min s & 1 & 2 & 3 & 4 & 6 & 7 & 9 & 11
\end{array}

The provable bounds above then imply that we cannot find methods of orders p = 1, 2, \ldots, 6 that require fewer stages than the methods we already know for these orders. However, it is conceivable that we might find a method of order p = 7 that has only 8 stages, whereas the only ones known today have at least 9 stages, as shown in the table.
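To make the role of the tableau concrete, here is a minimal Python sketch of a stepper driven by arbitrary coefficients (A, b, c); the function name butcher_step is a hypothetical choice introduced here for illustration.

    import numpy as np

    def butcher_step(f, t, y, h, A, b, c):
        """One explicit Runge-Kutta step defined by a Butcher tableau.

        A is strictly lower triangular (s x s); b and c have length s.
        """
        s = len(b)
        k = [None] * s
        for i in range(s):
            # Only stages j < i enter, since the method is explicit.
            y_stage = y + h * sum(A[i][j] * k[j] for j in range(i))
            k[i] = f(t + c[i] * h, y_stage)
        return y + h * sum(b[i] * k[i] for i in range(s))

    # The classic RK4 tableau from the section above.
    A = np.array([[0, 0, 0, 0],
                  [1/2, 0, 0, 0],
                  [0, 1/2, 0, 0],
                  [0, 0, 1, 0]])
    b = np.array([1/6, 1/3, 1/3, 1/6])
    c = np.array([0, 1/2, 1/2, 1])
    print(butcher_step(lambda t, y: y, 0.0, 1.0, 0.1, A, b, c))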


Examples

The RK4 method falls in this framework. Its tableau is

: \begin{array}{c|cccc}
0 & & & & \\
1/2 & 1/2 & & & \\
1/2 & 0 & 1/2 & & \\
1 & 0 & 0 & 1 & \\
\hline
& 1/6 & 1/3 & 1/3 & 1/6
\end{array}

A slight variation of "the" Runge–Kutta method is also due to Kutta in 1901 and is called the 3/8-rule. Its primary advantage is that almost all of the error coefficients are smaller than in the popular method, but it requires slightly more FLOPs (floating-point operations) per time step. Its Butcher tableau is

: \begin{array}{c|cccc}
0 & & & & \\
1/3 & 1/3 & & & \\
2/3 & -1/3 & 1 & & \\
1 & 1 & -1 & 1 & \\
\hline
& 1/8 & 3/8 & 3/8 & 1/8
\end{array}

However, the simplest Runge–Kutta method is the (forward) Euler method, given by the formula y_{n+1} = y_n + h f(t_n, y_n). This is the only consistent explicit Runge–Kutta method with one stage. The corresponding tableau is

: \begin{array}{c|c}
0 & \\
\hline
& 1
\end{array}


Second-order methods with two stages

An example of a second-order method with two stages is provided by the midpoint method:

: y_{n+1} = y_n + h f\left(t_n + \frac{h}{2},\ y_n + \frac{h}{2} f(t_n,\ y_n)\right).

The corresponding tableau is

: \begin{array}{c|cc}
0 & & \\
1/2 & 1/2 & \\
\hline
& 0 & 1
\end{array}

The midpoint method is not the only second-order Runge–Kutta method with two stages; there is a family of such methods, parameterized by α and given by the formula

: y_{n+1} = y_n + h\bigl( (1 - \tfrac{1}{2\alpha}) f(t_n, y_n) + \tfrac{1}{2\alpha} f(t_n + \alpha h, y_n + \alpha h f(t_n, y_n)) \bigr).

Its Butcher tableau is

: \begin{array}{c|cc}
0 & & \\
\alpha & \alpha & \\
\hline
& 1 - \tfrac{1}{2\alpha} & \tfrac{1}{2\alpha}
\end{array}

In this family, \alpha=\tfrac12 gives the midpoint method, \alpha=1 is Heun's method, and \alpha=\tfrac23 is Ralston's method.
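The whole family fits in a few lines of Python; this is an illustrative sketch (the name rk2_step is arbitrary), valid for any \alpha \neq 0.

    def rk2_step(f, t, y, h, alpha):
        """One step of the generic two-stage second-order method."""
        k1 = f(t, y)
        k2 = f(t + alpha * h, y + alpha * h * k1)
        return y + h * ((1 - 1 / (2 * alpha)) * k1 + (1 / (2 * alpha)) * k2)

    # alpha = 0.5 gives the midpoint method, alpha = 1.0 Heun's method,
    # and alpha = 2/3 Ralston's method.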


Use

As an example, consider the two-stage second-order Runge–Kutta method with α = 2/3, also known as Ralston's method. It is given by the tableau

: \begin{array}{c|cc}
0 & & \\
2/3 & 2/3 & \\
\hline
& 1/4 & 3/4
\end{array}

with the corresponding equations

: \begin{align}
k_1 &= f(t_n,\ y_n), \\
k_2 &= f(t_n + \tfrac{2}{3}h,\ y_n + \tfrac{2}{3}h k_1), \\
y_{n+1} &= y_n + h\left(\tfrac{1}{4}k_1 + \tfrac{3}{4}k_2\right).
\end{align}

This method is used to solve the initial-value problem

: \frac{dy}{dt} = \tan(y) + 1, \quad y_0 = 1,\ t \in [1, 1.1]

with step size h = 0.025, so the method needs to take four steps. The method proceeds by computing k_1, k_2 and the update at each step; a sketch of the computation is given below.
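The four steps can be reproduced with a short script. This is an illustrative sketch that prints the iterates rather than tabulating them.

    import math

    def f(t, y):
        return math.tan(y) + 1

    t, y, h = 1.0, 1.0, 0.025
    for n in range(4):
        k1 = f(t, y)
        k2 = f(t + (2/3) * h, y + (2/3) * h * k1)
        y = y + h * (0.25 * k1 + 0.75 * k2)
        t = t + h
        print(f"t = {t:.3f}, y = {y:.7f}")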


Adaptive Runge–Kutta methods

Adaptive methods are designed to produce an estimate of the local truncation error of a single Runge–Kutta step. This is done by having two methods, one with order p and one with order p − 1. These methods are interwoven, i.e., they have common intermediate steps. Thanks to this, estimating the error has little or negligible computational cost compared to a step with the higher-order method.

During the integration, the step size is adapted such that the estimated error stays below a user-defined threshold: if the error is too high, the step is repeated with a smaller step size; if the error is much smaller than the threshold, the step size is increased to save time. This results in an (almost) optimal step size, which saves computation time. Moreover, the user does not have to spend time on finding an appropriate step size.

The lower-order step is given by

: y^*_{n+1} = y_n + h \sum_{i=1}^s b^*_i k_i,

where the k_i are the same as for the higher-order method. Then the error is

: e_{n+1} = y_{n+1} - y^*_{n+1} = h \sum_{i=1}^s (b_i - b^*_i) k_i,

which is O(h^p). The Butcher tableau for this kind of method is extended to give the values of b^*_i:

: \begin{array}{c|cccc}
0 & & & & \\
c_2 & a_{21} & & & \\
c_3 & a_{31} & a_{32} & & \\
\vdots & \vdots & & \ddots & \\
c_s & a_{s1} & a_{s2} & \cdots & a_{s,s-1} \\
\hline
& b_1 & b_2 & \cdots & b_s \\
& b^*_1 & b^*_2 & \cdots & b^*_s
\end{array}

The Runge–Kutta–Fehlberg method has two methods of orders 5 and 4. Its extended Butcher tableau is

: \begin{array}{c|cccccc}
0 & & & & & & \\
1/4 & 1/4 & & & & & \\
3/8 & 3/32 & 9/32 & & & & \\
12/13 & 1932/2197 & -7200/2197 & 7296/2197 & & & \\
1 & 439/216 & -8 & 3680/513 & -845/4104 & & \\
1/2 & -8/27 & 2 & -3544/2565 & 1859/4104 & -11/40 & \\
\hline
& 16/135 & 0 & 6656/12825 & 28561/56430 & -9/50 & 2/55 \\
& 25/216 & 0 & 1408/2565 & 2197/4104 & -1/5 & 0
\end{array}

However, the simplest adaptive Runge–Kutta method involves combining Heun's method, which is order 2, with the Euler method, which is order 1. Its extended Butcher tableau is

: \begin{array}{c|cc}
0 & & \\
1 & 1 & \\
\hline
& 1/2 & 1/2 \\
& 1 & 0
\end{array}

Other adaptive Runge–Kutta methods are the Bogacki–Shampine method (orders 3 and 2), the Cash–Karp method and the Dormand–Prince method (both with orders 5 and 4).
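A minimal sketch of this control loop, using the Heun–Euler pair above, follows. The names and the step-size heuristic are illustrative assumptions; production solvers use more careful controllers.

    import math

    def heun_euler_step(f, t, y, h):
        """One embedded step: Heun (order 2) and Euler (order 1) share k1."""
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        y_high = y + h * (k1 + k2) / 2   # order 2
        y_low = y + h * k1               # order 1
        return y_high, abs(y_high - y_low)

    def solve_adaptive(f, t, y, t_end, h, tol=1e-6):
        while t < t_end:
            h = min(h, t_end - t)
            y_new, err = heun_euler_step(f, t, y, h)
            if err <= tol:
                t, y = t + h, y_new  # accept the step
            # The error estimate behaves like O(h^2), so rescale h by
            # (tol/err)^(1/2), with a safety factor and clamped growth.
            h *= min(2.0, max(0.1, 0.9 * (tol / max(err, 1e-16)) ** 0.5))
        return t, y

    # Example: y' = -2ty with y(0) = 1; exact solution exp(-t^2).
    t, y = solve_adaptive(lambda t, y: -2 * t * y, 0.0, 1.0, 1.0, 0.1)
    print(y, math.exp(-1.0))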


Nonconfluent Runge–Kutta methods

A Runge–Kutta method is said to be ''nonconfluent'' if all the c_i,\,i=1,2,\ldots,s are distinct.


Runge–Kutta–Nyström methods

Runge–Kutta–Nyström methods are specialized Runge–Kutta methods that are optimized for second-order differential equations of the following form:

: \frac{d^2 y}{dt^2} = f(y, \dot{y}, t).


Implicit Runge–Kutta methods

All Runge–Kutta methods mentioned up to now are explicit methods. Explicit Runge–Kutta methods are generally unsuitable for the solution of stiff equations because their region of absolute stability is small; in particular, it is bounded. This issue is especially important in the solution of partial differential equations.

The instability of explicit Runge–Kutta methods motivates the development of implicit methods. An implicit Runge–Kutta method has the form

: y_{n+1} = y_n + h \sum_{i=1}^s b_i k_i,

where

: k_i = f\left( t_n + c_i h,\ y_n + h \sum_{j=1}^s a_{ij} k_j \right), \quad i = 1, \ldots, s.

The difference with an explicit method is that in an explicit method, the sum over j only goes up to i − 1. This also shows up in the Butcher tableau: the coefficient matrix a_{ij} of an explicit method is strictly lower triangular. In an implicit method, the sum over j goes up to s and the coefficient matrix is not triangular, yielding a Butcher tableau of the form

: \begin{array}{c|cccc}
c_1 & a_{11} & a_{12} & \dots & a_{1s} \\
c_2 & a_{21} & a_{22} & \dots & a_{2s} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
c_s & a_{s1} & a_{s2} & \dots & a_{ss} \\
\hline
& b_1 & b_2 & \dots & b_s \\
& b^*_1 & b^*_2 & \dots & b^*_s
\end{array}
=
\begin{array}{c|c}
\mathbf{c} & A \\
\hline
& \mathbf{b}^T
\end{array}

See Adaptive Runge–Kutta methods above for the explanation of the b^* row.

The consequence of this difference is that at every step, a system of algebraic equations has to be solved. This increases the computational cost considerably. If a method with s stages is used to solve a differential equation with m components, then the system of algebraic equations has ms components. This can be contrasted with implicit linear multistep methods (the other big family of methods for ODEs): an implicit s-step linear multistep method needs to solve a system of algebraic equations with only m components, so the size of the system does not increase as the number of steps increases.
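As a sketch of how such a step might be implemented, the ms stage equations can be handed to a generic nonlinear solver; here scipy.optimize.fsolve is used for brevity, and the name irk_step is an illustrative assumption. A production code would typically use Newton's method with an analytic Jacobian instead.

    import numpy as np
    from scipy.optimize import fsolve

    def irk_step(f, t, y, h, A, b, c):
        """One implicit Runge-Kutta step: solve the stage equations
        k_i = f(t + c_i h, y + h * sum_j a_ij k_j) as a root-finding problem.

        A, b, c are NumPy arrays; f must accept and return arrays.
        """
        s = len(b)
        y = np.atleast_1d(np.asarray(y, dtype=float))
        m = y.size  # the nonlinear system has m*s unknowns, as noted above

        def residual(k_flat):
            k = k_flat.reshape(s, m)
            r = np.empty_like(k)
            for i in range(s):
                r[i] = k[i] - f(t + c[i] * h, y + h * (A[i] @ k))
            return r.ravel()

        k0 = np.tile(f(t, y), s)  # initial guess: the explicit slope
        k = fsolve(residual, k0).reshape(s, m)
        return y + h * (b @ k)

    # Backward Euler as a one-stage implicit method: A = [[1]], b = [1], c = [1].
    A = np.array([[1.0]]); b = np.array([1.0]); c = np.array([1.0])
    print(irk_step(lambda t, y: -50 * (y - np.cos(t)), 0.0, 1.0, 0.1, A, b, c))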


Examples

The simplest example of an implicit Runge–Kutta method is the backward Euler method:

: y_{n+1} = y_n + h f(t_n + h,\ y_{n+1}).

The Butcher tableau for this is simply

: \begin{array}{c|c}
1 & 1 \\
\hline
& 1
\end{array}

This Butcher tableau corresponds to the formulae

: k_1 = f(t_n + h,\ y_n + h k_1) \quad \text{and} \quad y_{n+1} = y_n + h k_1,

which can be re-arranged to get the formula for the backward Euler method listed above.

Another example of an implicit Runge–Kutta method is the trapezoidal rule. Its Butcher tableau is

: \begin{array}{c|cc}
0 & 0 & 0 \\
1 & \frac12 & \frac12 \\
\hline
& \frac12 & \frac12 \\
& 1 & 0
\end{array}

The trapezoidal rule is a collocation method (as discussed in that article). All collocation methods are implicit Runge–Kutta methods, but not all implicit Runge–Kutta methods are collocation methods. The Gauss–Legendre methods form a family of collocation methods based on Gauss quadrature. A Gauss–Legendre method with s stages has order 2s (thus, methods with arbitrarily high order can be constructed). The method with two stages (and thus order four) has Butcher tableau:

: \begin{array}{c|cc}
\frac12 - \frac16 \sqrt3 & \frac14 & \frac14 - \frac16 \sqrt3 \\
\frac12 + \frac16 \sqrt3 & \frac14 + \frac16 \sqrt3 & \frac14 \\
\hline
& \frac12 & \frac12 \\
& \frac12 + \frac12 \sqrt3 & \frac12 - \frac12 \sqrt3
\end{array}
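As an illustration, the stage equations of the two-stage Gauss–Legendre method can be solved by simple fixed-point iteration; this sketch assumes h times the Lipschitz constant of f is small enough for that iteration to converge (stiff problems would need a Newton iteration instead).

    import numpy as np

    r = np.sqrt(3) / 6
    A = np.array([[1/4, 1/4 - r],
                  [1/4 + r, 1/4]])
    b = np.array([1/2, 1/2])
    c = np.array([1/2 - r, 1/2 + r])

    def gl2_step(f, t, y, h, tol=1e-12, max_iter=100):
        """One step of the two-stage (order-four) Gauss-Legendre method."""
        k = np.array([f(t, y), f(t, y)], dtype=float)
        for _ in range(max_iter):
            k_new = np.array([f(t + c[i] * h, y + h * (A[i] @ k))
                              for i in range(2)], dtype=float)
            done = np.max(np.abs(k_new - k)) < tol
            k = k_new
            if done:
                break
        return y + h * (b @ k)

    # Scalar example: y' = -y, y(0) = 1; one step of size 0.5.
    print(gl2_step(lambda t, y: -y, 0.0, 1.0, 0.5), np.exp(-0.5))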


Stability

The advantage of implicit Runge–Kutta methods over explicit ones is their greater stability, especially when applied to stiff equations. Consider the linear test equation y' = \lambda y. A Runge–Kutta method applied to this equation reduces to the iteration y_{n+1} = r(h\lambda)\, y_n, with r given by

: r(z) = 1 + z b^T (I - zA)^{-1} e = \frac{\det(I - zA + z e b^T)}{\det(I - zA)},

where e stands for the vector of ones. The function r is called the ''stability function''. It follows from the formula that r is the quotient of two polynomials of degree s if the method has s stages. Explicit methods have a strictly lower triangular matrix A, which implies that det(I − zA) = 1 and that the stability function is a polynomial.

The numerical solution to the linear test equation decays to zero if |r(z)| < 1 with z = hλ. The set of such z is called the ''domain of absolute stability''. In particular, the method is said to be ''A-stable'' if all z with Re(z) < 0 are in the domain of absolute stability. The stability function of an explicit Runge–Kutta method is a polynomial, so explicit Runge–Kutta methods can never be A-stable.

If the method has order p, then the stability function satisfies r(z) = \mathrm{e}^z + O(z^{p+1}) as z \to 0. Thus, it is of interest to study quotients of polynomials of given degrees that approximate the exponential function the best. These are known as Padé approximants. A Padé approximant with numerator of degree m and denominator of degree n is A-stable if and only if m ≤ n ≤ m + 2.

The Gauss–Legendre method with s stages has order 2s, so its stability function is the Padé approximant with m = n = s. It follows that the method is A-stable. This shows that A-stable Runge–Kutta methods can have arbitrarily high order. In contrast, the order of A-stable linear multistep methods cannot exceed two.
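The stability function can be evaluated directly from the tableau. The sketch below computes r(z) = 1 + z b^T (I − zA)^{−1} e numerically and checks that, for RK4, it agrees with the degree-4 Taylor polynomial of e^z; the function name is an illustrative choice.

    import numpy as np

    def stability_function(A, b, z):
        """Evaluate r(z) = 1 + z * b^T (I - zA)^{-1} e for a tableau (A, b)."""
        s = len(b)
        e = np.ones(s)
        return 1 + z * b @ np.linalg.solve(np.eye(s) - z * A, e)

    # Classic RK4: r(z) equals 1 + z + z^2/2 + z^3/6 + z^4/24.
    A = np.array([[0, 0, 0, 0], [1/2, 0, 0, 0], [0, 1/2, 0, 0], [0, 0, 1, 0]])
    b = np.array([1/6, 1/3, 1/3, 1/6])
    z = -1.5
    print(stability_function(A, b, z))
    print(1 + z + z**2/2 + z**3/6 + z**4/24)
    # |r(z)| < 1 here, so a step with z = h*lambda = -1.5 is absolutely stable.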


B-stability

The ''A-stability'' concept for the solution of differential equations is related to the linear autonomous equation y' = \lambda y. Dahlquist proposed the investigation of stability of numerical schemes when applied to nonlinear systems that satisfy a monotonicity condition. The corresponding concepts were defined as ''G-stability'' for multistep methods (and the related one-leg methods) and ''B-stability'' (Butcher, 1975) for Runge–Kutta methods. A Runge–Kutta method applied to the non-linear system y' = f(y), which verifies \langle f(y) - f(z),\ y - z \rangle < 0, is called ''B-stable'' if this condition implies \|y_{n+1} - z_{n+1}\| \leq \|y_n - z_n\| for two numerical solutions.

Let B, M and Q be three s \times s matrices defined by

: B = \operatorname{diag}(b_1, b_2, \ldots, b_s), \quad M = BA + A^T B - b b^T, \quad Q = B A^{-1} + A^{-T} B - A^{-T} b b^T A^{-1}.

A Runge–Kutta method is said to be ''algebraically stable'' if the matrices B and M are both non-negative definite. A sufficient condition for ''B-stability'' is: B and Q are non-negative definite.
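Algebraic stability is easy to check numerically. The following sketch does so for the two-stage Gauss–Legendre method from the Examples section; for Gauss methods M works out to the zero matrix, so B and M are both (trivially) non-negative definite.

    import numpy as np

    r = np.sqrt(3) / 6
    A = np.array([[1/4, 1/4 - r],
                  [1/4 + r, 1/4]])
    b = np.array([1/2, 1/2])
    B = np.diag(b)
    M = B @ A + A.T @ B - np.outer(b, b)
    # Non-negative definiteness: all eigenvalues of the symmetric matrices >= 0.
    print(np.linalg.eigvalsh(M))  # approximately [0, 0] for Gauss-Legendre
    print(np.linalg.eigvalsh(B))  # [0.5, 0.5]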


Derivation of the Runge–Kutta fourth-order method

In general a Runge–Kutta method of order s can be written as:

: y_{t+h} = y_t + h \cdot \sum_{i=1}^s a_i k_i + \mathcal{O}(h^{s+1}),

where:

: k_i = y_t + h \cdot \sum_{j=1}^s \beta_{ij} f\left(k_j,\ t + \alpha_i h \right)

are increments obtained evaluating the derivatives of y_t at the i-th order.

We develop the derivation for the Runge–Kutta fourth-order method using the general formula with s = 4 evaluated, as explained above, at the starting point, the midpoint and the end point of any interval (t,\ t + h); thus, we choose:

: \begin{align}
&\alpha_i & &\beta_{ij} \\
\alpha_1 &= 0 & \beta_{21} &= \frac{1}{2} \\
\alpha_2 &= \frac{1}{2} & \beta_{32} &= \frac{1}{2} \\
\alpha_3 &= \frac{1}{2} & \beta_{43} &= 1 \\
\alpha_4 &= 1 & &
\end{align}

and \beta_{ij} = 0 otherwise. We begin by defining the following quantities:

: \begin{align}
y^1_{t+h} &= y_t + h f\left(y_t,\ t\right) \\
y^2_{t+h} &= y_t + h f\left(y^1_{t+h/2},\ t + \frac{h}{2}\right) \\
y^3_{t+h} &= y_t + h f\left(y^2_{t+h/2},\ t + \frac{h}{2}\right)
\end{align}

where y^1_{t+h/2} = \dfrac{y_t + y^1_{t+h}}{2} and y^2_{t+h/2} = \dfrac{y_t + y^2_{t+h}}{2}. If we define:

: \begin{align}
k_1 &= f(y_t,\ t) \\
k_2 &= f\left(y^1_{t+h/2},\ t + \frac{h}{2}\right) = f\left(y_t + \frac{h}{2} k_1,\ t + \frac{h}{2}\right) \\
k_3 &= f\left(y^2_{t+h/2},\ t + \frac{h}{2}\right) = f\left(y_t + \frac{h}{2} k_2,\ t + \frac{h}{2}\right) \\
k_4 &= f\left(y^3_{t+h},\ t + h\right) = f\left(y_t + h k_3,\ t + h\right)
\end{align}

then from the previous relations we can show that the following equalities hold up to \mathcal{O}(h^2):

: \begin{align}
k_2 &= f\left(y_t,\ t\right) + \frac{h}{2} \frac{d}{dt} f\left(y_t,\ t\right) \\
k_3 &= f\left(y_t,\ t\right) + \frac{h}{2} \frac{d}{dt} \left[ f\left(y_t,\ t\right) + \frac{h}{2} \frac{d}{dt} f\left(y_t,\ t\right) \right] \\
k_4 &= f\left(y_t,\ t\right) + h \frac{d}{dt} \left[ f\left(y_t,\ t\right) + \frac{h}{2} \frac{d}{dt} \left[ f\left(y_t,\ t\right) + \frac{h}{2} \frac{d}{dt} f\left(y_t,\ t\right) \right] \right]
\end{align}

where:

: \frac{d}{dt} f(y_t,\ t) = \frac{\partial}{\partial y} f(y_t,\ t) \dot y_t + \frac{\partial}{\partial t} f(y_t,\ t) = f_y(y_t,\ t) \dot y + f_t(y_t,\ t) := \ddot y_t

is the total derivative of f with respect to time.

If we now express the general formula using what we just derived we obtain:

: \begin{align}
y_{t+h} = {} & y_t + h \left\lbrace a \cdot f(y_t,\ t) + b \cdot \left[ f(y_t,\ t) + \frac{h}{2} \frac{d}{dt} f(y_t,\ t) \right] \right. \\
& + c \cdot \left[ f(y_t,\ t) + \frac{h}{2} \frac{d}{dt} \left[ f(y_t,\ t) + \frac{h}{2} \frac{d}{dt} f(y_t,\ t) \right] \right] \\
& \left. {} + d \cdot \left[ f(y_t,\ t) + h \frac{d}{dt} \left[ f(y_t,\ t) + \frac{h}{2} \frac{d}{dt} \left[ f(y_t,\ t) + \frac{h}{2} \frac{d}{dt} f(y_t,\ t) \right] \right] \right] \right\rbrace + \mathcal{O}(h^5) \\
= {} & y_t + a \cdot h f + b \cdot h f + b \cdot \frac{h^2}{2} \frac{df}{dt} + c \cdot h f + c \cdot \frac{h^2}{2} \frac{df}{dt} + c \cdot \frac{h^3}{4} \frac{d^2 f}{dt^2} + d \cdot h f \\
& + d \cdot h^2 \frac{df}{dt} + d \cdot \frac{h^3}{2} \frac{d^2 f}{dt^2} + d \cdot \frac{h^4}{4} \frac{d^3 f}{dt^3} + \mathcal{O}(h^5)
\end{align}

and comparing this with the Taylor series of y_{t+h} around t:

: \begin{align}
y_{t+h} &= y_t + h \dot y_t + \frac{h^2}{2} \ddot y_t + \frac{h^3}{6} y^{(3)}_t + \frac{h^4}{24} y^{(4)}_t + \mathcal{O}(h^5) \\
&= y_t + h f(y_t,\ t) + \frac{h^2}{2} \frac{d}{dt} f(y_t,\ t) + \frac{h^3}{6} \frac{d^2}{dt^2} f(y_t,\ t) + \frac{h^4}{24} \frac{d^3}{dt^3} f(y_t,\ t)
\end{align}

we obtain a system of constraints on the coefficients:

: \begin{cases}
a + b + c + d = 1 \\[4pt]
\frac{1}{2} b + \frac{1}{2} c + d = \frac{1}{2} \\[4pt]
\frac{1}{4} c + \frac{1}{2} d = \frac{1}{6} \\[4pt]
\frac{1}{4} d = \frac{1}{24}
\end{cases}

which when solved gives a = \frac{1}{6}, b = \frac{1}{3}, c = \frac{1}{3}, d = \frac{1}{6} as stated above.
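The constraint system can also be checked symbolically; this short sketch uses SymPy to solve it and recover the classic weights.

    import sympy as sp

    a, b, c, d = sp.symbols('a b c d')
    eqs = [
        sp.Eq(a + b + c + d, 1),
        sp.Eq(b/2 + c/2 + d, sp.Rational(1, 2)),
        sp.Eq(c/4 + d/2, sp.Rational(1, 6)),
        sp.Eq(d/4, sp.Rational(1, 24)),
    ]
    print(sp.solve(eqs, [a, b, c, d]))
    # {a: 1/6, b: 1/3, c: 1/3, d: 1/6}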


See also

* Euler's method
* List of Runge–Kutta methods
* Numerical methods for ordinary differential equations
* Runge–Kutta method (SDE)
* General linear methods
* Lie group integrator




External links

* Tracker Component Library Implementation in Matlab — implements 32 embedded Runge–Kutta algorithms in RungeKStep, 24 embedded Runge–Kutta–Nyström algorithms in RungeKNystroemSStep and 4 general Runge–Kutta–Nyström algorithms in RungeKNystroemGStep.