List Of Runge–Kutta Methods





List Of Runge–Kutta Methods
Runge–Kutta methods are methods for the numerical solution of the ordinary differential equation :\frac{dy}{dt} = f(t, y). Explicit Runge–Kutta methods take the form :\begin{align} y_{n+1} &= y_n + h \sum_{i=1}^{s} b_i k_i, \\ k_1 &= f(t_n, y_n), \\ k_2 &= f(t_n + c_2 h,\; y_n + h\, a_{21} k_1), \\ k_3 &= f(t_n + c_3 h,\; y_n + h (a_{31} k_1 + a_{32} k_2)), \\ &\;\;\vdots \\ k_i &= f\left(t_n + c_i h,\; y_n + h \sum_{j=1}^{i-1} a_{ij} k_j\right). \end{align} Stages for implicit methods of s stages take the more general form, with the solution to be found over all s stages: :k_i = f\left(t_n + c_i h,\; y_n + h \sum_{j=1}^{s} a_{ij} k_j\right). Each method listed on this page is defined by its Butcher tableau, which puts the coefficients of the method in a table as follows: : \begin{array}{c|cccc} c_1 & a_{11} & a_{12} & \dots & a_{1s} \\ c_2 & a_{21} & a_{22} & \dots & a_{2s} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ c_s & a_{s1} & a_{s2} & \dots & a_{ss} \\ \hline & b_1 & b_2 & \dots & b_s \end{array} For adaptive and implicit methods, the Butcher tableau is extended to give values of b^*_i, and the estimated error ...
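The step above can be read directly off a Butcher tableau. The following is a minimal Python sketch, not taken from the article, of one explicit Runge–Kutta step driven by an arbitrary tableau (A, b, c); the function name rk_step and the test problem y' = -y are illustrative assumptions.

    import numpy as np

    def rk_step(f, t, y, h, A, b, c):
        """One explicit Runge-Kutta step of size h defined by the tableau (A, b, c)."""
        s = len(b)
        k = [None] * s
        for i in range(s):
            # Explicit methods use only earlier stages: A is strictly lower triangular.
            y_stage = y + h * sum(A[i][j] * k[j] for j in range(i))
            k[i] = f(t + c[i] * h, y_stage)
        return y + h * sum(b[i] * k[i] for i in range(s))

    # Example: the classical RK4 tableau applied to y' = -y, y(0) = 1.
    A = [[0.0, 0.0, 0.0, 0.0],
         [0.5, 0.0, 0.0, 0.0],
         [0.0, 0.5, 0.0, 0.0],
         [0.0, 0.0, 1.0, 0.0]]
    b = [1/6, 1/3, 1/3, 1/6]
    c = [0.0, 0.5, 0.5, 1.0]
    y = 1.0
    for n in range(10):
        y = rk_step(lambda t, y: -y, 0.1 * n, y, 0.1, A, b, c)
    print(y, np.exp(-1.0))   # the two values agree to high accuracy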



Runge–Kutta Methods
In numerical analysis, the Runge–Kutta methods are a family of implicit and explicit iterative methods, which include the Euler method, used in temporal discretization for the approximate solutions of ordinary differential equations. These methods were developed around 1900 by the German mathematicians Carl Runge and Wilhelm Kutta. The Runge–Kutta method: The most widely known member of the Runge–Kutta family is generally referred to as "RK4", the "classic Runge–Kutta method" or simply as "the Runge–Kutta method". Let an initial value problem be specified as follows: : \frac{dy}{dt} = f(t, y), \quad y(t_0) = y_0. Here y is an unknown function (scalar or vector) of time t, which we would like to approximate; we are told that \frac{dy}{dt}, the rate at which y changes, is a function of t and of y itself. At the initial time t_0 the corresponding y value is y_0. The function f and the initial conditions t_0, y_0 are given. Now we pick a step size h > 0 and define: ...
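As a concrete illustration of the step just described, here is a minimal RK4 sketch in Python; the helper name rk4_step and the test problem y' = y are illustrative assumptions, not part of the article.

    import math

    def rk4_step(f, t, y, h):
        """One step of the classic fourth-order Runge-Kutta method."""
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        # Weighted average of the four slope estimates.
        return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

    # Integrate y' = y, y(0) = 1 up to t = 1; the exact answer is e.
    t, y, h = 0.0, 1.0, 0.1
    while t < 1.0 - 1e-12:
        y = rk4_step(lambda t, y: y, t, y, h)
        t += h
    print(y, math.e)   # error on the order of 1e-6, reflecting the method's fourth order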



Backward Euler Method
In numerical analysis and scientific computing, the backward Euler method (or implicit Euler method) is one of the most basic numerical methods for the solution of ordinary differential equations. It is similar to the (standard) Euler method, but differs in that it is an implicit method. The backward Euler method has error of order one in time. Description: Consider the ordinary differential equation : \frac{dy}{dt} = f(t,y) with initial value y(t_0) = y_0. Here the function f and the initial data t_0 and y_0 are known; the function y depends on the real variable t and is unknown. A numerical method produces a sequence y_0, y_1, y_2, \ldots such that y_k approximates y(t_0 + kh), where h is called the step size. The backward Euler method computes the approximations using : y_{k+1} = y_k + h f(t_{k+1}, y_{k+1}). This differs from the (forward) Euler method in that the forward method uses f(t_k, y_k) in place of f(t_{k+1}, y_{k+1}). The backward Euler method is an implicit method: the new approximation ...
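To make the implicit update concrete, here is a sketch of one backward Euler step for a scalar ODE, solving y_{k+1} = y_k + h f(t_{k+1}, y_{k+1}) with a few Newton iterations; the names backward_euler_step and dfdy, and the stiff test problem, are assumptions made for this sketch.

    def backward_euler_step(f, dfdy, t_next, y_k, h, newton_iters=5):
        """Solve the implicit backward Euler equation for y_{k+1} by Newton's method."""
        y = y_k                                  # use the previous value as the initial guess
        for _ in range(newton_iters):
            g = y - y_k - h * f(t_next, y)       # residual of the implicit equation
            dg = 1.0 - h * dfdy(t_next, y)       # derivative of the residual with respect to y
            y -= g / dg
        return y

    # Stiff test problem y' = -50 y, y(0) = 1: backward Euler remains stable at a
    # step size for which the forward Euler method would oscillate and blow up.
    f = lambda t, y: -50.0 * y
    dfdy = lambda t, y: -50.0
    y, h = 1.0, 0.1
    for k in range(10):
        y = backward_euler_step(f, dfdy, (k + 1) * h, y, h)
    print(y)   # decays monotonically toward 0, like the exact solution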


Numerical Differential Equations
Numerical may refer to: * Number * Numerical digit * Numerical analysis, the study of algorithms that use numerical approximation (as opposed to symbolic manipulation) for the problems of mathematical analysis (as distinguished from discrete mathematics). It is the study of ...


Springer-Verlag
Springer Science+Business Media, commonly known as Springer, is a German multinational publishing company of books, e-books and peer-reviewed journals in science, humanities, technical and medical (STM) publishing. Originally founded in 1842 in Berlin, it expanded internationally in the 1960s, and through mergers in the 1990s and a sale to venture capitalists it fused with Wolters Kluwer and eventually became part of Springer Nature in 2015. Springer has major offices in Berlin, Heidelberg, Dordrecht, and New York City. History: Julius Springer founded Springer-Verlag in Berlin in 1842 and his son Ferdinand Springer grew it from a small firm of 4 employees into Germany's then second-largest academic publisher with 65 staff in 1872 ("Chronology", Springer Science+Business Media). In 1964, Springer expanded its business internationally, o ...



University Of Iowa
The University of Iowa (UI, U of I, UIowa, or simply Iowa) is a public research university in Iowa City, Iowa, United States. Founded in 1847, it is the oldest and largest university in the state. The University of Iowa is organized into 12 colleges offering more than 200 areas of study and seven professional degrees. On an urban 1,880-acre campus on the banks of the Iowa River, the University of Iowa is classified among "R1: Doctoral Universities – Very high research activity" under the Carnegie Classification of Institutions of Higher Education. In fiscal year 2021, research expenditures at Iowa totaled $818 million. The university is best known for its programs in health care, law, and the fine arts, with programs ranking among the top 25 nationally in those areas. The university was the original developer of the Master of Fine Arts degree and it operates the Iowa Writers' Workshop, which has produced 17 of the university's 46 Pulitzer Prize winners. Iowa is a mem ...



Discontinuous Collocation Method
Continuous functions are of utmost importance in mathematics and applications. However, not all functions are continuous. If a function is not continuous at a point in its domain, one says that it has a discontinuity there. The set of all points of discontinuity of a function may be a discrete set, a dense set, or even the entire domain of the function. This article describes the classification of discontinuities in the simplest case of functions of a single real variable taking real values. The oscillation of a function at a point quantifies these discontinuities as follows: * in a removable discontinuity, the distance that the value of the function is off by is the oscillation; * in a jump discontinuity, the size of the jump is the oscillation (assuming that the value at the point lies between these limits of the two sides); * in an essential discontinuity, oscillation measures the failure of a limit to exist. A special case is if the ...
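The oscillation can be estimated numerically. The following is a rough Python sketch, not from the article, that approximates the oscillation of a function at a point by sampling a small neighbourhood; the function name oscillation and the chosen radius are illustrative assumptions.

    import numpy as np

    def oscillation(f, x0, radius=1e-3, samples=10001):
        """Approximate sup f - inf f on a small interval around x0."""
        xs = np.linspace(x0 - radius, x0 + radius, samples)
        vals = f(xs)
        return vals.max() - vals.min()

    print(oscillation(np.sign, 0.0))   # about 2: the jump of sign(x) at 0
    print(oscillation(np.sin, 0.0))    # about 0.002, shrinking to 0 with the radius: continuous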


Trapezoidal Rule (Differential Equations)
In numerical analysis and scientific computing, the trapezoidal rule is a numerical method to solve ordinary differential equations derived from the trapezoidal rule for computing integrals. The trapezoidal rule is an implicit second-order method, which can be considered as both a Runge–Kutta method and a linear multistep method. Method: Suppose that we want to solve the differential equation : y' = f(t,y). The trapezoidal rule is given by the formula : y_{n+1} = y_n + \tfrac12 h \big( f(t_n, y_n) + f(t_{n+1}, y_{n+1}) \big), where h = t_{n+1} - t_n is the step size. This is an implicit method: the value y_{n+1} appears on both sides of the equation, and to actually calculate it, we have to solve an equation which will usually be nonlinear. One possible method for solving this equation is Newton's method. We can use the Euler method to get a fairly good estimate for the solution, which can be used as the initial guess of Newton's method. In fact, using only the guess from Euler's method is e ...
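A sketch of one trapezoidal step in Python: predict y_{n+1} with a forward Euler step and then refine it by iterating the implicit formula (a simple alternative to the full Newton iteration mentioned above). The name trapezoidal_step and the test problem are illustrative assumptions.

    def trapezoidal_step(f, t_n, y_n, h, iters=5):
        """One trapezoidal-rule step, solved by fixed-point iteration."""
        y_next = y_n + h * f(t_n, y_n)           # Euler predictor as the initial guess
        for _ in range(iters):
            y_next = y_n + 0.5 * h * (f(t_n, y_n) + f(t_n + h, y_next))
        return y_next

    # y' = -y, y(0) = 1: the result is a second-order accurate approximation of exp(-1).
    y, h = 1.0, 0.1
    for n in range(10):
        y = trapezoidal_step(lambda t, y: -y, n * h, y, h)
    print(y)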




Collocation Method
In mathematics, a collocation method is a method for the numerical solution of ordinary differential equations, partial differential equations and integral equations. The idea is to choose a finite-dimensional space of candidate solutions (usually polynomials up to a certain degree) and a number of points in the domain (called collocation points), and to select that solution which satisfies the given equation at the collocation points. Ordinary differential equations: Suppose that the ordinary differential equation : y'(t) = f(t, y(t)), \quad y(t_0) = y_0, is to be solved over the interval [t_0, t_0 + c_k h]. Choose the c_k from 0 \le c_1 < c_2 < \dots < c_n \le 1. The corresponding (polynomial) collocation method approximates the solution y by the polynomial p of degree n which satisfies the initial condition p(t_0) = y_0, and the differential equation p'(t_k) = f(t_k, p(t_k)) at all collocation points ...
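As a minimal worked example, collocation with a single point c_1 = 1/2 gives the implicit midpoint rule. The sketch below solves the one stage equation by fixed-point iteration; the name midpoint_collocation_step and the test problem are assumptions made for illustration.

    def midpoint_collocation_step(f, t_n, y_n, h, iters=10):
        """Collocation at c_1 = 1/2 (the implicit midpoint rule), one step of size h."""
        k = f(t_n, y_n)                          # initial guess for the stage slope
        for _ in range(iters):
            # Collocation condition p'(t_n + h/2) = f(t_n + h/2, p(t_n + h/2)) for the
            # degree-1 collocation polynomial p with p(t_n) = y_n and slope k.
            k = f(t_n + 0.5 * h, y_n + 0.5 * h * k)
        return y_n + h * k

    y, h = 1.0, 0.1
    for n in range(10):
        y = midpoint_collocation_step(lambda t, y: -y, n * h, y, h)
    print(y)   # second-order accurate approximation of exp(-1)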



Gaussian Quadrature
In numerical analysis, a quadrature rule is an approximation of the definite integral of a function, usually stated as a weighted sum of function values at specified points within the domain of integration. (See numerical integration for more on quadrature rules.) An n-point Gaussian quadrature rule, named after Carl Friedrich Gauss, is a quadrature rule constructed to yield an exact result for polynomials of degree 2n − 1 or less by a suitable choice of the nodes x_i and weights w_i for i = 1, \dots, n. The modern formulation using orthogonal polynomials was developed by Carl Gustav Jacobi in 1826. The most common domain of integration for such a rule is taken as [−1, 1], so the rule is stated as :\int_{-1}^{1} f(x)\,dx \approx \sum_{i=1}^{n} w_i f(x_i), which is exact for polynomials of degree 2n − 1 or less. This exact rule is known as the Gauss–Legendre quadrature rule. The quadrature rule will only be an accurate approximation to the integral above if f(x) is well-approximated by a polynomial of degree 2n − 1 or less on [−1, 1]. The Gaus ...
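To illustrate the exactness property, the sketch below applies a 3-point Gauss–Legendre rule (nodes and weights from numpy's leggauss) to a degree-5 polynomial, the highest degree the rule integrates exactly; the integrand is an illustrative choice.

    import numpy as np

    n = 3
    nodes, weights = np.polynomial.legendre.leggauss(n)

    f = lambda x: x**5 + 3 * x**2 + 1       # degree 5 = 2n - 1
    approx = np.sum(weights * f(nodes))
    exact = 4.0                             # the odd term integrates to 0; 3x^2 contributes 2 and 1 contributes 2
    print(approx, exact)                    # equal up to rounding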



Rehuel Lobatto
Rehuel Lobatto (6 June 1797 – 9 February 1866) was a Dutch mathematician. The Gauss–Lobatto quadrature method is named after him, as are his variants on the Runge–Kutta methods for solving ODEs, and the Lobatto polynomials. He was the author of a great number of articles in scientific periodicals, as well as various schoolbooks. Lobatto was born in Amsterdam to a Portuguese Jewish family. As a schoolboy Lobatto already displayed remarkable talent for mathematics (Gotthard Deutsch, E. Slijper (1906), "LOBATTO, REHUEL", The Jewish Encyclopedia). He studied mathematics under Jean Henri van Swinden at the Athenaeum Illustre of Amsterdam, earning his BA in 1812, and then with Adolphe Quetelet (coediting a volume of the "Correspondance Mathématique et Physique"). Working for the Dutch government, initially for the Ministry of the Interior, he became secretary of a statistical commission in 1831. From 1826 to 1849 he was editor of the "Jaarboekje van Lobatto" ...



Gauss–Legendre Quadrature
In numerical analysis, Gauss–Legendre quadrature is a form of Gaussian quadrature for approximating the definite integral of a function. For integrating over the interval [−1, 1], the rule takes the form: :\int_{-1}^{1} f(x)\,dx \approx \sum_{i=1}^{n} w_i f(x_i) where * n is the number of sample points used, * w_i are the quadrature weights, and * x_i are the roots of the nth Legendre polynomial. This choice of quadrature weights w_i and quadrature nodes x_i is the unique choice that allows the quadrature rule to integrate polynomials of degree 2n − 1 exactly. Many algorithms have been developed for computing Gauss–Legendre quadrature rules. The Golub–Welsch algorithm presented in 1969 reduces the computation of the nodes and weights to an eigenvalue problem, which is solved by the QR algorithm. This algorithm was popular, but significantly more efficient algorithms exist. Algorithms based on the Newton–Raphson method are able to compute quadrature rules for significa ...
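The eigenvalue formulation mentioned above can be sketched in a few lines for the Legendre case: the nodes are the eigenvalues of the symmetric tridiagonal Jacobi matrix, and each weight is 2 times the squared first component of the corresponding normalised eigenvector. This is a rough sketch of the Golub–Welsch idea, not a tuned implementation; variable names are illustrative.

    import numpy as np

    def gauss_legendre(n):
        """Nodes and weights of the n-point Gauss-Legendre rule via the Jacobi matrix."""
        k = np.arange(1, n)
        beta = k / np.sqrt(4.0 * k**2 - 1.0)       # off-diagonal entries for Legendre
        J = np.diag(beta, 1) + np.diag(beta, -1)   # diagonal entries are zero
        nodes, vecs = np.linalg.eigh(J)
        weights = 2.0 * vecs[0, :]**2              # mu_0 = integral of 1 over [-1, 1] = 2
        return nodes, weights

    nodes, weights = gauss_legendre(5)
    print(np.sum(weights * nodes**8), 2.0 / 9.0)   # x^8 (degree 8 <= 2n - 1) is integrated exactly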



Crank–Nicolson Method
In numerical analysis, the Crank–Nicolson method is a finite difference method used for numerically solving the heat equation and similar partial differential equations. It is a second-order method in time. It is implicit in time, can be written as an implicit Runge–Kutta method, and it is numerically stable. The method was developed by John Crank and Phyllis Nicolson in the mid 20th century. For diffusion equations (and many other equations), it can be shown that the Crank–Nicolson method is unconditionally stable. However, the approximate solutions can still contain (decaying) spurious oscillations if the ratio of the time step \Delta t times the thermal diffusivity to the square of the space step, \Delta x^2, is large (typically, larger than 1/2 according to Von Neumann stability analysis). For this reason, whenever large time steps or high spatial resolution are called for, the less accurate backward Euler method is often used instead, since it is both stable and immune to oscillations. The method ...
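A sketch of Crank–Nicolson for the 1D heat equation u_t = alpha * u_xx with zero boundary values: the update averages the explicit and implicit finite-difference formulas, so each step requires one linear solve. The grid size, step sizes and initial condition below are illustrative assumptions, not taken from the article.

    import numpy as np

    nx, alpha, dt, steps = 50, 1.0, 1e-4, 200
    dx = 1.0 / nx
    r = alpha * dt / dx**2

    # Second-difference matrix on the interior points (Dirichlet boundaries).
    L = (np.diag(-2.0 * np.ones(nx - 1)) +
         np.diag(np.ones(nx - 2), 1) +
         np.diag(np.ones(nx - 2), -1))
    I = np.eye(nx - 1)
    A = I - 0.5 * r * L            # implicit half of the update
    B = I + 0.5 * r * L            # explicit half of the update

    x = np.linspace(dx, 1 - dx, nx - 1)
    u = np.sin(np.pi * x)          # initial condition
    for _ in range(steps):
        u = np.linalg.solve(A, B @ u)

    # Compare with the exact solution exp(-pi^2 t) sin(pi x) at t = steps * dt.
    print(np.max(np.abs(u - np.exp(-np.pi**2 * dt * steps) * np.sin(np.pi * x))))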