Gauss–Legendre Method

In numerical analysis and scientific computing, the Gauss–Legendre methods are a family of numerical methods for ordinary differential equations. Gauss–Legendre methods are implicit Runge–Kutta methods. More specifically, they are collocation methods based on the points of Gauss–Legendre quadrature. The Gauss–Legendre method based on ''s'' points has order 2''s''. All Gauss–Legendre methods are A-stable.

The Gauss–Legendre method of order two is the implicit midpoint rule. Its Butcher tableau is:

: \begin{array}{c|c} \tfrac12 & \tfrac12 \\ \hline & 1 \end{array}

The Gauss–Legendre method of order four has Butcher tableau:

: \begin{array}{c|cc} \tfrac12 - \tfrac{\sqrt3}{6} & \tfrac14 & \tfrac14 - \tfrac{\sqrt3}{6} \\ \tfrac12 + \tfrac{\sqrt3}{6} & \tfrac14 + \tfrac{\sqrt3}{6} & \tfrac14 \\ \hline & \tfrac12 & \tfrac12 \end{array}

The Gauss–Legendre method of order six has Butcher tableau:

: \begin{array}{c|ccc} \tfrac12 - \tfrac{\sqrt{15}}{10} & \tfrac{5}{36} & \tfrac29 - \tfrac{\sqrt{15}}{15} & \tfrac{5}{36} - \tfrac{\sqrt{15}}{30} \\ \tfrac12 & \tfrac{5}{36} + \tfrac{\sqrt{15}}{24} & \tfrac29 & \tfrac{5}{36} - \tfrac{\sqrt{15}}{24} \\ \tfrac12 + \tfrac{\sqrt{15}}{10} & \tfrac{5}{36} + \tfrac{\sqrt{15}}{30} & \tfrac29 + \tfrac{\sqrt{15}}{15} & \tfrac{5}{36} \\ \hline & \tfrac{5}{18} & \tfrac49 & \tfrac{5}{18} \end{array}

The computational cost of higher-order Gauss–Legendre methods is usually excessive, and thus they are rarely used.
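The defining property of these tableaux — that an ''s''-stage method reaches order 2''s'' — can be checked numerically. The sketch below (in Python rather than the article's Matlab; the variable names are illustrative) verifies the quadrature order conditions \sum_i b_i c_i^{q-1} = 1/q for the two-stage, order-four coefficients, along with the collocation condition \sum_j a_{ij} c_j = c_i^2/2:

```python
import numpy as np

# Butcher coefficients of the 2-stage (order-4) Gauss-Legendre method.
s3 = np.sqrt(3.0)
A = np.array([[0.25,           0.25 - s3 / 6],
              [0.25 + s3 / 6,  0.25         ]])
b = np.array([0.5, 0.5])
c = np.array([0.5 - s3 / 6, 0.5 + s3 / 6])

# Quadrature order conditions: sum_i b_i c_i^{q-1} = 1/q for q = 1..4.
# A 2-stage method of order 4 must satisfy all of them.
for q in range(1, 5):
    assert abs(b @ c ** (q - 1) - 1.0 / q) < 1e-12

# Collocation (simplifying) condition C(2): sum_j a_ij c_j = c_i^2 / 2.
assert np.allclose(A @ c, c ** 2 / 2)
print("order conditions satisfied")
```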


Intuition

Gauss–Legendre Runge–Kutta (GLRK) methods solve an ordinary differential equation \dot{x} = f(x) with x(0) = x_0. The distinguishing feature of GLRK is the estimation of x(h) - x_0 = \int_0^h f(x(t)) \, dt with Gaussian quadrature:

: x(h) = x(0) + \frac{h}{2} \sum_{i=1}^{\ell} w_i k_i + O(h^{2\ell+1}),

where k_i = f(x(h c_i)) are the sampled velocities, w_i are the quadrature weights, c_i = \tfrac12 (1 + r_i) are the abscissas, and r_i are the roots P_\ell(r_i) = 0 of the Legendre polynomial of degree \ell. A further approximation is needed, as the k_i are still impossible to evaluate exactly. To maintain a truncation error of order O(h^{2\ell+1}), we only need the k_i to order O(h^{2\ell-1}). The Runge–Kutta implicit definition

: k_i = f\left( x_0 + h \sum_{j=1}^{\ell} a_{ij} k_j \right)

is invoked to accomplish this. It is an implicit constraint that must be solved by a root-finding algorithm such as Newton's method. The values of the Runge–Kutta parameters a_{ij} can be determined from a Taylor series expansion in h.
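The relationship between the Legendre roots r_i and the abscissas c_i can be illustrated with a short Python sketch (the helper calls are from NumPy; `leggauss` returns the roots and weights of the degree-\ell Legendre polynomial on [-1, 1]):

```python
import numpy as np

# Roots r_i and weights w_i of Gauss-Legendre quadrature on [-1, 1].
l = 2
r, w = np.polynomial.legendre.leggauss(l)

# Abscissas c_i = (1 + r_i) / 2 map the roots onto [0, 1]; for l = 2
# these are the stage points 1/2 -+ sqrt(3)/6 of the order-4 method.
c = (1 + r) / 2
assert np.allclose(np.sort(c), [0.5 - np.sqrt(3) / 6, 0.5 + np.sqrt(3) / 6])

# The weights sum to 2 on [-1, 1]; after the change of variables the
# update reads x(h) = x(0) + (h/2) * sum_i w_i k_i.
assert abs(w.sum() - 2.0) < 1e-12
print(c, w)
```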


Practical example

The Gauss–Legendre methods are implicit, so in general they cannot be applied exactly. Instead one makes an educated guess of k_i, and then uses Newton's method to converge arbitrarily close to the true solution. Below is a Matlab function which implements the Gauss–Legendre method of order four.

 % starting point
 x = [ 10.5440; 4.1124; 35.8233 ];
 dt = 0.01;
 N = 10000;
 x_series = [x];
 for i = 1:N
     x = gauss_step(x, @lorenz_dynamics, dt, 1e-7, 1, 100);
     x_series = [x_series x];
 end
 plot3( x_series(1,:), x_series(2,:), x_series(3,:) );
 set(gca, 'xtick', [], 'ytick', [], 'ztick', []);
 title('Lorenz Attractor');
 return;
 
 function [td, j] = lorenz_dynamics(state)
     % return a time derivative and a Jacobian of that time derivative
     x = state(1); y = state(2); z = state(3);
     sigma = 10;
     beta = 8/3;
     rho = 28;
     td = [ sigma*(y-x); x*(rho-z)-y; x*y-beta*z ];
     j = [ -sigma, sigma, 0; rho-z, -1, -x; y, x, -beta ];
 end
 
 function x_next = gauss_step( x, dynamics, dt, threshold, damping, max_iterations )
     [d, ~] = size(x);
     sq3 = sqrt(3);
     if damping > 1 || damping <= 0
         error('damping should be between 0 and 1.')
     end
     % Use explicit Euler steps as initial guesses
     [k, ~] = dynamics(x);
     x1_guess = x + (1/2 - sq3/6)*dt*k;
     x2_guess = x + (1/2 + sq3/6)*dt*k;
     [k1, ~] = dynamics(x1_guess);
     [k2, ~] = dynamics(x2_guess);
     a11 = 1/4;
     a12 = 1/4 - sq3/6;
     a21 = 1/4 + sq3/6;
     a22 = 1/4;
     % residual of the implicit stage equations (named "residual" so it
     % does not shadow Matlab's built-in error function, used below)
     residual = @(k1, k2) [ k1 - dynamics(x + (a11*k1 + a12*k2)*dt);
                            k2 - dynamics(x + (a21*k1 + a22*k2)*dt) ];
     er = residual(k1, k2);
     iteration = 1;
     while (norm(er) > threshold && iteration < max_iterations)
         fprintf('Newton iteration %d: error is %f.\n', iteration, norm(er));
         iteration = iteration + 1;
         [~, j1] = dynamics(x + (a11*k1 + a12*k2)*dt);
         [~, j2] = dynamics(x + (a21*k1 + a22*k2)*dt);
         j = [ eye(d) - dt*a11*j1, -dt*a12*j1;
               -dt*a21*j2, eye(d) - dt*a22*j2 ];
         k_next = [k1; k2] - damping * linsolve(j, er);
         k1 = k_next(1:d);
         k2 = k_next(d + (1:d));
         er = residual(k1, k2);
     end
     if norm(er) > threshold
         error('Newton did not converge by %d iterations.', max_iterations);
     end
     x_next = x + dt/2 * (k1 + k2);
 end

This algorithm is surprisingly cheap. The error in k_i can fall below the convergence threshold in as few as 2 Newton steps. The only extra work compared to explicit Runge–Kutta methods is the computation of the Jacobian.
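For readers without Matlab, the same Newton-on-the-stage-equations idea can be sketched in Python. The function name `gauss_step` and its signature mirror the Matlab routine but are otherwise illustrative. As a check, for a linear problem f(x) = \lambda x one step of the order-four method must reproduce its stability function, the (2,2) Padé approximant of the exponential, R(z) = (1 + z/2 + z^2/12)/(1 - z/2 + z^2/12):

```python
import numpy as np

def gauss_step(x, f, jac, dt, tol=1e-12, max_iter=50):
    """One order-4 Gauss-Legendre step, stages solved by Newton iteration."""
    s3 = np.sqrt(3.0)
    A = np.array([[0.25,           0.25 - s3 / 6],
                  [0.25 + s3 / 6,  0.25         ]])
    d = x.size
    k = np.concatenate([f(x), f(x)])   # initial guess: Euler slope twice
    for _ in range(max_iter):
        y1 = x + dt * (A[0, 0] * k[:d] + A[0, 1] * k[d:])  # stage points
        y2 = x + dt * (A[1, 0] * k[:d] + A[1, 1] * k[d:])
        res = np.concatenate([k[:d] - f(y1), k[d:] - f(y2)])
        if np.linalg.norm(res) < tol:
            break
        I = np.eye(d)
        J = np.block([[I - dt * A[0, 0] * jac(y1), -dt * A[0, 1] * jac(y1)],
                      [-dt * A[1, 0] * jac(y2), I - dt * A[1, 1] * jac(y2)]])
        k = k - np.linalg.solve(J, res)  # full Newton update (damping = 1)
    return x + dt / 2 * (k[:d] + k[d:])

# Linear test problem dx/dt = lam * x: one step equals R(lam*dt) * x0,
# where R is the (2,2) Pade approximant of exp.
lam, dt = -2.0, 0.1
x0 = np.array([1.0])
xn = gauss_step(x0, lambda x: lam * x, lambda x: np.array([[lam]]), dt)
z = lam * dt
R = (1 + z / 2 + z ** 2 / 12) / (1 - z / 2 + z ** 2 / 12)
assert abs(xn[0] - R) < 1e-10
```

For a linear right-hand side a single Newton update is exact, which is why the loop terminates almost immediately here; nonlinear problems such as the Lorenz system above typically need a couple of iterations per step.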


Time-symmetric variants

At the cost of adding an additional implicit relation, these methods can be adapted to have time-reversal symmetry. In these methods, the averaged position (x_f + x_i)/2 is used in computing k_i instead of just the initial position x_i as in standard Runge–Kutta methods. The method of order 2 is just an implicit midpoint method.

: k_1 = f\left( \frac{x_f + x_i}{2} \right)
: x_f = x_i + h k_1

The method of order 4 with 2 stages is as follows.

: k_1 = f\left( \frac{x_f + x_i}{2} - \frac{\sqrt{3}}{6} h k_2 \right)
: k_2 = f\left( \frac{x_f + x_i}{2} + \frac{\sqrt{3}}{6} h k_1 \right)
: x_f = x_i + \frac{h}{2}( k_1 + k_2 )

The method of order 6 with 3 stages is as follows.

: k_1 = f\left( \frac{x_f + x_i}{2} - \frac{\sqrt{15}}{15} h k_2 - \frac{\sqrt{15}}{30} h k_3 \right)
: k_2 = f\left( \frac{x_f + x_i}{2} + \frac{\sqrt{15}}{24} h k_1 - \frac{\sqrt{15}}{24} h k_3 \right)
: k_3 = f\left( \frac{x_f + x_i}{2} + \frac{\sqrt{15}}{30} h k_1 + \frac{\sqrt{15}}{15} h k_2 \right)
: x_f = x_i + \frac{h}{18}( 5 k_1 + 8 k_2 + 5 k_3 )
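Time-reversal symmetry means that stepping forward by h and then backward by -h returns exactly to the starting point (up to the tolerance of the implicit solve). The Python sketch below implements the order-4 variant; the function name `symmetric_step` is illustrative, and the stage equations are solved by plain fixed-point iteration, which suffices for small h (Newton's method would be the robust choice):

```python
import numpy as np

def symmetric_step(x, f, h, iters=100):
    """Order-4 time-symmetric Gauss-Legendre step via fixed-point iteration."""
    s3 = np.sqrt(3.0)
    k1 = k2 = f(x)
    xf = x + h * k1                      # Euler guess for the final point
    for _ in range(iters):
        mid = (x + xf) / 2               # averaged position (x_i + x_f)/2
        k1 = f(mid - (s3 / 6) * h * k2)
        k2 = f(mid + (s3 / 6) * h * k1)
        xf = x + (h / 2) * (k1 + k2)
    return xf

# Harmonic oscillator in (q, p): step forward by h, then by -h, and we
# should land back on the initial condition.
f = lambda x: np.array([x[1], -x[0]])
x0 = np.array([1.0, 0.0])
x1 = symmetric_step(x0, f, 0.1)
x2 = symmetric_step(x1, f, -0.1)
assert np.allclose(x2, x0, atol=1e-10)
```

The forward and backward stage equations map into each other under (k_1, k_2) \to (k_2, k_1) and x_i \leftrightarrow x_f, which is the algebraic content of the time-reversal symmetry being tested here.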

