Laguerre's Method
In numerical analysis, Laguerre's method is a root-finding algorithm tailored to polynomials. In other words, Laguerre's method can be used to numerically solve the equation p(x) = 0 for a given polynomial p(x). One of the most useful properties of this method is that it is, from extensive empirical study, very close to being a "sure-fire" method: it is almost guaranteed to converge to ''some'' root of the polynomial, no matter what initial guess is chosen. However, for computer computation, more efficient methods are known with which it is guaranteed to find all roots or all real roots (see real-root isolation). This method is named in honour of Edmond Laguerre, a French mathematician.


Definition

The algorithm of the Laguerre method to find one root of a polynomial p(x) of degree n is:
* Choose an initial guess x_0
* For k = 0, 1, 2, \ldots
** If p(x_k) is very small, exit the loop
** Calculate G = \frac{p'(x_k)}{p(x_k)}
** Calculate H = G^2 - \frac{p''(x_k)}{p(x_k)}
** Calculate a = \frac{n}{G \pm \sqrt{(n-1)(nH - G^2)}}, where the sign is chosen to give the denominator with the larger absolute value, to avoid catastrophic cancellation.
** Set x_{k+1} = x_k - a
* Repeat until ''a'' is small enough or the maximum number of iterations has been reached.

If a root has been found, the corresponding linear factor can be removed from ''p''. This deflation step reduces the degree of the polynomial by one, so that eventually approximations for all roots of ''p'' can be found. Note however that deflation can lead to approximate factors that differ significantly from the corresponding exact factors. This error is smallest if the roots are found in order of increasing magnitude.
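The iteration above can be sketched in Python. This is a minimal illustrative implementation, not a production routine: the function name and interface are this sketch's own choices, and coefficients are assumed to be given with the highest degree first.

```python
import cmath

def laguerre(coeffs, x0, tol=1e-12, max_iter=100):
    """Find one root of the polynomial with the given coefficients
    (highest degree first) using Laguerre's method."""
    n = len(coeffs) - 1  # degree of the polynomial
    x = complex(x0)
    for _ in range(max_iter):
        # Evaluate p, p' and p''/2 at x with a single Horner-style pass.
        p = pp = ppp = 0j
        for c in coeffs:
            ppp = ppp * x + pp
            pp = pp * x + p
            p = p * x + c
        ppp *= 2  # the recurrence above accumulates p''/2
        if abs(p) < tol:
            return x
        G = pp / p
        H = G * G - ppp / p
        root = cmath.sqrt((n - 1) * (n * H - G * G))
        # Choose the sign giving the denominator of larger magnitude,
        # avoiding catastrophic cancellation.
        d_plus, d_minus = G + root, G - root
        a = n / (d_plus if abs(d_plus) >= abs(d_minus) else d_minus)
        x -= a
        if abs(a) < tol:
            return x
    return x
```

For example, `laguerre([1, -6, 11, -6], 0.0)` (the polynomial x^3 - 6x^2 + 11x - 6, with roots 1, 2, 3) converges to the root 1 in a few iterations.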


Derivation

The fundamental theorem of algebra states that every ''n''th degree polynomial p can be written in the form

:p(x) = C(x - x_1)(x - x_2)\cdots(x - x_n),

where x_k (k = 1, 2, \ldots, n) are the roots of the polynomial. If we take the natural logarithm of both sides, we find that

:\ln |p(x)| = \ln |C| + \ln |x - x_1| + \ln |x - x_2| + \cdots + \ln |x - x_n|.

Denote the derivative by

:G = \frac{d}{dx} \ln |p(x)| = \frac{1}{x - x_1} + \frac{1}{x - x_2} + \cdots + \frac{1}{x - x_n},

and the negated second derivative by

:H = -\frac{d^2}{dx^2} \ln |p(x)| = \frac{1}{(x - x_1)^2} + \frac{1}{(x - x_2)^2} + \cdots + \frac{1}{(x - x_n)^2}.

We then make what Acton calls a 'drastic set of assumptions': that the root we are looking for, say x_1, is a certain distance away from our guess x, and all the other roots are clustered together some distance away. If we denote these distances by

:a = x - x_1

and

:b = x - x_i,\quad i = 2, 3, \ldots, n,

then our equation for G may be written as

:G = \frac{1}{a} + \frac{n - 1}{b}

and the expression for H becomes

:H = \frac{1}{a^2} + \frac{n - 1}{b^2}.

Solving these equations for a, we find that

:a = \frac{n}{G \pm \sqrt{(n-1)(nH - G^2)}},

where the square root of a complex number is chosen to produce the larger absolute value of the denominator, or equivalently, to satisfy

:\operatorname{Re}\,(\overline{G}\,\sqrt{(n-1)(nH - G^2)}\,) > 0,

where Re denotes the real part of a complex number, and \overline{G} is the complex conjugate of G; or

:a = \frac{p(x)}{p'(x)} \cdot \left( \frac{1}{n} + \frac{n-1}{n}\,\sqrt{1 - \frac{n}{n-1}\cdot\frac{p(x)\,p''(x)}{p'(x)^2}} \right)^{-1},

where the square root of a complex number is chosen to have a non-negative real part.

For small values of p(x) this formula differs from the offset of the third-order Halley's method by an error of third order in p(x), so convergence close to a root will be cubic as well.

Note that, even if the 'drastic set of assumptions' does not work for some particular polynomial p(x), it can be transformed into a related polynomial for which the assumptions are correct, e.g. by shifting the origin towards a suitable complex number w, giving q(x) = p(x + w), to give distinct roots distinctly different magnitudes if necessary (which it will be if some roots are complex conjugates), and then getting the related polynomial from q by repeatedly applying the root squaring transformation used in Graeffe's method enough times to make the smaller roots significantly smaller than the largest root (and so, clustered in comparison); the approximate root from Graeffe's method can then be used to start the new iteration for Laguerre's method on the related polynomial. An approximate root for p(x) may then be obtained straightforwardly from that for the related polynomial.

If we make the stronger assumption that the terms in G corresponding to the roots x_2, x_3, \ldots, x_n are negligibly small in comparison to the term corresponding to the root x_1, this leads to Newton's method.
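The two identities underlying the derivation, G = p'(x)/p(x) = \sum_k 1/(x - x_k) and H = G^2 - p''(x)/p(x) = \sum_k 1/(x - x_k)^2, can be checked numerically. In the sketch below the sample polynomial, its roots, and the evaluation point are arbitrary choices made for this check; the derivatives of the product form are computed exactly by summing over products of the remaining factors.

```python
# Verify G = p'(x)/p(x) = sum 1/(x - x_k) and
#        H = G^2 - p''(x)/p(x) = sum 1/(x - x_k)^2
# for a sample polynomial p(x) = (x - 1)(x - 2 - i)(x - 2 + i).
roots = [1.0, 2.0 + 1.0j, 2.0 - 1.0j]
x = 0.5 + 0.25j  # arbitrary evaluation point, away from the roots

def product(factors):
    out = 1.0 + 0.0j
    for f in factors:
        out *= f
    return out

m = len(roots)
p = product(x - r for r in roots)
# p'(x) = sum over j of the product of (x - x_k) for k != j
dp = sum(product(x - r for i, r in enumerate(roots) if i != j)
         for j in range(m))
# p''(x) = sum over ordered pairs j != l of the product over k not in {j, l}
ddp = sum(product(x - r for i, r in enumerate(roots) if i not in (j, l))
          for j in range(m) for l in range(m) if l != j)

G = dp / p
H = G * G - ddp / p

assert abs(G - sum(1 / (x - r) for r in roots)) < 1e-12
assert abs(H - sum(1 / (x - r) ** 2 for r in roots)) < 1e-12
```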


Properties

If x is a simple root of the polynomial p(x), then Laguerre's method converges cubically whenever the initial guess x_0 is close enough to the root x. On the other hand, if x is a multiple root then the convergence is only linear. This comes at the cost of calculating values for the polynomial and its first and second derivatives at each stage of the iteration.

A major advantage of Laguerre's method is that it is almost guaranteed to converge to ''some'' root of the polynomial ''no matter where the initial approximation is chosen''. This is in contrast to other methods such as the Newton–Raphson method, which may fail to converge for poorly chosen initial guesses. Laguerre's method may even converge to a complex root of the polynomial, because the argument of the square root taken in the calculation of a above may be negative. This may be considered an advantage or a liability depending on the application in which the method is used.

Empirical evidence has shown that convergence failure is extremely rare, making this a good candidate for a general-purpose polynomial root-finding algorithm. However, given the fairly limited theoretical understanding of the algorithm, many numerical analysts are hesitant to use it as such, and prefer better-understood methods such as the Jenkins–Traub algorithm, for which more solid theory has been developed. Nevertheless, the algorithm is fairly simple to use compared to these other "sure-fire" methods, easy enough to be used by hand or with the aid of a pocket calculator when an automatic computer is unavailable. The speed at which the method converges means that one is only very rarely required to compute more than a few iterations to obtain high accuracy.
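Convergence to a complex root from a real starting point can be seen in a small worked example (the polynomial and initial guess below are chosen for illustration): for p(x) = x^2 + 1 with the real initial guess x_0 = 0.5, a single Laguerre step lands on the root i.

```python
import cmath

# One Laguerre step for p(x) = x^2 + 1 from the real guess x0 = 0.5.
n = 2
x = 0.5
p = x * x + 1          # p(0.5)  = 1.25
dp = 2 * x             # p'(0.5) = 1.0
ddp = 2                # p''(x)  = 2

G = dp / p                                      # 0.8
H = G * G - ddp / p                             # 0.64 - 1.6 = -0.96
root = cmath.sqrt((n - 1) * (n * H - G * G))    # sqrt(-2.56) = 1.6i
d_plus, d_minus = G + root, G - root
# Both denominators have equal magnitude here; either choice gives a root.
a = n / (d_plus if abs(d_plus) >= abs(d_minus) else d_minus)  # 0.5 - i
x_new = x - a   # i, an exact root of x^2 + 1
```

The square root of the negative quantity -2.56 is what carries the iterate off the real axis, illustrating how the method reaches complex roots that Newton's method, started from a real guess, can never find for this polynomial.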

