Dandelin–Gräffe Method

In mathematics, Graeffe's method or Dandelin–Lobachevsky–Graeffe method is an algorithm for finding all of the roots of a polynomial. It was developed independently by Germinal Pierre Dandelin in 1826 and Nikolai Ivanovich Lobachevsky in 1834. In 1837 Karl Heinrich Gräffe also discovered the principal idea of the method. The method separates the roots of a polynomial by squaring them repeatedly. This squaring of the roots is done implicitly, that is, by working only on the coefficients of the polynomial. Finally, Viète's formulas are used in order to approximate the roots.


Dandelin–Graeffe iteration

Let p(x) be a polynomial of degree n:

:p(x) = (x-x_1)\cdots(x-x_n).

Then

:p(-x) = (-1)^n (x+x_1)\cdots(x+x_n).

Let q(x) be the polynomial which has the squares x_1^2, \cdots, x_n^2 as its roots,

:q(x) = \left(x-x_1^2\right)\cdots\left(x-x_n^2\right).

Then we can write:

:\begin{align} q(x^2) &= \left(x^2-x_1^2\right)\cdots\left(x^2-x_n^2\right) \\ &= (x-x_1)(x+x_1)\cdots(x-x_n)(x+x_n) \\ &= \left\{(x-x_1)\cdots(x-x_n)\right\}\times\left\{(x+x_1)\cdots(x+x_n)\right\} \\ &= p(x)\times\left\{(x+x_1)\cdots(x+x_n)\right\} \\ &= p(x)\times\left\{(-1)^n p(-x)\right\} \\ &= (-1)^n p(x)\,p(-x) \end{align}

q(x) can now be computed by algebraic operations on the coefficients of the polynomial p(x) alone. Let

:\begin{align} p(x) &= x^n + a_1x^{n-1} + \cdots + a_{n-1}x + a_n \\ q(x) &= x^n + b_1x^{n-1} + \cdots + b_{n-1}x + b_n \end{align}

then the coefficients are related by

:b_k = (-1)^k a_k^2 + 2\sum_{j=0}^{k-1} (-1)^j\,a_j a_{2k-j}, \qquad a_0 = b_0 = 1,

where one sets a_m = 0 for m > n.

Graeffe observed that if one separates p(x) into its odd and even parts:

:p(x) = p_e\left(x^2\right) + x\,p_o\left(x^2\right),

then one obtains a simplified algebraic expression for q(x):

:q(x) = (-1)^n\left(p_e(x)^2 - x\,p_o(x)^2\right).

This expression involves the squaring of two polynomials of only half the degree, and is therefore used in most implementations of the method.

Iterating this procedure several times separates the roots with respect to their magnitudes. Repeating k times gives a polynomial of degree n:

:q^k(y) = y^n + a^k_1\,y^{n-1} + \cdots + a^k_{n-1}\,y + a^k_n

with roots

:y_1 = x_1^{2^k},\; y_2 = x_2^{2^k},\; \dots,\; y_n = x_n^{2^k}.

If the magnitudes of the roots of the original polynomial were separated by some factor \rho > 1, that is, |x_k| \ge \rho\,|x_{k+1}|, then the roots of the k-th iterate are separated by a fast growing factor

:\rho^{2^k} \ge 1 + 2^k(\rho - 1).
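As a concrete illustration, the step above can be sketched in a few lines of Python (the function name graeffe_step, the use of NumPy, and the highest-degree-first coefficient convention are choices made for this sketch, not part of any standard interface). For brevity it forms (-1)^n p(x)\,p(-x) directly rather than squaring the half-degree even and odd parts as most implementations do:

```python
import numpy as np

def graeffe_step(p):
    """One Dandelin-Graeffe step on the coefficient vector of a monic
    polynomial (highest degree first).  Returns the coefficients of q,
    whose roots are the squares of the roots of p, using
    q(x^2) = (-1)^n p(x) p(-x)."""
    n = len(p) - 1
    signs = (-1.0) ** (n - np.arange(n + 1))      # sign flips turning p into p(-x)
    q_of_x2 = (-1) ** n * np.convolve(p, signs * np.asarray(p, dtype=float))
    return q_of_x2[::2]                           # odd powers vanish; keep q(y), y = x^2

# Example: p(x) = x^2 - 3x + 2 has roots 1 and 2;
# one step yields y^2 - 5y + 4, with roots 1 and 4.
print(graeffe_step([1.0, -3.0, 2.0]))             # -> [ 1. -5.  4.]
```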


Classical Graeffe's method

Next the Vieta relations are used:

:\begin{align} a^k_1 &= -(y_1+y_2+\cdots+y_n) \\ a^k_2 &= y_1 y_2 + y_1 y_3 + \cdots + y_{n-1} y_n \\ &\;\vdots \\ a^k_n &= (-1)^n (y_1 y_2 \cdots y_n). \end{align}

If the roots x_1,\dots,x_n are sufficiently separated, say by a factor \rho > 1, |x_m| \ge \rho\,|x_{m+1}|, then the iterated powers y_1, y_2, \dots, y_n of the roots are separated by the factor \rho^{2^k}, which quickly becomes very big. The coefficients of the iterated polynomial can then be approximated by their leading term,

:a^k_1 \approx -y_1
:a^k_2 \approx y_1 y_2

and so on, implying

:y_1 \approx -a^k_1,\; y_2 \approx -a^k_2/a^k_1,\; \dots,\; y_n \approx -a^k_n/a^k_{n-1}.

Finally, logarithms are used in order to find the absolute values of the roots of the original polynomial. These magnitudes alone are already useful to generate meaningful starting points for other root-finding methods.

To also obtain the angles of these roots, a multitude of methods has been proposed, the simplest being to successively compute the square root of a (possibly complex) root of q^m(y), with m ranging from k down to 1, and to test which of the two sign variants is a root of q^{m-1}(x). Before continuing to the roots of q^{m-2}(x), it might be necessary to numerically improve the accuracy of the root approximations for q^{m-1}(x), for instance by Newton's method.
Graeffe's method works best for polynomials with simple real roots, though it can be adapted for polynomials with complex roots and coefficients, and for roots with higher multiplicity. For instance, it has been observed that for a root x_{\ell+1} = x_{\ell+2} = \dots = x_{\ell+d} with multiplicity d, the fractions

:\left|\frac{\left(a^k_{\ell+i}\right)^2}{a^{k+1}_{\ell+i}}\right|

tend to \binom{d}{i} for i = 0, 1, \dots, d. This makes it possible to estimate the multiplicity structure of the set of roots.

From a numerical point of view, this method is problematic since the coefficients of the iterated polynomials very quickly span many orders of magnitude, which implies serious numerical errors. A second, but minor, concern is that many different polynomials lead to the same Graeffe iterates.
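Assuming simple roots with well-separated magnitudes, the classical recipe can be sketched as follows in Python, reusing graeffe_step from the previous section (the function name and the fixed iteration count are illustrative choices):

```python
import numpy as np

def root_magnitudes(p, k=5):
    """Estimate the root magnitudes of a monic polynomial (coefficients
    highest degree first) by k Graeffe steps followed by the coefficient
    ratios |y_m| ~ |a_m / a_(m-1)|, undone via 2^k-th roots.  Logarithms
    keep the back-transformation numerically tame."""
    q = np.asarray(p, dtype=float)
    for _ in range(k):
        q = graeffe_step(q)
    logs = np.log(np.abs(q))                  # log|a_0|, ..., log|a_n|
    return np.exp(np.diff(logs) / 2.0 ** k)   # |x_m| = |y_m|^(2^-k), largest first

# Roots of x^3 - 6x^2 + 11x - 6 are 1, 2, 3:
print(root_magnitudes([1.0, -6.0, 11.0, -6.0]))   # -> approx. [3. 2. 1.]
```

For larger k the raw coefficients quickly overflow double precision, which is precisely the numerical problem discussed above.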


Tangential Graeffe method

This method replaces the numbers by truncated power series of degree 1, also known as dual numbers. Symbolically, this is achieved by introducing an "algebraic infinitesimal" \varepsilon with the defining property \varepsilon^2 = 0. Then the polynomial p(x+\varepsilon) = p(x) + \varepsilon\,p'(x) has roots x_m - \varepsilon, with powers

:::(x_m-\varepsilon)^{2^k} = x_m^{2^k} - \varepsilon\,2^k\,x_m^{2^k-1} = y_m + \varepsilon\,\dot y_m.

Thus the value of x_m is easily obtained as the fraction x_m = -2^k\,\tfrac{y_m}{\dot y_m}.

This kind of computation with infinitesimals is easy to implement, analogously to the computation with complex numbers. If one assumes complex coordinates or an initial shift by some randomly chosen complex number, then all roots of the polynomial will be distinct and consequently recoverable with the iteration.
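A minimal Python sketch of such dual-number arithmetic (class and function names are illustrative; only the operations needed for polynomial evaluation are implemented):

```python
class Dual:
    """Truncated power series a + b*eps of degree 1, with eps**2 == 0."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.a + other.a, self.b + other.b)
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # (a1 + b1*eps)(a2 + b2*eps) = a1*a2 + (a1*b2 + b1*a2)*eps, since eps^2 = 0
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

def horner(coeffs, x):
    """Evaluate a polynomial (coefficients highest degree first) at a Dual x."""
    result = Dual(0.0)
    for c in coeffs:
        result = result * x + Dual(c)
    return result

# p(x) = x^2 - 3x + 2:  p(2 + eps) = p(2) + eps*p'(2) = 0 + eps*1
v = horner([1.0, -3.0, 2.0], Dual(2.0, 1.0))
print(v.a, v.b)   # -> 0.0 1.0
```

Running the Graeffe iteration with coefficients of this type propagates the pairs y_m + \varepsilon\,\dot y_m automatically, just as complex arithmetic propagates real and imaginary parts.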


Renormalization

Every polynomial can be scaled in domain and range such that in the resulting polynomial the first and the last coefficient have size one. If the size of the inner coefficients is bounded by M, then the size of the inner coefficients after one stage of the Graeffe iteration is bounded by nM^2. After k stages one gets the bound n^{2^k-1}M^{2^k} for the inner coefficients.

To overcome the limit posed by the growth of the powers, Malajovich–Zubelli propose to represent coefficients and intermediate results in the kth stage of the algorithm by a scaled polar form

:::c = \alpha\,e^{-2^k r},

where \alpha = \frac{c}{|c|} is a complex number of unit length and r = -2^{-k}\log|c| is a positive real. Splitting off the power 2^k in the exponent reduces the absolute value of c to the corresponding dyadic root. Since this preserves the magnitude of the (representation of the) initial coefficients, this process was named renormalization.

Multiplication of two numbers of this type is straightforward, whereas addition is performed following the factorization c_3 = c_1 + c_2 = |c_1|\cdot\left(\alpha_1 + \alpha_2\,\tfrac{|c_2|}{|c_1|}\right), where c_1 is chosen as the larger of the two numbers, that is, r_1 \le r_2. Thus

:::\alpha_3 = \tfrac{s}{|s|} \quad\text{and}\quad r_3 = r_1 + 2^{-k}\,\log\tfrac{1}{|s|} \quad\text{with}\quad s = \alpha_1 + \alpha_2\,e^{2^k(r_1-r_2)}.

The coefficients a_0, a_1, \dots, a_n of the final stage k of the Graeffe iteration, for some reasonably large value of k, are represented by pairs (\alpha_m, r_m), m = 0, \dots, n. By identifying the corners of the convex envelope of the point set \{(m, r_m) : m = 0, \dots, n\} one can determine the multiplicities of the roots of the polynomial. Combining this renormalization with the tangent iteration, one can extract the roots of the original polynomial directly from the coefficients at the corners of the envelope.
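A Python sketch of this renormalized arithmetic (function names are illustrative; exact cancellation, where s = 0, is not handled):

```python
import math

def to_scaled_polar(c, k):
    """Scaled polar form (alpha, r) of a nonzero complex c at stage k,
    so that c = alpha * exp(-2^k * r) with |alpha| = 1."""
    return c / abs(c), -math.log(abs(c)) / 2.0 ** k

def sp_mul(x, y):
    """Multiplication is straightforward: multiply unit parts, add the r's."""
    return x[0] * y[0], x[1] + y[1]

def sp_add(x, y, k):
    """Addition via c1 + c2 = |c1| * (alpha1 + alpha2 * |c2|/|c1|),
    where x is relabeled to hold the larger magnitude (smaller r)."""
    if x[1] > y[1]:
        x, y = y, x
    s = x[0] + y[0] * math.exp(2.0 ** k * (x[1] - y[1]))   # |c2|/|c1| <= 1
    return s / abs(s), x[1] - math.log(abs(s)) / 2.0 ** k  # r3 = r1 + 2^-k log(1/|s|)
```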


See also

* Root-finding algorithm

