Householder's Method

In mathematics, and more specifically in numerical analysis, Householder's methods are a class of root-finding algorithms that are used for functions of one real variable with continuous derivatives up to some order ''d'' + 1. Each of these methods is characterized by the number ''d'', which is known as the ''order'' of the method. The algorithm is iterative and has a rate of convergence of ''d'' + 1. These methods are named after the American mathematician Alston Scott Householder.


Method

Householder's method is a numerical algorithm for solving the nonlinear equation ''f''(''x'') = 0. In this case, the function ''f'' has to be a function of one real variable. The method consists of a sequence of iterations

:x_{n+1} = x_n + d\; \frac{(1/f)^{(d-1)}(x_n)}{(1/f)^{(d)}(x_n)}

beginning with an initial guess x_0.

If ''f'' is a (''d'' + 1) times continuously differentiable function and ''a'' is a zero of ''f'' but not of its derivative, then, in a neighborhood of ''a'', the iterates x_n satisfy:

:|x_{n+1} - a| \le K \cdot |x_n - a|^{d+1}, for some K > 0.

This means that the iterates converge to the zero if the initial guess is sufficiently close, and that the convergence has order ''d'' + 1. Despite their order of convergence, these methods are not widely used because the gain in precision is not commensurate with the rise in effort for large ''d''. The ''Ostrowski index'' expresses the error reduction in the number of function evaluations instead of the iteration count.
* For polynomials, the evaluation of the first ''d'' derivatives of ''f'' at x_n using the Horner method has an effort of ''d'' + 1 polynomial evaluations. Since ''n''(''d'' + 1) evaluations over ''n'' iterations give an error exponent of (''d'' + 1)^n, the exponent for one function evaluation is \sqrt[d+1]{d+1}, numerically approximately 1.4142, 1.4422, 1.4142, 1.3797 for ''d'' = 1, 2, 3, 4, and falling after that. By this criterion, the case ''d'' = 2 (Halley's method) is the optimal value of ''d''.
* For general functions the derivative evaluation using the Taylor arithmetic of automatic differentiation requires the equivalent of (''d'' + 1)(''d'' + 2)/2 function evaluations. One function evaluation thus reduces the error by an exponent of \sqrt[(d+1)(d+2)/2]{d+1}, which is \sqrt[3]{2} \approx 1.2599 for Newton's method, \sqrt[6]{3} \approx 1.2009 for Halley's method, and falling towards 1, i.e. linear convergence, for the higher order methods.
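The general iteration can be sketched in a few lines of code. The helper names and the use of the Leibniz rule to obtain the derivatives of 1/''f'' from those of ''f'' are illustrative choices, not part of the original text; the test function is Newton's classic example f(x) = x^3 - 2x - 5 from the Example section below.

```python
import math

def recip_derivs(fd, x, n):
    """Values g, g', ..., g^(n) of g = 1/f at x, from the derivatives of f.

    fd: list of callables [f, f', f'', ...] with at least n+1 entries.
    Differentiating f*g = 1 m times (Leibniz rule) gives, for m >= 1,
        sum_{k=0..m} C(m,k) f^(k) g^(m-k) = 0,
    which we solve for g^(m).
    """
    g = [1.0 / fd[0](x)]
    for m in range(1, n + 1):
        s = sum(math.comb(m, k) * fd[k](x) * g[m - k] for k in range(1, m + 1))
        g.append(-s / fd[0](x))
    return g

def householder(fd, x0, d, steps):
    """Order-d Householder iteration: x <- x + d * g^(d-1)(x) / g^(d)(x), g = 1/f."""
    x = x0
    for _ in range(steps):
        if fd[0](x) == 0.0:      # landed exactly on a zero: done
            return x
        g = recip_derivs(fd, x, d)
        x = x + d * g[d - 1] / g[d]
    return x

# f(x) = x^3 - 2x - 5 and its derivatives; the root is near 2.
fd = [lambda x: x**3 - 2*x - 5,
      lambda x: 3*x**2 - 2,
      lambda x: 6*x,
      lambda x: 6.0,
      lambda x: 0.0]

for d in (1, 2, 3):
    print(d, householder(fd, 2.0, d, 8))  # all orders converge to ~2.0945514815423265
```

Higher orders reach full double precision in fewer iterations, but each iteration costs more derivative evaluations, which is exactly the trade-off the Ostrowski index quantifies.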


Motivation

Suppose ''f'' is analytic in a neighborhood of ''a'' and f(a) = 0. Then ''f'' has a Taylor series at ''a'' and its constant term is zero. Because this constant term is zero, the function f(x)/(x-a) will have a Taylor series at ''a'' and, when f'(a) \ne 0, its constant term will not be zero. Because that constant term is not zero, it follows that the reciprocal (x-a)/f(x) has a Taylor series at ''a'', which we will write as

:\sum_{k=0}^{+\infty} c_k (x-a)^k

and its constant term c_0 will not be zero. Using that Taylor series we can write

:\frac{1}{f(x)} = \frac{c_0}{x-a} + \sum_{k=1}^{+\infty} c_k (x-a)^{k-1}\,.

We can compute its ''d''-th derivative:

:\left(\frac{1}{f}\right)^{(d)}(x) = \frac{(-1)^d\, d!\, c_0}{(x-a)^{d+1}} + \sum_{k=d+1}^{+\infty} \frac{(k-1)!}{(k-1-d)!}\, c_k (x-a)^{k-1-d} = \frac{(-1)^d\, d!\, c_0}{(x-a)^{d+1}} \left(1 + \mathcal{O}\left((x-a)^{d+1}\right)\right)\,.

Conveniently, the terms for 1 \le k \le d have vanished. We thus get that the ratio

:d\,\frac{(1/f)^{(d-1)}(x)}{(1/f)^{(d)}(x)} = d\,\frac{\dfrac{(-1)^{d-1}\,(d-1)!\, c_0}{(x-a)^{d}}}{\dfrac{(-1)^{d}\, d!\, c_0}{(x-a)^{d+1}}}\left(\frac{1 + \mathcal{O}\left((x-a)^{d}\right)}{1 + \mathcal{O}\left((x-a)^{d+1}\right)}\right) = -(x-a)\left(1 + \mathcal{O}\left((x-a)^{d}\right)\right)\,.

If ''a'' is the zero of ''f'' that is closest to ''x'', then the second factor goes to 1 as ''d'' goes to infinity and x + d\,\frac{(1/f)^{(d-1)}(x)}{(1/f)^{(d)}(x)} goes to ''a''. This suggests that Householder's iteration might be a good convergent iteration. The actual proof of the convergence is also based on this idea.


The methods of lower order

Householder's method of order 1 is just Newton's method, since:

:\begin{align} x_{n+1} &= x_n + 1\,\frac{(1/f)(x_n)}{(1/f)^{(1)}(x_n)} \\ &= x_n + \frac{1}{f(x_n)}\cdot\left(\frac{-f'(x_n)}{f(x_n)^2}\right)^{-1} \\ &= x_n - \frac{f(x_n)}{f'(x_n)}. \end{align}

For Householder's method of order 2 one gets Halley's method, since the identities

:\textstyle (1/f)'(x) = -\frac{f'(x)}{f(x)^2}

and

:\textstyle (1/f)''(x) = -\frac{f''(x)}{f(x)^2} + 2\,\frac{f'(x)^2}{f(x)^3}

result in

:\begin{align} x_{n+1} &= x_n + 2\,\frac{(1/f)'(x_n)}{(1/f)''(x_n)} \\ &= x_n + \frac{2\,f(x_n)\,f'(x_n)}{f(x_n)\,f''(x_n) - 2\,f'(x_n)^2} \\ &= x_n - \frac{2\,f(x_n)\,f'(x_n)}{2\,f'(x_n)^2 - f(x_n)\,f''(x_n)} \\ &= x_n + h_n\;\frac{1}{1 + \frac{1}{2}\,(f''/f')(x_n)\,h_n}. \end{align}

In the last line, h_n = -\tfrac{f(x_n)}{f'(x_n)} is the update of the Newton iteration at the point x_n. This line was added to demonstrate where the difference to the simple Newton's method lies. The third order method is obtained from the identity of the third order derivative of 1/f,

:\textstyle (1/f)'''(x) = -\frac{f'''(x)}{f(x)^2} + 6\,\frac{f'(x)\,f''(x)}{f(x)^3} - 6\,\frac{f'(x)^3}{f(x)^4},

and has the formula

:\begin{align} x_{n+1} &= x_n + 3\,\frac{(1/f)''(x_n)}{(1/f)'''(x_n)} \\ &= x_n - \frac{6\,f(x_n)\,f'(x_n)^2 - 3\,f(x_n)^2 f''(x_n)}{6\,f'(x_n)^3 - 6\,f(x_n)\,f'(x_n)\,f''(x_n) + f(x_n)^2\,f'''(x_n)} \\ &= x_n + h_n\;\frac{1 + \frac{1}{2}\,(f''/f')(x_n)\,h_n}{1 + (f''/f')(x_n)\,h_n + \frac{1}{6}\,(f'''/f')(x_n)\,h_n^2} \end{align}

and so on.
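The three closed-form updates can be written out directly as code. The function names are illustrative; as a test function we take Newton's classic example f(x) = x^3 - 2x - 5, also used in the Example section below.

```python
def newton_step(f, f1, x):          # order 1
    return x - f(x) / f1(x)

def halley_step(f, f1, f2, x):      # order 2
    return x - 2*f(x)*f1(x) / (2*f1(x)**2 - f(x)*f2(x))

def order3_step(f, f1, f2, f3, x):  # order 3
    num = 6*f(x)*f1(x)**2 - 3*f(x)**2*f2(x)
    den = 6*f1(x)**3 - 6*f(x)*f1(x)*f2(x) + f(x)**2*f3(x)
    return x - num / den

# f(x) = x^3 - 2x - 5 and its first three derivatives.
f  = lambda x: x**3 - 2*x - 5
f1 = lambda x: 3*x**2 - 2
f2 = lambda x: 6*x
f3 = lambda x: 6.0

x = 2.0
for _ in range(3):
    x = order3_step(f, f1, f2, f3, x)
print(x)  # close to the root 2.0945514815423265...
```

Three steps of the third-order method already reach double precision here, versus roughly five or six Newton steps from the same starting point.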


Example

The first problem solved by Newton with the Newton–Raphson–Simpson method was the polynomial equation y^3 - 2y - 5 = 0. He observed that there should be a solution close to 2. Replacing y = x + 2 transforms the equation into

:0 = f(x) = -1 + 10x + 6x^2 + x^3.

The Taylor series of the reciprocal function 1/f starts with

:\begin{align} 1/f(x) = & -1 - 10\,x - 106\,x^2 - 1121\,x^3 - 11856\,x^4 - 125392\,x^5 \\ & - 1326177\,x^6 - 14025978\,x^7 - 148342234\,x^8 - 1568904385\,x^9 \\ & - 16593123232\,x^{10} + O(x^{11}). \end{align}

The result of applying Householder's methods of various orders at x_0 = 0 is also obtained by dividing neighboring coefficients of the latter power series: after just one iteration step the order-''d'' method yields x_1 = c_{d-1}/c_d, where c_k denotes the coefficient of x^k above. For example, x_1 = 1/10 = 0.1 for ''d'' = 1, x_1 = 10/106 \approx 0.0943396 for ''d'' = 2, and, in the case of the 3rd order,

:x_1 = 0.0 + 106/1121 = 0.09455842997324.

As one can see, there are a little bit more than ''d'' + 1 correct decimal places for each order ''d''; the correct solution is x = 0.09455148154232659\ldots

Let us calculate the x_2, x_3, x_4 values for the lowest orders, using the derivatives

:f(x) = -1 + 10x + 6x^2 + x^3
:f'(x) = 10 + 12x + 3x^2
:f''(x) = 12 + 6x
:f'''(x) = 6

and the following relations:

:1st order: x_{i+1} = x_i - f(x_i)/f'(x_i)
:2nd order: x_{i+1} = x_i - 2\,f(x_i)\,f'(x_i) / \left(2\,f'(x_i)^2 - f(x_i)\,f''(x_i)\right)
:3rd order: x_{i+1} = x_i - \left(6\,f\,f'^2 - 3\,f^2 f''\right) / \left(6\,f'^3 - 6\,f\,f'\,f'' + f^2 f'''\right)
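A short script can reproduce the single-step values of the example (the variable names are illustrative), applying the explicit low-order update formulas to f(x) = -1 + 10x + 6x^2 + x^3 at x_0 = 0:

```python
# One Householder step of orders 1-3 at x0 = 0 for Newton's shifted equation.
f  = lambda x: -1 + 10*x + 6*x**2 + x**3
f1 = lambda x: 10 + 12*x + 3*x**2
f2 = lambda x: 12 + 6*x
f3 = lambda x: 6.0

x0 = 0.0
x1_newton = x0 - f(x0)/f1(x0)
x1_halley = x0 - 2*f(x0)*f1(x0) / (2*f1(x0)**2 - f(x0)*f2(x0))
x1_order3 = x0 - (6*f(x0)*f1(x0)**2 - 3*f(x0)**2*f2(x0)) \
               / (6*f1(x0)**3 - 6*f(x0)*f1(x0)*f2(x0) + f(x0)**2*f3(x0))

print(x1_newton)  # 0.1          = 1/10
print(x1_halley)  # 0.0943396... = 10/106
print(x1_order3)  # 0.0945584... = 106/1121
```

The printed values match the ratios of neighboring series coefficients, 1/10, 10/106 and 106/1121, as described above.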


Derivation

An exact derivation of Householder's methods starts from the Padé approximation of order ''d'' + 1 of the function, where the approximant with linear numerator is chosen. Once this has been achieved, the update for the next approximation results from computing the unique zero of the numerator.

The Padé approximation has the form

:f(x+h) = \frac{a_0 + h}{b_0 + b_1 h + \cdots + b_{d-1} h^{d-1}} + O(h^{d+1}).

The rational function has a zero at h = -a_0.

Just as the Taylor polynomial of degree ''d'' has ''d'' + 1 coefficients that depend on the function ''f'', the Padé approximation also has ''d'' + 1 coefficients dependent on ''f'' and its derivatives. More precisely, in any Padé approximant, the degrees of the numerator and denominator polynomials have to add to the order of the approximant. Therefore, b_d = 0 has to hold.

One could determine the Padé approximant starting from the Taylor polynomial of ''f'' using Euclid's algorithm. However, starting from the Taylor polynomial of 1/''f'' is shorter and leads directly to the given formula. Since

:(1/f)(x+h) = (1/f)(x) + (1/f)'(x)\,h + \cdots + (1/f)^{(d-1)}(x)\,\frac{h^{d-1}}{(d-1)!} + (1/f)^{(d)}(x)\,\frac{h^d}{d!} + O(h^{d+1})

has to be equal to the inverse of the desired rational function, we get after multiplying with a_0 + h and comparing the coefficient of h^d the equation

:0 = b_d = a_0\,(1/f)^{(d)}(x)\,\frac{1}{d!} + (1/f)^{(d-1)}(x)\,\frac{1}{(d-1)!}.

Now, solving the last equation for the zero h = -a_0 of the numerator results in

:\begin{align} h &= -a_0 = \frac{(1/f)^{(d-1)}(x)\,\frac{1}{(d-1)!}}{(1/f)^{(d)}(x)\,\frac{1}{d!}} \\ &= d\;\frac{(1/f)^{(d-1)}(x)}{(1/f)^{(d)}(x)}. \end{align}

This implies the iteration formula

:x_{n+1} = x_n + d\;\frac{(1/f)^{(d-1)}(x_n)}{(1/f)^{(d)}(x_n)}.
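As a concrete check of this derivation (an added worked instance, not part of the original), take ''d'' = 1. The coefficient condition becomes

:0 = b_1 = a_0\,(1/f)'(x) + (1/f)(x),

so the zero of the numerator is

:h = -a_0 = \frac{(1/f)(x)}{(1/f)'(x)} = \frac{1/f(x)}{-f'(x)/f(x)^2} = -\frac{f(x)}{f'(x)},

which is exactly the Newton update, in agreement with the section on the methods of lower order.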


Relation to Newton's method

Householder's method applied to the real-valued function ''f'' is the same as Newton's method applied to the function ''g'':

:x_{n+1} = x_n - \frac{g(x_n)}{g'(x_n)}

with

:g(x) = \left|(1/f)^{(d-1)}(x)\right|^{-1/d}\,.

In particular, ''d'' = 1 gives Newton's method unmodified and ''d'' = 2 gives Halley's method.
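This equivalence is easy to check numerically for ''d'' = 2: one Newton step on g = |(1/f)'|^{-1/2} should match one Halley step on ''f''. The test function and step size below are illustrative choices, and g' is approximated by a central finite difference rather than computed exactly.

```python
# Check: Newton on g = |(1/f)'|^(-1/2) versus Halley on f, for f(x) = x^3 - 2x - 5.
f  = lambda x: x**3 - 2*x - 5
f1 = lambda x: 3*x**2 - 2
f2 = lambda x: 6*x

def g(x):
    # |(1/f)'|^(-1/2) = |f'/f^2|^(-1/2)
    return abs(f1(x) / f(x)**2) ** -0.5

def newton_on_g(x, h=1e-6):
    dg = (g(x + h) - g(x - h)) / (2 * h)   # central-difference derivative of g
    return x - g(x) / dg

def halley(x):
    return x - 2*f(x)*f1(x) / (2*f1(x)**2 - f(x)*f2(x))

print(newton_on_g(3.0), halley(3.0))  # agree to roughly the finite-difference accuracy
```

The two updates coincide up to the error of the numerical derivative, illustrating that Halley's method is Newton's method in disguise, applied to a rescaled function.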



External links

* {{cite web |title=Newton's method and high order iteration |url=http://numbers.computation.free.fr/Constants/Algorithms/newton.html |author=Pascal Sebah and Xavier Gourdon |year=2001}} ''Note'': Use the PostScript version of this link; the website version is not compiled correctly.