In mathematics and computer science, Horner's method (or Horner's scheme) is an algorithm for polynomial evaluation. Although named after William George Horner, the method is much older: it has been attributed to Joseph-Louis Lagrange, and versions of it were used by Chinese and Persian mathematicians many centuries earlier (see the History section below). The algorithm is based on Horner's rule, in which a polynomial is rewritten in nested form:

\begin{align}
&a_0 + a_1 x + a_2 x^2 + a_3 x^3 + \cdots + a_n x^n \\
={} &a_0 + x \bigg(a_1 + x \Big(a_2 + x \big(a_3 + \cdots + x(a_{n-1} + x \, a_n) \cdots \big) \Big) \bigg).
\end{align}

This allows a polynomial of degree n to be evaluated with only n multiplications and n additions. This is optimal, since there are polynomials of degree n that cannot be evaluated with fewer arithmetic operations.

Alternatively, "Horner's method" also refers to a method for approximating the roots of polynomials, described by Horner in 1819. It is a variant of the Newton–Raphson method made more efficient for hand calculation by application of Horner's rule. It was widely used until computers came into general use around 1970.


Polynomial evaluation and long division

Given the polynomial
p(x) = \sum_{i=0}^n a_i x^i = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + \cdots + a_n x^n,
where a_0, \ldots, a_n are constant coefficients, the problem is to evaluate the polynomial at a specific value x_0 of x. For this, a new sequence of constants is defined recursively as follows:

\begin{align}
b_n &:= a_n, \\
b_{n-1} &:= a_{n-1} + b_n x_0, \\
&\ \ \vdots \\
b_i &:= a_i + b_{i+1} x_0, \\
&\ \ \vdots \\
b_0 &:= a_0 + b_1 x_0.
\end{align}

Then b_0 is the value of p(x_0).

To see why this works, the polynomial can be written in the form
p(x) = a_0 + x \bigg(a_1 + x \Big(a_2 + x \big(a_3 + \cdots + x(a_{n-1} + x \, a_n) \cdots \big) \Big) \bigg).
Thus, by iteratively substituting the b_i into the expression,

\begin{align}
p(x_0) &= a_0 + x_0 \Big(a_1 + x_0 \big(a_2 + \cdots + x_0 (a_{n-1} + b_n x_0) \cdots \big)\Big) \\
&= a_0 + x_0 \Big(a_1 + x_0 \big(a_2 + \cdots + x_0 b_{n-1}\big)\Big) \\
&\ \ \vdots \\
&= a_0 + x_0 b_1 \\
&= b_0.
\end{align}

Now, it can be proven that
p(x) = \left(b_n x^{n-1} + b_{n-1} x^{n-2} + \cdots + b_2 x + b_1\right)(x - x_0) + b_0.
This expression constitutes Horner's practical application, as it offers a very quick way of determining the outcome of p(x) / (x - x_0), with b_0 (which is equal to p(x_0)) being the division's remainder, as is demonstrated by the examples below. If x_0 is a root of p(x), then b_0 = 0 (meaning the remainder is 0), so p(x) can be factored as (x - x_0) times the quotient polynomial. To find the consecutive b-values, start by determining b_n, which is simply equal to a_n; then work backwards through the recurrence b_i = a_i + b_{i+1} x_0 until you arrive at b_0.
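
In code, the recurrence is a single loop. The following Python sketch (the function name and the lowest-degree-first coefficient ordering are illustrative choices, not part of the original text) returns both b_0 = p(x_0) and the quotient coefficients b_1, ..., b_n, and checks them against the first example of the next section:

 def horner(coeffs, x0):
     """Evaluate p at x0 by Horner's rule.
     coeffs holds a_0, a_1, ..., a_n (lowest degree first).
     Returns (value, quotient): value = b_0 = p(x0), and quotient holds
     b_1, ..., b_n, the coefficients of p(x) / (x - x0)."""
     b = coeffs[-1]                  # b_n = a_n
     quotient = []
     for a in reversed(coeffs[:-1]):
         quotient.append(b)          # the current b_{i+1} is a quotient coefficient
         b = a + b * x0              # b_i = a_i + b_{i+1} x_0
     quotient.reverse()              # order as b_1, ..., b_n (lowest degree first)
     return b, quotient

 # p(x) = 2x^3 - 6x^2 + 2x - 1 at x = 3 (the first example below)
 value, q = horner([-1, 2, -6, 2], 3)
 assert value == 5 and q == [2, 0, 2]    # quotient 2x^2 + 2, remainder 5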


Examples

Evaluate f(x) = 2x^3 - 6x^2 + 2x - 1 for x = 3. We use synthetic division as follows:

 x_0 │  x^3   x^2   x^1   x^0
  3  │   2    −6     2    −1
     │         6     0     6
     └────────────────────────
         2     0     2     5

The entries in the third row are the sum of those in the first two. Each entry in the second row is the product of the x-value (3 in this example) with the third-row entry immediately to the left. The entries in the first row are the coefficients of the polynomial to be evaluated. Then the remainder of f(x) on division by x - 3 is 5. But by the polynomial remainder theorem, we know that the remainder is f(3). Thus, f(3) = 5.

In this example, if a_3 = 2, a_2 = -6, a_1 = 2, a_0 = -1, we can see that b_3 = 2, b_2 = 0, b_1 = 2, b_0 = 5: the entries in the third row. So synthetic division (which was actually invented and published by Ruffini 10 years before Horner's publication) is easier to use; it can be shown to be equivalent to Horner's method.

As a consequence of the polynomial remainder theorem, the entries in the third row (apart from the last) are the coefficients of the second-degree polynomial that is the quotient of f(x) on division by x - 3. The remainder is 5. This makes Horner's method useful for polynomial long division.

Divide x^3 - 6x^2 + 11x - 6 by x - 2:

  2 │   1    −6    11    −6
    │         2    −8     6
    └────────────────────────
        1    −4     3     0

The quotient is x^2 - 4x + 3.

Let f_1(x) = 4x^4 - 6x^3 + 3x - 5 and f_2(x) = 2x - 1. Divide f_1(x) by f_2(x) using Horner's method.

 0.5 │   4    −6     0     3    −5
     │         2    −2    −1     1
     └──────────────────────────────
         2    −2    −1     1    −4

The third row is the sum of the first two rows, divided by 2 (the final entry, the remainder, is left undivided). Each entry in the second row is the product of 1 with the third-row entry to the left. The answer is
\frac{f_1(x)}{f_2(x)} = 2x^3 - 2x^2 - x + 1 - \frac{4}{2x - 1}.
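
The same tableaus can be run in code. The following Python sketch reproduces the last example; the function name is illustrative, and dividing by x − c/b first and then rescaling the quotient by b is just one of several equivalent ways to organize the computation:

 def synthetic_divide(coeffs, b, c):
     """Divide a polynomial by (b*x - c) using Horner's rule / synthetic division.
     coeffs holds a_n, ..., a_0 (highest degree first, as in the tableaus above).
     Returns (quotient, remainder)."""
     x0 = c / b
     row = [coeffs[0]]
     for a in coeffs[1:]:
         row.append(a + row[-1] * x0)        # third-row entries of the tableau for x - c/b
     remainder = row[-1]                     # equals p(c/b)
     quotient = [r / b for r in row[:-1]]    # rescale the quotient for the leading factor b
     return quotient, remainder

 # f1(x) = 4x^4 - 6x^3 + 3x - 5 divided by f2(x) = 2x - 1, as in the last example
 q, r = synthetic_divide([4, -6, 0, 3, -5], 2, 1)
 assert q == [2.0, -2.0, -1.0, 1.0] and r == -4.0    # 2x^3 - 2x^2 - x + 1, remainder -4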


Efficiency

Evaluation using the monomial form of a degree-n polynomial requires at most n additions and (n^2+n)/2 multiplications, if powers are calculated by repeated multiplication and each monomial is evaluated individually. The cost can be reduced to n additions and 2n-1 multiplications by evaluating the powers of x by iteration. If numerical data are represented in terms of digits (or bits), then the naive algorithm also entails storing approximately 2n times the number of bits of x: the evaluated polynomial has approximate magnitude x^n, and one must also store x^n itself. By contrast, Horner's method requires only n additions and n multiplications, and its storage requirement is only n times the number of bits of x. Alternatively, Horner's method can be computed with n fused multiply–adds. Horner's method can also be extended to evaluate the first k derivatives of the polynomial with kn additions and multiplications.

Horner's method is optimal, in the sense that any algorithm to evaluate an arbitrary polynomial must use at least as many operations. Alexander Ostrowski proved in 1954 that the number of additions required is minimal. Victor Pan proved in 1966 that the number of multiplications is minimal. However, when x is a matrix, Horner's method is not optimal.

This assumes that the polynomial is evaluated in monomial form and no preconditioning of the representation is allowed, which makes sense if the polynomial is evaluated only once. However, if preconditioning is allowed and the polynomial is to be evaluated many times, then faster algorithms are possible. They involve a transformation of the representation of the polynomial: in general, a degree-n polynomial can then be evaluated using only \lfloor n/2 \rfloor + 2 multiplications and n additions.
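
As an illustration of the derivative extension mentioned above, the following Python sketch (names illustrative) evaluates p(x_0) and its first derivative in a single pass, using n extra multiplications and n extra additions on top of the ordinary Horner loop:

 def horner_with_derivative(coeffs, x0):
     """Return (p(x0), p'(x0)) for coeffs = [a_0, ..., a_n] in one pass.
     The second recurrence applies Horner's rule to the running b_i,
     which yields the derivative (compare the divided-difference section)."""
     p = coeffs[-1]
     dp = 0
     for a in reversed(coeffs[:-1]):
         dp = dp * x0 + p            # derivative step uses the current b_{i+1}
         p = p * x0 + a              # ordinary Horner step: b_i = a_i + b_{i+1} x_0
     return p, dp

 # p(x) = 2x^3 - 6x^2 + 2x - 1: p(3) = 5 and p'(3) = 6*9 - 12*3 + 2 = 20
 assert horner_with_derivative([-1, 2, -6, 2], 3) == (5, 20)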


Parallel evaluation

A disadvantage of Horner's rule is that all of the operations are sequentially dependent, so it is not possible to take advantage of instruction-level parallelism on modern computers. In most applications where the efficiency of polynomial evaluation matters, many low-order polynomials are evaluated simultaneously (for each pixel or polygon in computer graphics, or for each grid square in a numerical simulation), so it is not necessary to find parallelism within a single polynomial evaluation. If, however, one is evaluating a single polynomial of very high order, it may be useful to break it up as follows:

\begin{align}
p(x) &= \sum_{i=0}^n a_i x^i \\
&= a_0 + a_1 x + a_2 x^2 + a_3 x^3 + \cdots + a_n x^n \\
&= \left( a_0 + a_2 x^2 + a_4 x^4 + \cdots\right) + \left(a_1 x + a_3 x^3 + a_5 x^5 + \cdots \right) \\
&= \left( a_0 + a_2 x^2 + a_4 x^4 + \cdots\right) + x \left(a_1 + a_3 x^2 + a_5 x^4 + \cdots \right) \\
&= \sum_{i=0}^{\lfloor n/2 \rfloor} a_{2i} x^{2i} + x \sum_{i=0}^{\lfloor n/2 \rfloor} a_{2i+1} x^{2i} \\
&= p_0(x^2) + x \, p_1(x^2).
\end{align}

More generally, the summation can be broken into ''k'' parts:
p(x) = \sum_{i=0}^n a_i x^i = \sum_{j=0}^{k-1} x^j \sum_{i=0}^{\lfloor n/k \rfloor} a_{ki+j} x^{ki} = \sum_{j=0}^{k-1} x^j p_j(x^k),
where the inner summations may be evaluated using separate parallel instances of Horner's method. This requires slightly more operations than the basic Horner's method, but allows ''k''-way SIMD execution of most of them. Modern compilers generally evaluate polynomials this way when advantageous, although for floating-point calculations this requires enabling (unsafe) reassociative math. Another use of breaking a polynomial down this way is to calculate steps of the inner summations in an alternating fashion to take advantage of instruction-level parallelism.
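
A minimal Python sketch of the two-way (k = 2) split follows; the two inner Horner evaluations are independent and could be mapped to separate SIMD lanes or interleaved for instruction-level parallelism, though here they simply run one after the other (function names are illustrative):

 def horner(coeffs, x):
     """Plain Horner evaluation; coeffs = [a_0, ..., a_n]."""
     result = 0
     for a in reversed(coeffs):
         result = result * x + a
     return result

 def horner_split2(coeffs, x):
     """Evaluate p(x) as p0(x^2) + x * p1(x^2).
     The two inner evaluations are independent, so they could be mapped to
     separate SIMD lanes; here they simply run one after the other."""
     x2 = x * x
     p_even = horner(coeffs[0::2], x2)   # a_0 + a_2 x^2 + a_4 x^4 + ...
     p_odd = horner(coeffs[1::2], x2)    # a_1 + a_3 x^2 + a_5 x^4 + ...
     return p_even + x * p_odd

 coeffs = [-1, 2, -6, 2]                 # 2x^3 - 6x^2 + 2x - 1
 assert horner_split2(coeffs, 3) == horner(coeffs, 3) == 5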


Application to floating-point multiplication and division

Horner's method is a fast, code-efficient method for multiplication and division of binary numbers on a microcontroller with no hardware multiplier. One of the binary numbers to be multiplied is represented as a trivial polynomial, where (using the above notation) a_i = 1 and x = 2. Then, ''x'' (or ''x'' to some power) is repeatedly factored out. In this binary numeral system (base 2), x = 2, so powers of 2 are repeatedly factored out.


Example

For example, to find the product of two numbers, (0.15625) and ''m'':

\begin{align}
(0.15625) m &= (0.00101_b) m = \left( 2^{-3} + 2^{-5} \right) m = \left( 2^{-3}\right)m + \left(2^{-5} \right)m \\
&= 2^{-3} \left(m + \left(2^{-2}\right)m\right) = 2^{-3} \left(m + 2^{-2} (m)\right).
\end{align}


Method

To find the product of two binary numbers ''d'' and ''m'' (a sketch in code follows below):
# A register holding the intermediate result is initialized to ''d''.
# Begin with the least significant (rightmost) non-zero bit in ''m''; on moving to the next more significant non-zero bit, shift the intermediate result by the corresponding number of bit positions (each shift is a multiplication by a power of 2, as explained in the derivation below).
# If all the non-zero bits were counted, then the intermediate result register now holds the final result. Otherwise, add ''d'' to the intermediate result, and continue in step 2 with the next most significant bit in ''m''.
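
The following Python sketch illustrates the same shift-and-add idea; for simplicity it walks the multiplier's bits from the most significant end with one shift per bit (Horner's rule with x = 2) rather than jumping between non-zero bits as in the steps above, and it uses plain integers rather than fixed-width registers (the function name is illustrative):

 def shift_add_multiply(m, d):
     """Multiply m by a non-negative integer d using only shifts and adds.
     Evaluates the nested (Horner) form of d's binary expansion, so each
     doubling is a left shift and zero bits contribute nothing."""
     acc = 0
     for bit in bin(d)[2:]:      # bits of d, most significant first
         acc <<= 1               # multiply the running result by 2
         if bit == '1':
             acc += m            # add m only for non-zero bits of d
     return acc

 assert shift_add_multiply(37, 0b101) == 37 * 5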


Derivation

In general, for a binary number with bit values ( d_3 d_2 d_1 d_0 ) the product is

(d_3 2^3 + d_2 2^2 + d_1 2^1 + d_0 2^0)m = d_3 2^3 m + d_2 2^2 m + d_1 2^1 m + d_0 2^0 m.

At this stage in the algorithm, it is required that terms with zero-valued coefficients are dropped, so that only binary coefficients equal to one are counted; thus the problem of multiplication or division by zero is not an issue, despite the implication in the factored equation

= d_0\left(m + 2 \frac{d_1}{d_0} \left(m + 2 \frac{d_2}{d_1} \left(m + 2 \frac{d_3}{d_2} (m)\right)\right)\right).

The denominators all equal one (or the term is absent), so this reduces to

= d_0(m + 2 (m + 2 (m + 2 (m)))),

or equivalently (as consistent with the "method" described above)

= d_3(m + 2^{-1} (m + 2^{-1} (m + 2^{-1}(m)))).

In binary (base-2) math, multiplication by a power of 2 is merely a register shift operation. Thus, multiplying by 2 is calculated in base-2 by an arithmetic shift. The factor (2^{-1}) is a right arithmetic shift, a factor of (2^0) results in no operation (since 2^0 = 1 is the multiplicative identity element), and a factor of (2^1) results in a left arithmetic shift. The multiplication product can now be quickly calculated using only arithmetic shift operations, addition and subtraction.

The method is particularly fast on processors supporting a single-instruction shift-and-addition-accumulate. Compared to a C floating-point library, Horner's method sacrifices some accuracy; however, it is nominally 13 times faster (16 times faster when the "canonical signed digit" (CSD) form is used) and uses only 20% of the code space.


Other applications

Horner's method can be used to convert between different positional numeral systems – in which case ''x'' is the base of the number system, and the a_i coefficients are the digits of the base-''x'' representation of a given number – and can also be used if ''x'' is a matrix, in which case the gain in computational efficiency is even greater. However, for such cases faster methods are known.
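
For the number-system case, the conversion is a single Horner pass over the digits. A small Python sketch (the function name is illustrative):

 def digits_to_int(digits, base):
     """Interpret a digit list (most significant digit first) in the given base
     using Horner's rule: value = (...((d_0)*base + d_1)*base + ...)."""
     value = 0
     for d in digits:
         value = value * base + d
     return value

 assert digits_to_int([1, 0, 1, 0, 1], 2) == 21      # binary 10101
 assert digits_to_int([2, 5, 5], 10) == 255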


Polynomial root finding

Using the long division algorithm in combination with Newton's method, it is possible to approximate the real roots of a polynomial. The algorithm works as follows. Given a polynomial p_n(x) of degree n with zeros z_n < z_{n-1} < \cdots < z_1, make some initial guess x_0 such that z_1 < x_0. Now iterate the following two steps (a code sketch follows below):
# Using Newton's method, find the largest zero z_1 of p_n(x) using the guess x_0.
# Using Horner's method, divide out (x - z_1) to obtain p_{n-1}(x). Return to step 1 but use the polynomial p_{n-1}(x) and the initial guess z_1.
These two steps are repeated until all real zeros are found for the polynomial. If the approximated zeros are not precise enough, the obtained values can be used as initial guesses for Newton's method, but using the full polynomial rather than the reduced polynomials.
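
A compact Python sketch of this loop is given below. The function names, the fixed iteration count, and the use of a combined value-and-derivative Horner pass are illustrative choices, not prescribed by the text; the deflation step is the Horner division described above:

 def horner_value_and_derivative(coeffs, x):
     """Return (p(x), p'(x)); coeffs = [a_n, ..., a_0], highest degree first."""
     p, dp = 0.0, 0.0
     for a in coeffs:
         dp = dp * x + p
         p = p * x + a
     return p, dp

 def deflate(coeffs, root):
     """Divide p(x) by (x - root) with Horner's rule, dropping the (near-zero)
     remainder; returns the quotient's coefficients, highest degree first."""
     quotient = [coeffs[0]]
     for a in coeffs[1:-1]:
         quotient.append(a + quotient[-1] * root)
     return quotient

 def find_real_roots(coeffs, guess, iterations=50):
     """Approximate all roots, largest first, assuming they are real and the
     initial guess lies above the largest root (as in the text)."""
     roots = []
     while len(coeffs) > 1:
         x = guess
         for _ in range(iterations):     # Newton-Raphson on the current polynomial
             p, dp = horner_value_and_derivative(coeffs, x)
             x -= p / dp
         roots.append(x)
         coeffs = deflate(coeffs, x)     # step 2: divide out (x - root)
         guess = x                       # reuse the root as the next initial guess
     return roots

 # the example polynomial of the next subsection: roots 7, 3, 2, -3, -5, -8
 p6 = [1, 4, -72, -214, 1127, 1602, -5040]
 assert [round(r) for r in find_real_roots(p6, 8.0)] == [7, 3, 2, -3, -5, -8]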


Example

Consider the polynomial p_6(x) = (x+8)(x+5)(x+3)(x-2)(x-3)(x-7), which can be expanded to p_6(x) = x^6 + 4x^5 - 72x^4 - 214x^3 + 1127x^2 + 1602x - 5040. From the above we know that the largest root of this polynomial is 7, so we are able to make an initial guess of 8. Using Newton's method, the first zero, 7, is found. Next, p_6(x) is divided by (x-7) to obtain p_5(x) = x^5 + 11x^4 + 5x^3 - 179x^2 - 126x + 720. Newton's method is applied to this polynomial with an initial guess of 7; its largest zero, which corresponds to the second largest zero of the original polynomial, is found at 3. The degree-5 polynomial is now divided by (x-3) to obtain p_4(x) = x^4 + 14x^3 + 47x^2 - 38x - 240, whose zero at 2 is again found using Newton's method. Horner's method is then used to obtain p_3(x) = x^3 + 16x^2 + 79x + 120, which is found to have a zero at −3. This polynomial is further reduced to p_2(x) = x^2 + 13x + 40, which yields a zero of −5. The final root of the original polynomial may be found by either using the final zero as an initial guess for Newton's method, or by reducing p_2(x) and solving the resulting linear equation. As can be seen, the expected roots of −8, −5, −3, 2, 3, and 7 were found.


Divided difference of a polynomial

Horner's method can be modified to compute the divided difference (p(y) - p(x))/(y - x). Given the polynomial (as before)
p(x) = \sum_{i=0}^n a_i x^i = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + \cdots + a_n x^n,
proceed as follows:

\begin{align}
b_n &= a_n, &\quad d_n &= b_n, \\
b_{n-1} &= a_{n-1} + b_n x, &\quad d_{n-1} &= b_{n-1} + d_n y, \\
& \ \ \vdots &\quad & \ \ \vdots \\
b_1 &= a_1 + b_2 x, &\quad d_1 &= b_1 + d_2 y, \\
b_0 &= a_0 + b_1 x. &&
\end{align}

At completion, we have

\begin{align}
p(x) &= b_0, \\
\frac{p(y) - p(x)}{y - x} &= d_1, \\
p(y) &= b_0 + (y - x) d_1.
\end{align}

This computation of the divided difference is subject to less round-off error than evaluating p(x) and p(y) separately, particularly when x \approx y. Substituting y = x in this method gives d_1 = p'(x), the derivative of p(x).
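
A direct transcription of this scheme in Python (the function name and coefficient ordering are illustrative, and n ≥ 1 is assumed):

 def horner_divided_difference(coeffs, x, y):
     """Return (p(x), (p(y) - p(x)) / (y - x)) for coeffs = [a_0, ..., a_n].
     Runs the b-recurrence for x and the d-recurrence for y side by side,
     as in the scheme above; with y == x the second value is p'(x)."""
     b = coeffs[-1]                      # b_n = a_n
     d = b                               # d_n = b_n
     for a in reversed(coeffs[1:-1]):    # a_{n-1}, ..., a_1
         b = a + b * x                   # b_i = a_i + b_{i+1} x
         d = b + d * y                   # d_i = b_i + d_{i+1} y
     b0 = coeffs[0] + b * x              # b_0 = a_0 + b_1 x
     return b0, d

 # p(x) = 2x^3 - 6x^2 + 2x - 1: p(3) = 5, p(4) = 39, so the divided difference is 34
 assert horner_divided_difference([-1, 2, -6, 2], 3, 4) == (5, 34)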


History

Horner's paper, titled "A new method of solving numerical equations of all orders, by continuous approximation", was read before the Royal Society of London at its meeting on July 1, 1819, with a sequel in 1823. Horner's paper in Part II of ''Philosophical Transactions of the Royal Society of London'' for 1819 was warmly and expansively welcomed by a reviewer in the issue of ''The Monthly Review: or, Literary Journal'' for April 1820; in comparison, a technical paper by Charles Babbage is dismissed curtly in this review. The sequence of reviews in ''The Monthly Review'' for September 1821 concludes that Holdred was the first person to discover a direct and general practical solution of numerical equations. Fuller showed that the method in Horner's 1819 paper differs from what afterwards became known as "Horner's method" and that in consequence the priority for this method should go to Holdred (1820).

Unlike his English contemporaries, Horner drew on the Continental literature, notably the work of Arbogast. Horner is also known to have made a close reading of John Bonneycastle's book on algebra, though he neglected the work of Paolo Ruffini.

Although Horner is credited with making the method accessible and practical, it was known long before Horner. In reverse chronological order, Horner's method was already known to:
* Paolo Ruffini in 1809 (see Ruffini's rule)
* Isaac Newton in 1669
* the Chinese mathematician Zhu Shijie in the 14th century
* the Chinese mathematician Qin Jiushao in his ''Mathematical Treatise in Nine Sections'' in the 13th century
* the Persian mathematician Sharaf al-Dīn al-Ṭūsī in the 12th century (the first to use that method in the general case of the cubic equation)
* the Chinese mathematician Jia Xian in the 11th century (Song dynasty)
* ''The Nine Chapters on the Mathematical Art'', a Chinese work of the Han dynasty (202 BC – 220 AD) edited by Liu Hui (fl. 3rd century).

Qin Jiushao, in his ''Shu Shu Jiu Zhang'' (''Mathematical Treatise in Nine Sections''; 1247), presents a portfolio of methods of Horner-type for solving polynomial equations, which was based on earlier works of the 11th-century Song dynasty mathematician Jia Xian; for example, one method is specifically suited to bi-quintics, of which Qin gives an instance, in keeping with the then Chinese custom of case studies. Yoshio Mikami discussed this Chinese work in ''Development of Mathematics in China and Japan'' (Leipzig, 1913). Ulrich Libbrecht concluded: ''It is obvious that this procedure is a Chinese invention ... the method was not known in India''. He said Fibonacci probably learned of it from Arabs, who perhaps borrowed it from the Chinese. The extraction of square and cube roots along similar lines is already discussed by Liu Hui in connection with Problems IV.16 and 22 in ''Jiu Zhang Suan Shu'', while Wang Xiaotong in the 7th century supposes his readers can solve cubics by an approximation method described in his book ''Jigu Suanjing''.


See also

* Clenshaw algorithm to evaluate polynomials in Chebyshev form
* De Boor's algorithm to evaluate splines in B-spline form
* De Casteljau's algorithm to evaluate polynomials in Bézier form
* Estrin's scheme to facilitate parallelization on modern computer architectures
* Lill's method to approximate roots graphically
* Ruffini's rule and synthetic division to divide a polynomial by a binomial of the form x − r


Notes


References



External links

* Qin Jiushao, ''Shu Shu Jiu Zhang'' (Cong Shu Ji Cheng ed.)
