
In numerical analysis, the Clenshaw algorithm, also called Clenshaw summation, is a recursive method to evaluate a linear combination of Chebyshev polynomials. The method was published by Charles William Clenshaw in 1955; note that his paper is written in terms of the ''shifted'' Chebyshev polynomials of the first kind, T^*_n(x) = T_n(2x-1). It is a generalization of Horner's method for evaluating a linear combination of monomials. It generalizes to more than just Chebyshev polynomials; it applies to any class of functions that can be defined by a three-term recurrence relation.


Clenshaw algorithm

In full generality, the Clenshaw algorithm computes the weighted sum of a finite series of functions \phi_k(x):
S(x) = \sum_{k=0}^n a_k \phi_k(x),
where \phi_k,\; k=0, 1, \ldots is a sequence of functions that satisfy the linear recurrence relation
\phi_{k+1}(x) = \alpha_k(x)\,\phi_k(x) + \beta_k(x)\,\phi_{k-1}(x),
where the coefficients \alpha_k(x) and \beta_k(x) are known in advance.

The algorithm is most useful when \phi_k(x) are functions that are complicated to compute directly, but \alpha_k(x) and \beta_k(x) are particularly simple. In the most common applications, \alpha(x) does not depend on k, and \beta is a constant that depends on neither x nor k.

To perform the summation for a given series of coefficients a_0, \ldots, a_n, compute the values b_k(x) by the "reverse" recurrence formula:
\begin{align}
b_{n+1}(x) &= b_{n+2}(x) = 0, \\
b_k(x) &= a_k + \alpha_k(x)\,b_{k+1}(x) + \beta_{k+1}(x)\,b_{k+2}(x).
\end{align}
Note that this computation makes no direct reference to the functions \phi_k(x). After computing b_2(x) and b_1(x), the desired sum can be expressed in terms of them and the simplest functions \phi_0(x) and \phi_1(x):
S(x) = \phi_0(x)\,a_0 + \phi_1(x)\,b_1(x) + \beta_1(x)\,\phi_0(x)\,b_2(x).
See Fox and Parker for more information and stability analyses.
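As an illustration, the following is a minimal Python sketch of this reverse recurrence, assuming the recurrence coefficients are supplied as callables alpha(k, x) and beta(k, x) and the values phi_0(x) and phi_1(x) are given directly (the function name and signature are illustrative, not from Clenshaw's paper):

def clenshaw(a, alpha, beta, phi0, phi1, x):
    """Evaluate S(x) = sum_{k=0}^n a[k] * phi_k(x) by the Clenshaw recurrence.

    a     : coefficients a_0, ..., a_n
    alpha : callable alpha(k, x) from phi_{k+1} = alpha_k phi_k + beta_k phi_{k-1}
    beta  : callable beta(k, x) from the same recurrence
    phi0  : value of phi_0(x)
    phi1  : value of phi_1(x)
    """
    n = len(a) - 1
    b_k1 = 0.0  # b_{k+1}, initially b_{n+1} = 0
    b_k2 = 0.0  # b_{k+2}, initially b_{n+2} = 0
    # Reverse recurrence: b_k = a_k + alpha_k(x) b_{k+1} + beta_{k+1}(x) b_{k+2}
    for k in range(n, 0, -1):
        b_k = a[k] + alpha(k, x) * b_k1 + beta(k + 1, x) * b_k2
        b_k2, b_k1 = b_k1, b_k
    # S(x) = phi_0(x) a_0 + phi_1(x) b_1(x) + beta_1(x) phi_0(x) b_2(x)
    return phi0 * a[0] + phi1 * b_k1 + beta(1, x) * phi0 * b_k2

Note that beta is called with index k+1 inside the loop, matching the \beta_{k+1}(x) factor in the reverse recurrence above.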


Examples


Horner as a special case of Clenshaw

A particularly simple case occurs when evaluating a polynomial of the form
S(x) = \sum_{k=0}^n a_k x^k.
The functions are simply
\begin{align}
\phi_0(x) &= 1, \\
\phi_k(x) &= x^k = x\,\phi_{k-1}(x)
\end{align}
and are produced by the recurrence coefficients \alpha(x) = x and \beta = 0. In this case, the recurrence formula to compute the sum is
b_k(x) = a_k + x\,b_{k+1}(x)
and, in this case, the sum is simply
S(x) = a_0 + x\,b_1(x) = b_0(x),
which is exactly the usual Horner's method.
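As a quick check, this special case can be expressed with the clenshaw() sketch above by passing \alpha(x) = x, \beta = 0, \phi_0(x) = 1, \phi_1(x) = x, and it agrees with a direct Horner loop (the coefficient values here are arbitrary illustration):

# Horner's method as a special case of the clenshaw() sketch above.
coeffs = [3.0, -2.0, 5.0]          # S(x) = 3 - 2x + 5x^2
x = 1.5
s = clenshaw(coeffs, lambda k, x: x, lambda k, x: 0.0, 1.0, x, x)

# Direct Horner evaluation for comparison.
h = 0.0
for c in reversed(coeffs):
    h = c + x * h

assert abs(s - h) < 1e-12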


Special case for Chebyshev series

Consider a truncated Chebyshev series
p_n(x) = a_0 + a_1 T_1(x) + a_2 T_2(x) + \cdots + a_n T_n(x).
The coefficients in the recursion relation for the Chebyshev polynomials are
\alpha(x) = 2x, \quad \beta = -1,
with the initial conditions
T_0(x) = 1, \quad T_1(x) = x.
Thus, the recurrence is
b_k(x) = a_k + 2x\,b_{k+1}(x) - b_{k+2}(x)
and the final results are
b_0(x) = a_0 + 2x\,b_1(x) - b_2(x),
p_n(x) = \tfrac{1}{2}\left[a_0 + b_0(x) - b_2(x)\right].
An equivalent expression for the sum is given by
p_n(x) = a_0 + x\,b_1(x) - b_2(x).
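A minimal Python sketch specialized to this Chebyshev case (function name chosen for illustration; coefficients are assumed to be ordered a_0, \ldots, a_n) might look like:

def chebyshev_clenshaw(a, x):
    """Evaluate p_n(x) = a[0] + a[1] T_1(x) + ... + a[n] T_n(x) via Clenshaw."""
    n = len(a) - 1
    b_k1, b_k2 = 0.0, 0.0  # b_{k+1}, b_{k+2}
    for k in range(n, 0, -1):
        # b_k = a_k + 2x b_{k+1} - b_{k+2}
        b_k1, b_k2 = a[k] + 2.0 * x * b_k1 - b_k2, b_k1
    # p_n(x) = a_0 + x b_1(x) - b_2(x)
    return a[0] + x * b_k1 - b_k2

For production use, NumPy's numpy.polynomial.chebyshev.chebval evaluates Chebyshev series in essentially this way.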


Meridian arc length on the ellipsoid

Clenshaw summation is extensively used in geodetic applications. A simple application is summing the trigonometric series to compute the meridian arc distance on the surface of an ellipsoid. These have the form
m(\theta) = C_0\,\theta + C_1\sin \theta + C_2\sin 2\theta + \cdots + C_n\sin n\theta.
Leaving off the initial C_0\,\theta term, the remainder is a summation of the appropriate form. There is no leading term because \phi_0(\theta) = \sin 0\theta = \sin 0 = 0.

The recurrence relation for \sin k\theta is
\sin (k+1)\theta = 2 \cos\theta \sin k\theta - \sin (k-1)\theta,
making the coefficients in the recursion relation
\alpha_k(\theta) = 2\cos\theta, \quad \beta_k = -1,
and the evaluation of the series is given by
\begin{align}
b_{n+1}(\theta) &= b_{n+2}(\theta) = 0, \\
b_k(\theta) &= C_k + 2\cos\theta\,b_{k+1}(\theta) - b_{k+2}(\theta), \quad\text{for } n\ge k \ge 1.
\end{align}
The final step is made particularly simple because \phi_0(\theta) = \sin 0 = 0, so the end of the recurrence is simply b_1(\theta)\sin\theta; the C_0\,\theta term is added separately:
m(\theta) = C_0\,\theta + b_1(\theta)\sin\theta.
Note that the algorithm requires only the evaluation of two trigonometric quantities, \cos\theta and \sin\theta.
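A minimal sketch of this sine-series evaluation (the function name is illustrative, not from the source) could read:

import math

def sine_series_clenshaw(C, theta):
    """Evaluate m(theta) = C[0]*theta + sum_{k=1}^n C[k]*sin(k*theta) via Clenshaw."""
    n = len(C) - 1
    cos_t, sin_t = math.cos(theta), math.sin(theta)  # the only trig evaluations needed
    b_k1, b_k2 = 0.0, 0.0  # b_{k+1}, b_{k+2}
    for k in range(n, 0, -1):
        # b_k = C_k + 2 cos(theta) b_{k+1} - b_{k+2}
        b_k1, b_k2 = C[k] + 2.0 * cos_t * b_k1 - b_k2, b_k1
    # phi_0 = sin(0) = 0, so only b_1(theta) sin(theta) survives; C_0*theta is added separately
    return C[0] * theta + b_k1 * sin_t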


Difference in meridian arc lengths

Sometimes it is necessary to compute the difference of two meridian arcs in a way that maintains high relative accuracy. This is accomplished by using trigonometric identities to write
m(\theta_1)-m(\theta_2) = C_0(\theta_1-\theta_2) + \sum_{k=1}^n 2 C_k \sin\bigl(\tfrac{1}{2}k(\theta_1-\theta_2)\bigr) \cos\bigl(\tfrac{1}{2}k(\theta_1+\theta_2)\bigr).
Clenshaw summation can be applied in this case, provided we simultaneously compute m(\theta_1)+m(\theta_2) and perform a matrix summation,
\mathsf M(\theta_1,\theta_2) = \begin{bmatrix} (m(\theta_1) + m(\theta_2)) / 2 \\ (m(\theta_1) - m(\theta_2)) / (\theta_1 - \theta_2) \end{bmatrix} = C_0 \begin{bmatrix} \mu \\ 1 \end{bmatrix} + \sum_{k=1}^n C_k \mathsf F_k(\theta_1,\theta_2),
where
\begin{align}
\delta &= \tfrac{1}{2}(\theta_1-\theta_2), \\
\mu &= \tfrac{1}{2}(\theta_1+\theta_2), \\
\mathsf F_k(\theta_1,\theta_2) &= \begin{bmatrix} \cos k\delta \sin k\mu \\ \dfrac{\sin k\delta}{\delta} \cos k\mu \end{bmatrix}.
\end{align}
The first element of \mathsf M(\theta_1,\theta_2) is the average value of m and the second element is the average slope. \mathsf F_k(\theta_1,\theta_2) satisfies the recurrence relation
\mathsf F_{k+1}(\theta_1,\theta_2) = \mathsf A(\theta_1,\theta_2)\,\mathsf F_k(\theta_1,\theta_2) - \mathsf F_{k-1}(\theta_1,\theta_2),
where
\mathsf A(\theta_1,\theta_2) = 2\begin{bmatrix} \cos\delta \cos\mu & -\delta\sin\delta \sin\mu \\ -\dfrac{\sin\delta}{\delta} \sin\mu & \cos\delta \cos\mu \end{bmatrix}
takes the place of \alpha in the recurrence relation, and \beta=-1. The standard Clenshaw algorithm can now be applied to yield
\begin{align}
\mathsf B_{n+1} &= \mathsf B_{n+2} = \mathsf 0, \\
\mathsf B_k &= C_k \mathsf I + \mathsf A \mathsf B_{k+1} - \mathsf B_{k+2}, \qquad\text{for } n\ge k \ge 1, \\
\mathsf M(\theta_1,\theta_2) &= C_0 \begin{bmatrix}\mu\\1\end{bmatrix} + \mathsf B_1 \mathsf F_1(\theta_1,\theta_2),
\end{align}
where \mathsf B_k are 2\times 2 matrices. Finally we have
\frac{m(\theta_1) - m(\theta_2)}{\theta_1 - \theta_2} = \mathsf M_2(\theta_1, \theta_2).
This technique can be used in the limit \theta_2 = \theta_1 = \mu and \delta = 0 to simultaneously compute m(\mu) and the derivative dm(\mu)/d\mu, provided that, in evaluating \mathsf F_1 and \mathsf A, we take \lim_{\delta\to 0} (\sin k\delta)/\delta = k.
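The following is a minimal NumPy sketch of this matrix form (the function name, argument order, and helper are illustrative, not from the source); it returns the average value and average slope of m, and with theta1 == theta2 it yields m(\mu) and dm(\mu)/d\mu via the stated limit:

import math
import numpy as np

def meridian_mean_and_slope(C, theta1, theta2):
    """For m(theta) = C[0]*theta + sum_{k=1}^n C[k]*sin(k*theta), return
    ((m(theta1)+m(theta2))/2, (m(theta1)-m(theta2))/(theta1-theta2))
    using the 2x2-matrix Clenshaw recurrence described above."""
    n = len(C) - 1
    delta = 0.5 * (theta1 - theta2)
    mu = 0.5 * (theta1 + theta2)

    def sin_kd_over_d(k):
        # sin(k*delta)/delta, with the limit k as delta -> 0
        return float(k) if delta == 0.0 else math.sin(k * delta) / delta

    cd, sd = math.cos(delta), math.sin(delta)
    cm, sm = math.cos(mu), math.sin(mu)

    # A plays the role of alpha in the recurrence; beta = -1
    A = 2.0 * np.array([[cd * cm,               -delta * sd * sm],
                        [-sin_kd_over_d(1) * sm, cd * cm]])
    F1 = np.array([cd * sm, sin_kd_over_d(1) * cm])

    B_k1 = np.zeros((2, 2))  # B_{k+1}
    B_k2 = np.zeros((2, 2))  # B_{k+2}
    for k in range(n, 0, -1):
        # B_k = C_k I + A B_{k+1} - B_{k+2}
        B_k1, B_k2 = C[k] * np.eye(2) + A @ B_k1 - B_k2, B_k1

    M = C[0] * np.array([mu, 1.0]) + B_k1 @ F1
    return M[0], M[1]  # average value, average slope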


See also

* Horner scheme to evaluate polynomials in monomial form
* De Casteljau's algorithm to evaluate polynomials in Bézier form

