Divided differences

In mathematics, divided differences is an algorithm, historically used for computing tables of logarithms and trigonometric functions. Charles Babbage's difference engine, an early mechanical calculator, was designed to use this algorithm in its operation.

Divided differences is a recursive division process. Given a sequence of data points (x_0, y_0),\ldots,(x_n, y_n), the method calculates the coefficients of the interpolation polynomial of these points in the Newton form.


Definition

Given ''n'' + 1 data points
:(x_0, y_0),\ldots,(x_n, y_n)
where the x_k are assumed to be pairwise distinct, the forward divided differences are defined as:
:[y_k] := y_k, \qquad k \in \{0,\ldots,n\}
:[y_k,\ldots,y_{k+j}] := \frac{[y_{k+1},\ldots,y_{k+j}] - [y_k,\ldots,y_{k+j-1}]}{x_{k+j}-x_k}, \qquad k\in\{0,\ldots,n-j\},\ j\in\{1,\ldots,n\}.
To make the recursive process of computation clearer, the divided differences can be put in tabular form, where the columns correspond to the value of ''j'' above, and each entry in the table is computed from the difference of the entries to its immediate lower left and to its immediate upper left, divided by a difference of corresponding ''x''-values:
:\begin{matrix}
x_0 & y_0 = [y_0] & & & \\
 & & [y_0,y_1] & & \\
x_1 & y_1 = [y_1] & & [y_0,y_1,y_2] & \\
 & & [y_1,y_2] & & [y_0,y_1,y_2,y_3] \\
x_2 & y_2 = [y_2] & & [y_1,y_2,y_3] & \\
 & & [y_2,y_3] & & \\
x_3 & y_3 = [y_3] & & & \\
\end{matrix}
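The recursion translates directly into a short program. The following Python sketch is illustrative only (the function name divided_differences is ours, not standard library code); it builds the table column by column, exactly as in the scheme above:

```python
def divided_differences(xs, ys):
    """Full triangular scheme: table[j][k] holds [y_k, ..., y_{k+j}]."""
    n = len(xs)
    table = [list(ys)]                       # j = 0 column: [y_k] = y_k
    for j in range(1, n):
        prev = table[-1]
        # [y_k,...,y_{k+j}] = ([y_{k+1},...,y_{k+j}] - [y_k,...,y_{k+j-1}]) / (x_{k+j} - x_k)
        table.append([(prev[k + 1] - prev[k]) / (xs[k + j] - xs[k])
                      for k in range(n - j)])
    return table
```

The leading entries table[j][0] are the coefficients [y_0,\ldots,y_j] used in the Newton form.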


Notation

Note that the divided difference [y_k,\ldots,y_{k+j}] depends on the values x_k,\ldots,x_{k+j} and y_k,\ldots,y_{k+j}, but the notation hides the dependency on the ''x''-values. If the data points are given by a function ''ƒ'',
:(x_0, f(x_0)),\ldots,(x_n, f(x_n))
one sometimes writes
:f[x_k,\ldots,x_{k+j}]
for the divided difference instead of writing
:[f(x_k),\ldots,f(x_{k+j})]
or
:[y_k,\ldots,y_{k+j}].
Several other notations for the divided difference of the function ''ƒ'' on the nodes ''x''_0, ..., ''x''_''n'' are also used, for instance:
:[x_0,\ldots,x_n]f,
:[x_0,\ldots,x_n;f],
:D[x_0,\ldots,x_n]f.


Example

Divided differences for k=0 and the first few values of j:
:\begin{align}
[y_0] &= y_0 \\
[y_0,y_1] &= \frac{y_1-y_0}{x_1-x_0} \\
[y_0,y_1,y_2] &= \frac{[y_1,y_2]-[y_0,y_1]}{x_2-x_0} = \frac{\frac{y_2-y_1}{x_2-x_1}-\frac{y_1-y_0}{x_1-x_0}}{x_2-x_0} = \frac{y_2-y_1}{(x_2-x_1)(x_2-x_0)}-\frac{y_1-y_0}{(x_1-x_0)(x_2-x_0)} \\
[y_0,y_1,y_2,y_3] &= \frac{[y_1,y_2,y_3]-[y_0,y_1,y_2]}{x_3-x_0}
\end{align}
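As a concrete check of the sketch from the Definition section, take the four points (0, 1), (1, 2), (2, 5), (3, 10), sampled from y = x^2 + 1 (our own illustrative data):

```python
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 2.0, 5.0, 10.0]              # samples of y = x^2 + 1
table = divided_differences(xs, ys)
print([col[0] for col in table])        # Newton coefficients: [1.0, 1.0, 1.0, 0.0]
```

The last coefficient vanishes because the data come from a polynomial of degree 2, anticipating the property below that a divided difference of high enough order annihilates polynomials.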


Properties

* Linearity:
:: (f+g)[x_0,\dots,x_n] = f[x_0,\dots,x_n] + g[x_0,\dots,x_n]
:: (\lambda\cdot f)[x_0,\dots,x_n] = \lambda\cdot f[x_0,\dots,x_n]
* Leibniz rule:
:: (f\cdot g)[x_0,\dots,x_n] = f[x_0]\cdot g[x_0,\dots,x_n] + f[x_0,x_1]\cdot g[x_1,\dots,x_n] + \dots + f[x_0,\dots,x_n]\cdot g[x_n] = \sum_{r=0}^n f[x_0,\ldots,x_r]\cdot g[x_r,\ldots,x_n]
* Divided differences are symmetric: if \sigma : \{0,\dots,n\} \to \{0,\dots,n\} is a permutation then
:: f[x_0, \dots, x_n] = f[x_{\sigma(0)}, \dots, x_{\sigma(n)}]
* Polynomial interpolation in the Newton form: if P is a polynomial function of degree \leq n, and p[x_0,\ldots,x_n] is the divided difference, then (see the sketch after this list)
:: P(x) = p[x_0] + p[x_0,x_1](x-x_0) + p[x_0,x_1,x_2](x-x_0)(x-x_1) + \cdots + p[x_0,\ldots,x_n](x-x_0)(x-x_1)\cdots(x-x_{n-1})
* If p is a polynomial function of degree < n, then
:: p[x_0, \dots, x_n] = 0.
* Mean value theorem for divided differences: if f is ''n'' times differentiable, then
:: f[x_0,\dots,x_n] = \frac{f^{(n)}(\xi)}{n!}
for a number \xi in the open interval determined by the smallest and largest of the x_k's.
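The Newton form admits a Horner-like nested evaluation. A minimal sketch, reusing the non-standard divided_differences helper from the Definition section:

```python
def newton_eval(xs, coeffs, t):
    """Evaluate c_0 + c_1 (t - x_0) + ... + c_n (t - x_0)...(t - x_{n-1})
    from the inside out (Horner-like nesting)."""
    result = coeffs[-1]
    for j in range(len(coeffs) - 2, -1, -1):
        result = result * (t - xs[j]) + coeffs[j]
    return result

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 2.0, 5.0, 10.0]                           # y = x^2 + 1 again
coeffs = [col[0] for col in divided_differences(xs, ys)]
print(newton_eval(xs, coeffs, 1.5))                  # 3.25 = 1.5^2 + 1
```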


Matrix form

The divided difference scheme can be put into an upper triangular matrix:
:T_f(x_0,\dots,x_n) = \begin{pmatrix}
f[x_0] & f[x_0,x_1] & f[x_0,x_1,x_2] & \ldots & f[x_0,\dots,x_n] \\
0 & f[x_1] & f[x_1,x_2] & \ldots & f[x_1,\dots,x_n] \\
0 & 0 & f[x_2] & \ldots & f[x_2,\dots,x_n] \\
\vdots & \vdots & & \ddots & \vdots \\
0 & 0 & 0 & \ldots & f[x_n]
\end{pmatrix}.
Then it holds:
* T_{f+g}(x) = T_f(x) + T_g(x)
* T_{\lambda f}(x) = \lambda T_f(x) if \lambda is a scalar
* T_{f\cdot g}(x) = T_f(x) \cdot T_g(x)
:: This follows from the Leibniz rule. It means that multiplication of such matrices is commutative. Summarised, the matrices of divided difference schemes with respect to the same set of nodes ''x'' form a commutative ring.
* Since T_f(x) is a triangular matrix, its eigenvalues are obviously f(x_0), \dots, f(x_n).
* Let \delta_\xi be a Kronecker delta-like function, that is
:: \delta_\xi(t) = \begin{cases} 1 &: t=\xi, \\ 0 &: \mbox{else}. \end{cases}
: Obviously f\cdot \delta_\xi = f(\xi)\cdot \delta_\xi, thus \delta_\xi is an eigenfunction of the pointwise function multiplication. That is, T_{\delta_{x_i}}(x) is somehow an "eigenmatrix" of T_f(x): T_f(x) \cdot T_{\delta_{x_i}}(x) = f(x_i) \cdot T_{\delta_{x_i}}(x). However, all columns of T_{\delta_{x_i}}(x) are multiples of each other; the matrix rank of T_{\delta_{x_i}}(x) is 1. So one can compose the matrix of all eigenvectors of T_f(x) from the ''i''-th column of each T_{\delta_{x_i}}(x). Denote the matrix of eigenvectors by U(x). Example:
:: U(x_0,x_1,x_2,x_3) = \begin{pmatrix}
1 & \frac{1}{x_1-x_0} & \frac{1}{(x_2-x_1)(x_2-x_0)} & \frac{1}{(x_3-x_2)(x_3-x_1)(x_3-x_0)} \\
0 & 1 & \frac{1}{x_2-x_1} & \frac{1}{(x_3-x_2)(x_3-x_1)} \\
0 & 0 & 1 & \frac{1}{x_3-x_2} \\
0 & 0 & 0 & 1
\end{pmatrix}
: The diagonalization of T_f(x) can be written as
:: U(x) \cdot \operatorname{diag}(f(x_0),\dots,f(x_n)) = T_f(x) \cdot U(x).
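These identities are straightforward to verify numerically. A hedged NumPy sketch (the helper t_matrix is our own construction, built on the divided_differences helper from the Definition section):

```python
import numpy as np

def t_matrix(xs, f):
    """Upper triangular T_f: entry (k, k+j) is f[x_k, ..., x_{k+j}]."""
    table = divided_differences(xs, [f(x) for x in xs])
    n = len(xs)
    T = np.zeros((n, n))
    for j in range(n):
        for k in range(n - j):
            T[k, k + j] = table[j][k]
    return T

xs = [0.0, 0.5, 1.0, 2.0]
Tf, Tg = t_matrix(xs, np.sin), t_matrix(xs, np.cos)
Tfg = t_matrix(xs, lambda x: np.sin(x) * np.cos(x))
print(np.allclose(Tfg, Tf @ Tg))     # True: the Leibniz rule in matrix form
```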


Polynomials and power series

The matrix
:J = \begin{pmatrix}
x_0 & 1 & 0 & 0 & \cdots & 0 \\
0 & x_1 & 1 & 0 & \cdots & 0 \\
0 & 0 & x_2 & 1 & & 0 \\
\vdots & \vdots & & \ddots & \ddots & \\
0 & 0 & 0 & 0 & \ddots & 1 \\
0 & 0 & 0 & 0 & & x_n
\end{pmatrix}
contains the divided difference scheme for the identity function with respect to the nodes x_0,\dots,x_n, thus J^m contains the divided differences for the power function with exponent m. Consequently, one can obtain the divided differences for a polynomial function p by applying p to the matrix J: if
:p(\xi) = a_0 + a_1\cdot \xi + \dots + a_m\cdot \xi^m
and
:p(J) = a_0 + a_1\cdot J + \dots + a_m\cdot J^m
then
:T_p(x) = p(J).
This is known as ''Opitz' formula''.

Now consider increasing the degree of p to infinity, i.e. turning the Taylor polynomial into a Taylor series. Let f be a function which corresponds to a power series. One can compute the divided difference scheme for f by applying the corresponding matrix series to J: if
:f(\xi) = \sum_{k=0}^\infty a_k \xi^k
and
:f(J) = \sum_{k=0}^\infty a_k J^k
then
:T_f(x) = f(J).
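Opitz' formula can be checked numerically as well. The following sketch is illustrative only; it reuses our t_matrix helper from the Matrix form section and uses scipy.linalg.expm to sum the exponential series of J:

```python
import numpy as np
from scipy.linalg import expm

xs = [0.0, 0.5, 1.0, 2.0]
J = np.diag(xs) + np.diag(np.ones(len(xs) - 1), 1)   # nodes on the diagonal, ones above it

# Opitz' formula for p(xi) = xi^3: T_p(x) = p(J) = J^3
print(np.allclose(np.linalg.matrix_power(J, 3), t_matrix(xs, lambda x: x**3)))   # True

# The power-series version with f = exp: T_f(x) = f(J) = expm(J)
print(np.allclose(expm(J), t_matrix(xs, np.exp)))    # True
```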


Alternative characterizations


Expanded form

:\begin{align}
f[x_0] &= f(x_0) \\
f[x_0,x_1] &= \frac{f(x_0)}{x_0-x_1} + \frac{f(x_1)}{x_1-x_0} \\
f[x_0,x_1,x_2] &= \frac{f(x_0)}{(x_0-x_1)(x_0-x_2)} + \frac{f(x_1)}{(x_1-x_0)(x_1-x_2)} + \frac{f(x_2)}{(x_2-x_0)(x_2-x_1)} \\
f[x_0,x_1,x_2,x_3] &= \frac{f(x_0)}{(x_0-x_1)(x_0-x_2)(x_0-x_3)} + \frac{f(x_1)}{(x_1-x_0)(x_1-x_2)(x_1-x_3)} + \frac{f(x_2)}{(x_2-x_0)(x_2-x_1)(x_2-x_3)} +\\&\quad\quad \frac{f(x_3)}{(x_3-x_0)(x_3-x_1)(x_3-x_2)} \\
f[x_0,\dots,x_n] &= \sum_{j=0}^{n} \frac{f(x_j)}{\prod_{k\in\{0,\dots,n\}\setminus\{j\}} (x_j-x_k)}
\end{align}
With the help of the polynomial function \omega(\xi) = (\xi-x_0) \cdots (\xi-x_n) this can be written as
:f[x_0,\dots,x_n] = \sum_{j=0}^{n} \frac{f(x_j)}{\omega'(x_j)}.
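The expanded sum yields a direct, non-recursive way to compute a single divided difference. A brief sketch (function names are ours), compared against the recursive table from the Definition section:

```python
import math

def divdiff_expanded(xs, f):
    """f[x_0, ..., x_n] via the expanded sum over the nodes."""
    return sum(f(xj) / math.prod(xj - xk for k, xk in enumerate(xs) if k != j)
               for j, xj in enumerate(xs))

xs = [0.0, 0.5, 1.0, 2.0]
print(divdiff_expanded(xs, math.sin))                              # expanded form
print(divided_differences(xs, [math.sin(x) for x in xs])[-1][0])   # recursive table, same value
```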


Peano form

If x_0 < x_1 < \cdots < x_n and n \geq 1, the divided differences can be expressed as
:f[x_0,\ldots,x_n] = \frac{1}{(n-1)!} \int_{x_0}^{x_n} f^{(n)}(t)\;B_{n-1}(t) \, dt
where f^{(n)} is the ''n''-th derivative of the function f and B_{n-1} is a certain B-spline of degree n-1 for the data points x_0,\dots,x_n, given by the formula
:B_{n-1}(t) = \sum_{k=0}^n \frac{\left(\max(0, x_k - t)\right)^{n-1}}{\omega'(x_k)}.
This is a consequence of the Peano kernel theorem; it is called the ''Peano form'' of the divided differences and B_{n-1} is the ''Peano kernel'' for the divided differences, all named after Giuseppe Peano.
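A small numerical sanity check of the Peano form for n = 2 (nodes and function chosen by us for illustration, reusing the divided_differences helper from the Definition section):

```python
import math
from scipy.integrate import quad

xs = [0.0, 0.5, 1.0]                      # n = 2, strictly increasing nodes

def omega_prime(xj):
    return math.prod(xj - xk for xk in xs if xk != xj)

def B1(t):
    """Peano kernel: the degree-1 B-spline from the formula above."""
    return sum(max(0.0, xk - t) / omega_prime(xk) for xk in xs)

f, d2f = math.sin, lambda t: -math.sin(t)            # f and its 2nd derivative
integral, _ = quad(lambda t: d2f(t) * B1(t), xs[0], xs[-1])
print(integral / math.factorial(2 - 1))              # Peano form of f[x_0, x_1, x_2]
print(divided_differences(xs, [f(x) for x in xs])[-1][0])
```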


Forward differences

When the data points are equidistantly distributed we get the special case called forward differences. They are easier to calculate than the more general divided differences.

Given ''n''+1 data points
:(x_0, y_0),\ldots,(x_n, y_n)
with
:x_k = x_0 + k h,\ \text{ for } \ k=0,\ldots,n \text{ and fixed } h>0,
the forward differences are defined as
:\Delta^{(0)} y_k := y_k,\qquad k=0,\ldots,n
:\Delta^{(j)} y_k := \Delta^{(j-1)} y_{k+1} - \Delta^{(j-1)} y_k,\qquad k=0,\ldots,n-j,\ j=1,\dots,n.
:\begin{matrix}
y_0 & & & \\
 & \Delta y_0 & & \\
y_1 & & \Delta^2 y_0 & \\
 & \Delta y_1 & & \Delta^3 y_0 \\
y_2 & & \Delta^2 y_1 & \\
 & \Delta y_2 & & \\
y_3 & & & \\
\end{matrix}
The relationship between divided differences and forward differences is
:[y_0, y_1, \ldots, y_n] = \frac{1}{n! h^n} \Delta^{(n)} y_0.
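A short sketch of the forward-difference table and the stated relationship (again with our own helper names, reusing divided_differences from the Definition section):

```python
import math

def forward_differences(ys):
    """deltas[j][k] = Delta^j y_k, built column by column."""
    deltas = [list(ys)]
    for _ in range(len(ys) - 1):
        prev = deltas[-1]
        deltas.append([prev[k + 1] - prev[k] for k in range(len(prev) - 1)])
    return deltas

h, x0 = 0.5, 0.0
xs = [x0 + k * h for k in range(4)]
ys = [math.sin(x) for x in xs]
n = len(xs) - 1
print(forward_differences(ys)[n][0] / (math.factorial(n) * h**n))  # (1/(n! h^n)) Delta^n y_0
print(divided_differences(xs, ys)[-1][0])                          # equals [y_0, ..., y_n]
```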


See also

* Difference quotient
* Neville's algorithm
* Polynomial interpolation
* Mean value theorem for divided differences
* Nörlund–Rice integral
* Pascal's triangle




External links

* An implementation of divided differences in Haskell.