In numerical analysis, numerical differentiation algorithms estimate the derivative of a mathematical function or function subroutine using values of the function and perhaps other knowledge about the function.


Finite differences

The simplest method is to use finite difference approximations.

A simple two-point estimation is to compute the slope of a nearby secant line through the points (''x'', ''f''(''x'')) and (''x'' + ''h'', ''f''(''x'' + ''h'')). Choosing a small number ''h'', ''h'' represents a small change in ''x'', and it can be either positive or negative. The slope of this line is
: \frac{f(x + h) - f(x)}{h}.
This expression is Newton's difference quotient (also known as a first-order divided difference).

The slope of this secant line differs from the slope of the tangent line by an amount that is approximately proportional to ''h''. As ''h'' approaches zero, the slope of the secant line approaches the slope of the tangent line. Therefore, the true derivative of ''f'' at ''x'' is the limit of the value of the difference quotient as the secant lines get closer and closer to being a tangent line:
: f'(x) = \lim_{h \to 0} \frac{f(x + h) - f(x)}{h}.
Since immediately substituting 0 for ''h'' results in the indeterminate form \tfrac{0}{0}, calculating the derivative directly can be unintuitive.

Equivalently, the slope could be estimated by employing the positions ''x'' − ''h'' and ''x''.

Another two-point formula is to compute the slope of a nearby secant line through the points (''x'' − ''h'', ''f''(''x'' − ''h'')) and (''x'' + ''h'', ''f''(''x'' + ''h'')). The slope of this line is
: \frac{f(x + h) - f(x - h)}{2h}.
This formula is known as the symmetric difference quotient. In this case the first-order errors cancel, so the slope of these secant lines differs from the slope of the tangent line by an amount that is approximately proportional to h^2. Hence for small values of ''h'' this is a more accurate approximation to the tangent line than the one-sided estimation. However, although the slope is being computed at ''x'', the value of the function at ''x'' is not involved. The estimation error is given by
: R = \frac{-f'''(c)}{6} h^2,
where ''c'' is some point between x − h and x + h. This error does not include the rounding error due to numbers being represented and calculations being performed in limited precision.

The symmetric difference quotient is employed as the method of approximating the derivative in a number of calculators, including the TI-82, TI-83, TI-84, and TI-85, all of which use this method with ''h'' = 0.001.
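
A minimal Python sketch of the two difference quotients above (the function names, test function, and step sizes are illustrative choices, not part of the original text):

 import math

 def forward_difference(f, x, h=1e-8):
     """Newton's difference quotient: one-sided, error roughly proportional to h."""
     return (f(x + h) - f(x)) / h

 def central_difference(f, x, h=1e-5):
     """Symmetric difference quotient: error roughly proportional to h**2."""
     return (f(x + h) - f(x - h)) / (2 * h)

 # Example: derivative of sin at x = 1; the exact value is cos(1).
 x = 1.0
 print(forward_difference(math.sin, x))  # first-order accurate estimate
 print(central_difference(math.sin, x))  # second-order accurate estimate
 print(math.cos(x))                      # exact derivative for comparison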


Step size

An important consideration in practice when the function is calculated using floating-point arithmetic of finite precision is the choice of step size, ''h''. If chosen too small, the subtraction will yield a large rounding error. In fact, all the finite-difference formulae are ill-conditioned [Numerical Differentiation of Analytic Functions, B. Fornberg – ACM Transactions on Mathematical Software (TOMS), 1981] and due to cancellation will produce a value of zero if ''h'' is small enough [Using Complex Variables to Estimate Derivatives of Real Functions, W. Squire, G. Trapp – SIAM Review, 1998]. If chosen too large, the slope of the secant line will be calculated more accurately, but the estimate of the slope of the tangent obtained from the secant could be worse.

For basic central differences, the optimal step is the cube root of machine epsilon. For the numerical derivative formula evaluated at ''x'' and ''x'' + ''h'', a choice for ''h'' that is small without producing a large rounding error is \sqrt{\varepsilon}\, x (though not when ''x'' = 0), where the machine epsilon ''ε'' is typically of the order of 2.2×10⁻¹⁶ for double precision. A formula for ''h'' that balances the rounding error against the secant error for optimum accuracy is
: h = 2 \sqrt{\varepsilon \left| \frac{f(x)}{f''(x)} \right|}
(though not when f''(x) = 0), and to employ it will require knowledge of the function.

For computer calculations the problems are exacerbated because, although ''x'' necessarily holds a representable floating-point number in some precision (32- or 64-bit, ''etc.''), ''x'' + ''h'' almost certainly will not be exactly representable in that precision. This means that ''x'' + ''h'' will be changed (by rounding or truncation) to a nearby machine-representable number, with the consequence that (''x'' + ''h'') − ''x'' will ''not'' equal ''h''; the two function evaluations will not be exactly ''h'' apart. In this regard, since most decimal fractions are recurring sequences in binary (just as 1/3 is in decimal), a seemingly round step such as ''h'' = 0.1 will not be a round number in binary; it is 0.000110011001100...₂. A possible approach is as follows:

 h := sqrt(eps) * x;
 xph := x + h;
 dx := xph - x;
 slope := (F(xph) - F(x)) / dx;

However, with computers, compiler optimization facilities may fail to attend to the details of actual computer arithmetic and instead apply the axioms of mathematics to deduce that ''dx'' and ''h'' are the same. With C and similar languages, a directive that ''xph'' is a volatile variable will prevent this.
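
The same idea expressed as a Python sketch (Python floats are double precision); the helper name is illustrative and the code assumes ''x'' ≠ 0:

 import math
 import sys

 def forward_derivative(f, x):
     """Forward difference with h ~ sqrt(machine epsilon) * x.
     The step actually taken, dx, is recomputed so that the two
     function evaluations are exactly dx apart."""
     eps = sys.float_info.epsilon   # about 2.2e-16 for IEEE double precision
     h = math.sqrt(eps) * x         # assumes x != 0
     xph = x + h
     dx = xph - x                   # the representable step after rounding
     return (f(xph) - f(x)) / dx

Python performs the arithmetic as written, so the recomputed ''dx'' is used directly and no volatile declaration is needed.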


Other methods


Higher-order methods

Higher-order methods for approximating the derivative, as well as methods for higher derivatives, exist. Given below is the five-point method for the first derivative (five-point stencil in one dimension):
: f'(x) = \frac{-f(x + 2h) + 8 f(x + h) - 8 f(x - h) + f(x - 2h)}{12h} + \frac{h^4}{30} f^{(5)}(c),
where c \in [x - 2h, x + 2h]. For other stencil configurations and derivative orders, the Finite Difference Coefficients Calculator is a tool that can be used to generate derivative approximation methods for any stencil with any derivative order (provided a solution exists).
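
A minimal Python sketch of the five-point stencil above (the function name and default step are illustrative):

 def five_point_derivative(f, x, h=1e-3):
     """Five-point stencil estimate of f'(x); truncation error is O(h**4)."""
     return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12*h)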


Higher derivatives

Using Newton's difference quotient,
: f'(x) = \lim_{h \to 0} \frac{f(x + h) - f(x)}{h},
the following can be shown (for ''n'' > 0):
: f^{(n)}(x) = \lim_{h \to 0} \frac{1}{h^n} \sum_{k=0}^{n} (-1)^{k+n} \binom{n}{k} f(x + kh).
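
Truncating this limit at a finite step gives a simple, though poorly conditioned for large ''n'', estimate. A Python sketch with an illustrative default step:

 from math import comb

 def nth_derivative(f, x, n, h=1e-2):
     """Forward-difference estimate of the n-th derivative of f at x.
     The step h must not be too small, since n+1 function values are
     combined and the result is divided by h**n."""
     return sum((-1)**(k + n) * comb(n, k) * f(x + k*h) for k in range(n + 1)) / h**n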


Complex-variable methods

The classical finite-difference approximations for numerical differentiation are ill-conditioned. However, if f is a holomorphic function, real-valued on the real line, which can be evaluated at points in the complex plane near x, then there are stable methods. For example, the first derivative can be calculated by the complex-step derivative formula:
: f'(x) = \frac{\operatorname{Im}\bigl(f(x + \mathrm{i}h)\bigr)}{h} + O(h^2), \qquad \mathrm{i} := \sqrt{-1}.
The recommended step size to obtain accurate derivatives for a range of conditions is h = 10^{-200}. This formula can be obtained by Taylor series expansion:
: f(x + \mathrm{i}h) = f(x) + \mathrm{i} h f'(x) - \frac{h^2 f''(x)}{2!} - \frac{\mathrm{i} h^3 f'''(x)}{3!} + \cdots.
The complex-step derivative formula is only valid for calculating first-order derivatives. A generalization of the above for calculating derivatives of any order employs multicomplex numbers, resulting in multicomplex derivatives:
: f^{(n)}(x) \approx \frac{\mathcal{C}^{(n)}_{2^n - 1}\left[f\left(x + \mathrm{i}^{(1)} h + \cdots + \mathrm{i}^{(n)} h\right)\right]}{h^n},
where the \mathrm{i}^{(k)} denote the multicomplex imaginary units; \mathrm{i}^{(1)} \equiv \mathrm{i}. The \mathcal{C}^{(n)}_k operator extracts the ''k''th component of a multicomplex number of level ''n''; e.g., \mathcal{C}^{(n)}_0 extracts the real component and \mathcal{C}^{(n)}_{2^n - 1} extracts the last, "most imaginary" component. The method can be applied to mixed derivatives, e.g. for a second-order derivative
: \frac{\partial^2 f(x, y)}{\partial x \, \partial y} \approx \frac{\mathcal{C}^{(2)}_{3}\left[f\left(x + \mathrm{i}^{(1)} h,\; y + \mathrm{i}^{(2)} h\right)\right]}{h^2}.
A C++ implementation of multicomplex arithmetic is available.

In general, derivatives of any order can be calculated using Cauchy's integral formula:
: f^{(n)}(a) = \frac{n!}{2 \pi \mathrm{i}} \oint_\gamma \frac{f(z)}{(z - a)^{n+1}} \, \mathrm{d}z,
where the integration is done numerically.

Using complex variables for numerical differentiation was started by Lyness and Moler in 1967. Their algorithm is applicable to higher-order derivatives.

A method based on numerical inversion of a complex Laplace transform was developed by Abate and Dubner. An algorithm that can be used without requiring knowledge about the method or the character of the function was developed by Fornberg.
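
The complex-step formula is straightforward to apply in languages with built-in complex arithmetic. A minimal Python sketch (the test function and step are illustrative; ''f'' must accept complex arguments and be real-analytic near ''x''):

 import cmath

 def complex_step_derivative(f, x, h=1e-200):
     """Complex-step estimate of f'(x). There is no subtraction of nearly
     equal values, so h can be extremely small."""
     return f(x + 1j * h).imag / h

 # Example: f(x) = exp(x) * sin(x); exact derivative is exp(x) * (sin(x) + cos(x))
 f = lambda z: cmath.exp(z) * cmath.sin(z)
 print(complex_step_derivative(f, 0.5))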


Differential quadrature

Differential quadrature is the approximation of derivatives by using weighted sums of function values. Differential quadrature is of practical interest because it allows one to compute derivatives from noisy data. The name is in analogy with ''quadrature'', meaning numerical integration, where weighted sums are used in methods such as Simpson's method or the trapezoidal rule. There are various methods for determining the weight coefficients, for example, the Savitzky–Golay filter. Differential quadrature is used to solve partial differential equations. There are further methods for computing derivatives from noisy data.
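
As one concrete example of a weighted-sum derivative estimate from noisy data, the following Python sketch uses SciPy's Savitzky–Golay filter (the grid, noise level, window length, and polynomial order are illustrative choices):

 import numpy as np
 from scipy.signal import savgol_filter

 # Noisy samples of sin(x) on a uniform grid
 x = np.linspace(0, 2 * np.pi, 200)
 dx = x[1] - x[0]
 y = np.sin(x) + 0.01 * np.random.randn(x.size)

 # Local-polynomial (weighted-sum) estimate of the first derivative
 dy = savgol_filter(y, window_length=21, polyorder=3, deriv=1, delta=dx)
 # dy approximates cos(x) despite the noise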


See also

* List of numerical-analysis software


References


External links


* Numerical Differentiation from wolfram.com
* Numerical Methods for STEM Undergraduate (http://numericalmethods.eng.usf.edu/)
* Fortran code for the numerical differentiation of a function using Neville's process to extrapolate from a sequence of simple polynomial approximations (ftp://math.nist.gov/pub/repository/diff/src/DIFF)
* NAG Library numerical differentiation routines
* Online numerical graphing calculator with calculus function (http://graphulator.co)
* Complex Step Differentiation (https://blogs.mathworks.com/cleve/2013/10/14/complex-step-differentiation/)
* Differentiation With(out) a Difference by Nicholas Higham, SIAM News
* findiff Python project