Automatic differentiation
In mathematics and computer algebra, automatic differentiation (AD), also called algorithmic differentiation, computational differentiation, auto-differentiation, or simply autodiff, is a set of techniques to evaluate the derivative of a function specified by a computer program. AD exploits the fact that every computer program, no matter how complicated, executes a sequence of elementary arithmetic operations (addition, subtraction, multiplication, division, etc.) and elementary functions (exp, log, sin, cos, etc.). By applying the chain rule repeatedly to these operations, derivatives of arbitrary order can be computed automatically, accurately to working precision, and using at most a small constant factor more arithmetic operations than the original program.

Automatic differentiation is distinct from symbolic differentiation and numerical differentiation. Symbolic differentiation faces the difficulty of converting a computer program into a single mathematical expression and can lead to inefficient code. Numerical differentiation (the method of finite differences) can introduce round-off errors in the discretization process and cancellation. Both of these classical methods have problems with calculating higher derivatives, where complexity and errors increase. Finally, both classical methods are slow at computing the partial derivatives of a function with respect to ''many'' inputs, as is needed for gradient-based optimization algorithms. Automatic differentiation solves all of these problems.


The chain rule, forward and reverse accumulation

Fundamental to AD is the decomposition of differentials provided by the chain rule. For the simple composition
\begin{align} y &= f(g(h(x))) = f(g(h(w_0))) = f(g(w_1)) = f(w_2) = w_3 \\ w_0 &= x \\ w_1 &= h(w_0) \\ w_2 &= g(w_1) \\ w_3 &= f(w_2) = y \end{align}
the chain rule gives
\frac{dy}{dx} = \frac{dy}{dw_2} \frac{dw_2}{dw_1} \frac{dw_1}{dx} = \frac{df(w_2)}{dw_2} \frac{dg(w_1)}{dw_1} \frac{dh(w_0)}{dw_0}

Usually, two distinct modes of AD are presented: forward accumulation (or forward mode) and reverse accumulation (or reverse mode). Forward accumulation specifies that one traverses the chain rule from inside to outside (that is, first compute dw_1/dx, then dw_2/dw_1, and at last dy/dw_2), while reverse accumulation traverses from outside to inside (first compute dy/dw_2, then dw_2/dw_1, and at last dw_1/dx). More succinctly,
# forward accumulation computes the recursive relation \frac{dw_i}{dx} = \frac{dw_i}{dw_{i-1}} \frac{dw_{i-1}}{dx} with w_3 = y, and
# reverse accumulation computes the recursive relation \frac{dy}{dw_i} = \frac{dy}{dw_{i+1}} \frac{dw_{i+1}}{dw_i} with w_0 = x.
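As a concrete illustration, the Python sketch below evaluates the same chain-rule product in both orders. The particular functions h(x) = x^2, g(w) = sin w, f(w) = e^w are arbitrary choices for demonstration, not taken from the text; the two traversals differ only in where the parenthesization starts, and agree up to floating point.

```python
import math

# Illustrative composition y = f(g(h(x))) with assumed concrete functions.
x = 1.5
w0 = x
w1 = w0 ** 2          # w1 = h(w0),  dw1/dw0 = 2*w0
w2 = math.sin(w1)     # w2 = g(w1),  dw2/dw1 = cos(w1)
w3 = math.exp(w2)     # y  = f(w2),  dy/dw2  = exp(w2)

d_w1_x = 2 * w0
d_w2_w1 = math.cos(w1)
d_y_w2 = math.exp(w2)

# Forward accumulation: traverse from inside to outside.
forward = d_y_w2 * (d_w2_w1 * d_w1_x)
# Reverse accumulation: traverse from outside to inside.
reverse = (d_y_w2 * d_w2_w1) * d_w1_x

print(forward, reverse)  # identical up to floating-point rounding
```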


Forward accumulation

In forward accumulation AD, one first fixes the ''independent variable'' with respect to which differentiation is performed and computes the derivative of each sub-expression recursively. In a pen-and-paper calculation, this involves repeatedly substituting the derivative of the ''inner'' functions in the chain rule:
\begin{align} \frac{\partial y}{\partial x} &= \frac{\partial y}{\partial w_{n-1}} \frac{\partial w_{n-1}}{\partial x} \\ &= \frac{\partial y}{\partial w_{n-1}} \left(\frac{\partial w_{n-1}}{\partial w_{n-2}} \frac{\partial w_{n-2}}{\partial x}\right) \\ &= \frac{\partial y}{\partial w_{n-1}} \left(\frac{\partial w_{n-1}}{\partial w_{n-2}} \left(\frac{\partial w_{n-2}}{\partial w_{n-3}} \frac{\partial w_{n-3}}{\partial x}\right)\right) \\ &= \cdots \end{align}
This can be generalized to multiple variables as a matrix product of Jacobians.

Compared to reverse accumulation, forward accumulation is natural and easy to implement as the flow of derivative information coincides with the order of evaluation. Each variable w is augmented with its derivative \dot w (stored as a numerical value, not a symbolic expression),
\dot w = \frac{\partial w}{\partial x}
as denoted by the dot. The derivatives are then computed in sync with the evaluation steps and combined with other derivatives via the chain rule.

As an example, consider the function:
\begin{align} z &= f(x_1, x_2) \\ &= x_1 x_2 + \sin x_1 \\ &= w_1 w_2 + \sin w_1 \\ &= w_3 + w_4 \\ &= w_5 \end{align}
For clarity, the individual sub-expressions have been labeled with the variables w_i. The choice of the independent variable with respect to which differentiation is performed affects the ''seed'' values \dot w_1 and \dot w_2. Given interest in the derivative of this function with respect to x_1, the seed values should be set to:
\begin{align} \dot w_1 &= \frac{\partial x_1}{\partial x_1} = 1 \\ \dot w_2 &= \frac{\partial x_2}{\partial x_1} = 0 \end{align}
With the seed values set, the values propagate using the chain rule as shown:
\begin{align} w_1 &= x_1 & \dot w_1 &= 1 \text{ (seed)} \\ w_2 &= x_2 & \dot w_2 &= 0 \text{ (seed)} \\ w_3 &= w_1 w_2 & \dot w_3 &= \dot w_1 w_2 + w_1 \dot w_2 \\ w_4 &= \sin w_1 & \dot w_4 &= \dot w_1 \cos w_1 \\ w_5 &= w_3 + w_4 & \dot w_5 &= \dot w_3 + \dot w_4 \end{align}
Figure 2 shows a pictorial depiction of this process as a computational graph.

To compute the gradient of this example function, which requires the derivatives of z with respect to not only x_1 but also x_2, an ''additional'' sweep is performed over the computational graph using the seed values \dot w_1 = 0; \dot w_2 = 1.

The computational complexity of one sweep of forward accumulation is proportional to the complexity of the original code. Forward accumulation is more efficient than reverse accumulation for functions f : \R^n \to \R^m with n \ll m, as only n sweeps are necessary, compared to m sweeps for reverse accumulation.
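The sweep above can be written out directly in code. The following minimal Python sketch performs one forward-mode pass over the example z = x_1 x_2 + \sin x_1; the function name f_forward and its explicit seed arguments are illustrative choices, not a standard API.

```python
import math

def f_forward(x1, x2, dx1, dx2):
    """One forward-mode sweep over z = x1*x2 + sin(x1).

    (dx1, dx2) are the seed values for (w1_dot, w2_dot)."""
    w1, w1d = x1, dx1
    w2, w2d = x2, dx2
    w3, w3d = w1 * w2, w1d * w2 + w1 * w2d    # product rule
    w4, w4d = math.sin(w1), w1d * math.cos(w1)
    w5, w5d = w3 + w4, w3d + w4d              # sum rule
    return w5, w5d

# dz/dx1 with seeds (1, 0); a second sweep with seeds (0, 1) gives dz/dx2,
# matching the statement that n sweeps are needed for a full gradient.
z, dz_dx1 = f_forward(2.0, 3.0, 1.0, 0.0)
_, dz_dx2 = f_forward(2.0, 3.0, 0.0, 1.0)
print(z, dz_dx1, dz_dx2)  # dz/dx1 = x2 + cos(x1), dz/dx2 = x1
```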


Reverse accumulation

In reverse accumulation AD, the ''dependent variable'' to be differentiated is fixed and the derivative is computed ''with respect to'' each sub-expression recursively. In a pen-and-paper calculation, the derivative of the ''outer'' functions is repeatedly substituted in the chain rule:
\frac{\partial y}{\partial x} = \frac{\partial y}{\partial w_1} \frac{\partial w_1}{\partial x} = \left(\frac{\partial y}{\partial w_2} \frac{\partial w_2}{\partial w_1}\right) \frac{\partial w_1}{\partial x} = \left(\left(\frac{\partial y}{\partial w_3} \frac{\partial w_3}{\partial w_2}\right) \frac{\partial w_2}{\partial w_1}\right) \frac{\partial w_1}{\partial x} = \cdots

In reverse accumulation, the quantity of interest is the ''adjoint'', denoted with a bar (\bar w); it is the derivative of a chosen dependent variable y with respect to a subexpression w:
\bar w = \frac{\partial y}{\partial w}

Reverse accumulation traverses the chain rule from outside to inside, or in the case of the computational graph in Figure 3, from top to bottom. The example function is scalar-valued, and thus there is only one seed for the derivative computation, and only one sweep of the computational graph is needed to calculate the (two-component) gradient. This is only half the work when compared to forward accumulation, but reverse accumulation requires the storage of the intermediate variables w_i as well as the instructions that produced them in a data structure known as a Wengert list (or "tape"), which may consume significant memory if the computational graph is large. This can be mitigated to some extent by storing only a subset of the intermediate variables and then reconstructing the necessary work variables by repeating the evaluations, a technique known as rematerialization. Checkpointing is also used to save intermediary states.

The operations to compute the derivative using reverse accumulation are shown below (note the reversed order); for the example function z = w_5 defined above, seeded with \bar w_5 = 1:
\begin{align} \bar w_5 &= 1 \\ \bar w_4 &= \bar w_5 \\ \bar w_3 &= \bar w_5 \\ \bar w_2 &= \bar w_3 \cdot w_1 \\ \bar w_1 &= \bar w_3 \cdot w_2 + \bar w_4 \cos w_1 \end{align}

The data flow graph of a computation can be manipulated to calculate the gradient of its original calculation. This is done by adding an adjoint node for each primal node, connected by adjoint edges which parallel the primal edges but flow in the opposite direction. The nodes in the adjoint graph represent multiplication by the derivatives of the functions calculated by the nodes in the primal. For instance, addition in the primal causes fanout in the adjoint; fanout in the primal causes addition in the adjoint; a unary function y = f(x) in the primal causes \bar x = \bar y f'(x) in the adjoint; etc.

Reverse accumulation is more efficient than forward accumulation for functions f : \R^n \to \R^m with m \ll n, as only m sweeps are necessary, compared to n sweeps for forward accumulation. Reverse mode AD was first published in 1976 by Seppo Linnainmaa. Backpropagation of errors in multilayer perceptrons, a technique used in machine learning, is a special case of reverse mode AD.
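In code, the reverse sweep of the running example first records the primal intermediates, then propagates adjoints in the opposite order. The minimal Python sketch below is illustrative (the name f_reverse is an assumption; real tools record the tape automatically rather than by hand):

```python
import math

def f_reverse(x1, x2):
    """Reverse-mode sweep over z = x1*x2 + sin(x1).

    The forward (primal) pass stores the intermediates; the backward
    (adjoint) pass visits them in reversed order."""
    # Forward pass: evaluate and keep the intermediates ("tape").
    w1, w2 = x1, x2
    w3 = w1 * w2
    w4 = math.sin(w1)
    w5 = w3 + w4

    # Backward pass, seeded with w5_bar = dz/dz = 1.
    w5b = 1.0
    w4b = w5b                  # from w5 = w3 + w4
    w3b = w5b
    w2b = w3b * w1             # from w3 = w1 * w2
    w1b = w3b * w2
    w1b += w4b * math.cos(w1)  # from w4 = sin(w1)
    return w5, (w1b, w2b)      # value and full gradient in one sweep

z, (dz_dx1, dz_dx2) = f_reverse(2.0, 3.0)
print(z, dz_dx1, dz_dx2)
```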


Beyond forward and reverse accumulation

Forward and reverse accumulation are just two (extreme) ways of traversing the chain rule. The problem of computing a full Jacobian of f : \R^n \to \R^m with a minimum number of arithmetic operations is known as the ''optimal Jacobian accumulation'' (OJA) problem, which is NP-complete. Central to this proof is the idea that algebraic dependencies may exist between the local partials that label the edges of the graph. In particular, two or more edge labels may be recognized as equal. The complexity of the problem is still open if it is assumed that all edge labels are unique and algebraically independent.


Automatic differentiation using dual numbers

Forward mode automatic differentiation is accomplished by augmenting the algebra of real numbers and obtaining a new arithmetic. An additional component is added to every number to represent the derivative of a function at the number, and all arithmetic operators are extended for the augmented algebra. The augmented algebra is the algebra of dual numbers.

Replace every number x with the number x + x'\varepsilon, where x' is a real number, but \varepsilon is an abstract number with the property \varepsilon^2 = 0 (an infinitesimal; see ''Smooth infinitesimal analysis''). Using only this, regular arithmetic gives
\begin{align} (x + x'\varepsilon) + (y + y'\varepsilon) &= x + y + (x' + y')\varepsilon \\ (x + x'\varepsilon) - (y + y'\varepsilon) &= x - y + (x' - y')\varepsilon \\ (x + x'\varepsilon) \cdot (y + y'\varepsilon) &= xy + xy'\varepsilon + yx'\varepsilon + x'y'\varepsilon^2 = xy + (xy' + yx')\varepsilon \\ (x + x'\varepsilon) / (y + y'\varepsilon) &= (x/y + x'\varepsilon/y) / (1 + y'\varepsilon/y) = (x/y + x'\varepsilon/y) \cdot (1 - y'\varepsilon/y) = x/y + (x'/y - xy'/y^2)\varepsilon \end{align}
using (1 + y'\varepsilon/y) \cdot (1 - y'\varepsilon/y) = 1.

Now, polynomials can be calculated in this augmented arithmetic. If P(x) = p_0 + p_1 x + p_2 x^2 + \cdots + p_n x^n, then
\begin{align} P(x + x'\varepsilon) &= p_0 + p_1 (x + x'\varepsilon) + \cdots + p_n (x + x'\varepsilon)^n \\ &= p_0 + p_1 x + \cdots + p_n x^n + p_1 x'\varepsilon + 2 p_2 x x'\varepsilon + \cdots + n p_n x^{n-1} x'\varepsilon \\ &= P(x) + P^{(1)}(x) x'\varepsilon \end{align}
where P^{(1)} denotes the derivative of P with respect to its first argument, and x', called a ''seed'', can be chosen arbitrarily.

The new arithmetic consists of ordered pairs, elements written \langle x, x' \rangle, with ordinary arithmetic on the first component and first-order differentiation arithmetic on the second component, as described above. Extending the above results on polynomials to analytic functions gives a list of the basic arithmetic and some standard functions for the new arithmetic:
\begin{align} \langle u, u' \rangle + \langle v, v' \rangle &= \langle u + v, u' + v' \rangle \\ \langle u, u' \rangle - \langle v, v' \rangle &= \langle u - v, u' - v' \rangle \\ \langle u, u' \rangle \cdot \langle v, v' \rangle &= \langle uv, u'v + uv' \rangle \\ \langle u, u' \rangle / \langle v, v' \rangle &= \left\langle \frac{u}{v}, \frac{u'v - uv'}{v^2} \right\rangle \quad (v \ne 0) \\ \sin \langle u, u' \rangle &= \langle \sin u, u' \cos u \rangle \\ \cos \langle u, u' \rangle &= \langle \cos u, -u' \sin u \rangle \\ \exp \langle u, u' \rangle &= \langle \exp u, u' \exp u \rangle \\ \log \langle u, u' \rangle &= \langle \log u, u'/u \rangle \quad (u > 0) \\ \langle u, u' \rangle^k &= \langle u^k, k u^{k-1} u' \rangle \quad (u \ne 0) \\ \left| \langle u, u' \rangle \right| &= \langle |u|, u' \operatorname{sign} u \rangle \quad (u \ne 0) \end{align}
and in general for the primitive function g,
g(\langle u, u' \rangle, \langle v, v' \rangle) = \langle g(u, v), g_u(u, v) u' + g_v(u, v) v' \rangle
where g_u and g_v are the derivatives of g with respect to its first and second arguments, respectively.

When a binary basic arithmetic operation is applied to mixed arguments (the pair \langle u, u' \rangle and a real number c), the real number is first lifted to \langle c, 0 \rangle. The derivative of a function f : \R \to \R at the point x_0 is now found by calculating f(\langle x_0, 1 \rangle) using the above arithmetic, which gives \langle f(x_0), f'(x_0) \rangle as the result.
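This pair arithmetic translates directly into an operator-overloading sketch. The following minimal Python class implements only the handful of operations used below; the names Dual, lift, and the helper sin are illustrative choices, not any particular library's API.

```python
import math

class Dual:
    """Pair <value, derivative> realizing the dual-number arithmetic."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    @staticmethod
    def lift(c):
        # Lift a plain real constant c to <c, 0>.
        return c if isinstance(c, Dual) else Dual(c, 0.0)

    def __add__(self, other):
        other = Dual.lift(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = Dual.lift(other)
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

def sin(u):
    # sin<u, u'> = <sin u, u' cos u>
    u = Dual.lift(u)
    return Dual(math.sin(u.val), u.dot * math.cos(u.val))

# f(x) = x*x + sin(x); evaluating f(<x0, 1>) yields <f(x0), f'(x0)>.
x = Dual(2.0, 1.0)   # seed x' = 1
y = x * x + sin(x)
print(y.val, y.dot)  # f(2) and f'(2) = 2*2 + cos(2)
```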


Vector arguments and functions

Multivariate functions can be handled with the same efficiency and mechanisms as univariate functions by adopting a directional derivative operator. That is, if it is sufficient to compute y' = \nabla f(x)\cdot x', the directional derivative y' \in \R^m of f:\R^n\to\R^m at x \in \R^n in the direction x' \in \R^n may be calculated as (\langle y_1,y'_1\rangle, \ldots, \langle y_m,y'_m\rangle) = f(\langle x_1,x'_1\rangle, \ldots, \langle x_n,x'_n\rangle) using the same arithmetic as above. If all the elements of \nabla f are desired, then n function evaluations are required. Note that in many optimization applications, the directional derivative is indeed sufficient.
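Continuing with the hypothetical Dual class and sin helper from the previous sketch (an assumption carried over, not self-contained), seeding each input with the corresponding component of the direction x' yields the directional derivative in a single pass:

```python
# Directional derivative of f(x1, x2) = x1*x2 + sin(x1) at x = (2, 3)
# in the direction xp = (a, b): seed each input with its component.
def f(x1, x2):
    return x1 * x2 + sin(x1)

a, b = 1.0, -1.0  # illustrative direction
y = f(Dual(2.0, a), Dual(3.0, b))
print(y.val, y.dot)  # y.dot = grad_f(x) . (a, b)
```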


High order and many variables

The above arithmetic can be generalized to calculate second-order and higher derivatives of multivariate functions. However, the arithmetic rules quickly grow complicated: complexity is quadratic in the highest derivative degree. Instead, truncated Taylor polynomial algebra can be used. The resulting arithmetic, defined on generalized dual numbers, allows efficient computation using functions as if they were a data type. Once the Taylor polynomial of a function is known, the derivatives are easily extracted.
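As a sketch of this idea, truncated second-order Taylor coefficients can be propagated with rules that follow from the Cauchy product. The class name Taylor2 and helper taylor_sin below are illustrative assumptions, not from any particular library.

```python
import math

class Taylor2:
    """Coefficients (c0, c1, c2) of c0 + c1*e + c2*e^2 with e^3 = 0;
    a minimal second-order generalized dual number."""
    def __init__(self, c0, c1=0.0, c2=0.0):
        self.c = (c0, c1, c2)

    def __add__(self, other):
        a, b = self.c, other.c
        return Taylor2(a[0] + b[0], a[1] + b[1], a[2] + b[2])

    def __mul__(self, other):
        a, b = self.c, other.c
        # Cauchy product, truncated after the e^2 term.
        return Taylor2(a[0] * b[0],
                       a[0] * b[1] + a[1] * b[0],
                       a[0] * b[2] + a[1] * b[1] + a[2] * b[0])

def taylor_sin(u):
    c0, c1, c2 = u.c
    # Compose sin with the series: sin(c0) + cos(c0)*c1*e
    #   + (cos(c0)*c2 - sin(c0)*c1^2/2)*e^2.
    return Taylor2(math.sin(c0),
                   math.cos(c0) * c1,
                   math.cos(c0) * c2 - math.sin(c0) * c1 * c1 / 2)

# f(x) = x*x + sin(x); expand around x0 via the seed series x0 + 1*e.
x = Taylor2(2.0, 1.0, 0.0)
y = x * x + taylor_sin(x)
f, df = y.c[0], y.c[1]
d2f = 2 * y.c[2]   # second derivative = 2 * (coefficient of e^2)
print(f, df, d2f)  # f''(x) = 2 - sin(x)
```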


Implementation

Forward-mode AD is implemented by a nonstandard interpretation of the program in which real numbers are replaced by dual numbers, constants are lifted to dual numbers with a zero epsilon coefficient, and the numeric primitives are lifted to operate on dual numbers. This nonstandard interpretation is generally implemented using one of two strategies: ''source code transformation'' or ''operator overloading''.


Source code transformation (SCT)

The source code for a function is replaced by an automatically generated source code that includes statements for calculating the derivatives interleaved with the original instructions. Source code transformation can be implemented for all programming languages, and it also makes compile-time optimizations easier for the compiler. However, the implementation of the AD tool itself is more difficult and the build system is more complex.
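A hypothetical before-and-after pair suggests what such a tool might emit for the running example. Actual source-transformation tools generate different (and far more general) code, so this is only a sketch of the idea; the name f_d follows no real tool's convention.

```python
import math

# Original source.
def f(x1, x2):
    w3 = x1 * x2
    w4 = math.sin(x1)
    return w3 + w4

# Sketch of machine-generated forward-mode code: each original
# statement is preceded by a statement updating its derivative.
def f_d(x1, x1d, x2, x2d):
    w3d = x1d * x2 + x1 * x2d
    w3 = x1 * x2
    w4d = x1d * math.cos(x1)
    w4 = math.sin(x1)
    return w3 + w4, w3d + w4d

print(f(2.0, 3.0))
print(f_d(2.0, 1.0, 3.0, 0.0))  # value and dz/dx1
```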


Operator overloading (OO)

Operator overloading is a possibility for source code written in a language supporting it. Objects for real numbers and elementary mathematical operations must be overloaded to cater for the augmented arithmetic depicted above. This requires no change in the form or sequence of operations in the original source code for the function to be differentiated, but it often requires changes in basic data types for numbers and vectors to support overloading, and it often also involves the insertion of special flagging operations. Examples of operator-overloading implementations of automatic differentiation in C++ are the Adept and Stan libraries and NAG's dco library.


See also

* Differentiable programming




External links


* www.autodiff.org – An "entry site to everything you want to know about automatic differentiation"
* Automatic Differentiation of Parallel OpenMP Programs
* Automatic Differentiation, C++ Templates and Photogrammetry
* Automatic Differentiation, Operator Overloading Approach
* Compute analytic derivatives of any Fortran77, Fortran95, or C program through a web-based interface – Automatic Differentiation of Fortran programs
* Description and example code for forward Automatic Differentiation in Scala
* finmath-lib stochastic automatic differentiation – Automatic differentiation for random variables (Java implementation of the stochastic automatic differentiation)
* Adjoint Algorithmic Differentiation: Calibration and Implicit Function Theorem, an implementation
* Tangent
* Exact First- and Second-Order Greeks by Algorithmic Differentiation
* Adjoint Algorithmic Differentiation of a GPU Accelerated Application
* Adjoint Methods in Computational Finance Software Tool Support for Algorithmic Differentiation
* More than a Thousand Fold Speed Up for xVA Pricing Calculations with Intel Xeon Scalable Processors