The adjoint state method is a numerical method for efficiently computing the gradient of a function or operator in a numerical optimization problem. It has applications in geophysics, seismic imaging, photonics and more recently in neural networks.
The adjoint state space is chosen to simplify the physical interpretation of equation
constraints.
[Plessix, R-E. "A review of the adjoint-state method for computing the gradient of a functional with geophysical applications." Geophysical Journal International, 2006, 167(2): 495–503]
Adjoint state techniques allow the use of integration by parts, resulting in a form which explicitly contains the physically interesting quantity. An adjoint state equation is introduced, including a new unknown variable.
The adjoint method formulates the gradient of a function with respect to its parameters in a constrained optimization form. By using the dual form of this constrained optimization problem, the gradient can be computed very quickly. A useful property is that the number of computations is independent of the number of parameters for which the gradient is sought.
The adjoint method is derived from the dual problem and is used e.g. in the Landweber iteration method.
The name adjoint state method refers to the dual form of the problem, where the adjoint matrix A^* = \overline{A}^\top is used.
When the initial problem consists of calculating the product s^\top x and x must satisfy Ax = b, the dual problem can be realized as calculating the product r^\top b, where r must satisfy A^* r = s, and r is called the adjoint state vector.
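This identity is easy to check numerically. The following sketch is an illustrative NumPy example (not from the source): it verifies that s^\top x with Ax = b equals r^\top b with A^* r = s, for a real matrix where A^* = A^\top.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned matrix
b = rng.standard_normal(n)
s = rng.standard_normal(n)

# Primal form: solve A x = b, then evaluate s^T x.
x = np.linalg.solve(A, b)
primal = s @ x

# Dual/adjoint form: solve A^T r = s, then evaluate r^T b.
r = np.linalg.solve(A.T, s)
dual = r @ b

# The two agree: s^T x = s^T A^{-1} b = (A^{-T} s)^T b = r^T b.
assert np.isclose(primal, dual)
```

The payoff of the dual form appears when s^\top x must be evaluated for many right-hand sides b: a single adjoint solve for r replaces one direct solve per b.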
General case
The original adjoint calculation method goes back to Jean Cea, with the use of the Lagrangian of the optimization problem to compute the derivative of a functional with respect to a shape parameter.
For a state variable u \in \mathcal{U}, an optimization variable v \in \mathcal{V}, an objective functional J : \mathcal{U} \times \mathcal{V} \to \mathbb{R} is defined. The state variable u is often implicitly dependent on v through the (direct) state equation D_v(u) = 0 (usually the weak form of a partial differential equation), thus the considered objective is j(v) = J(u_v, v). Usually, one would be interested in calculating \nabla j(v) using the chain rule:
:\nabla j(v) = \nabla_v J(u_v, v) + \nabla_u J(u_v, v) \, \nabla_v u_v .
Unfortunately, the term \nabla_v u_v is often very hard to differentiate analytically since the dependence is defined through an implicit equation. The Lagrangian functional can be used as a workaround for this issue. Since the state equation can be considered as a constraint in the minimization of j, the problem
:\text{minimize } J(u, v)
:\text{subject to } D_v(u) = 0
has an associated Lagrangian functional \mathcal{L} : \mathcal{U} \times \mathcal{V} \times \mathcal{U} \to \mathbb{R} defined by
:\mathcal{L}(u, v, \lambda) = J(u, v) + \langle D_v(u), \lambda \rangle ,
where \lambda \in \mathcal{U} is a Lagrange multiplier or adjoint state variable and \langle \cdot, \cdot \rangle is an inner product on \mathcal{U}. The method of Lagrange multipliers states that a solution to the problem has to be a stationary point of the Lagrangian, namely
:d_u \mathcal{L}(u, v, \lambda; \delta_u) = 0 \qquad \forall \delta_u \in \mathcal{U},
:d_v \mathcal{L}(u, v, \lambda; \delta_v) = 0 \qquad \forall \delta_v \in \mathcal{V},
:d_\lambda \mathcal{L}(u, v, \lambda; \delta_\lambda) = 0 \qquad \forall \delta_\lambda \in \mathcal{U},
where d_x \mathcal{L}(x; \delta_x) is the Gateaux derivative of \mathcal{L} with respect to x in the direction \delta_x. The last equation is equivalent to \langle D_v(u), \delta_\lambda \rangle = 0 for all \delta_\lambda \in \mathcal{U}, the state equation, to which the solution is u_v. The first equation is the so-called adjoint state equation,
:\langle \delta_u, \nabla_u J(u_v, v) \rangle + \langle d_u D_v(u_v; \delta_u), \lambda \rangle = 0 \qquad \forall \delta_u \in \mathcal{U},
because the operator involved is the adjoint operator of D_v, D_v^*. Resolving this equation yields the adjoint state \lambda_v.
The gradient of the quantity of interest j with respect to v is \langle \nabla j(v), \delta_v \rangle = d_v \mathcal{L}(u_v, v, \lambda_v; \delta_v) (the second equation with u = u_v and \lambda = \lambda_v), thus it can be easily identified by subsequently resolving the direct and adjoint state equations. The process is even simpler when the operator D_v is self-adjoint or symmetric since the direct and adjoint state equations differ only by their right-hand side.
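In finite dimensions the procedure above reduces to three steps: solve the direct state equation, solve the adjoint equation, and assemble the gradient from the pair (u_v, \lambda_v). The sketch below is an illustrative instantiation under assumed data (the function names, the matrix A and the right-hand side b(v) are not from the source): state equation A u = b(v) and objective J(u, v) = \langle c, u \rangle, for which the adjoint equation is A^\top \lambda = -c and the gradient is -\lambda^\top b'(v).

```python
import numpy as np

def adjoint_gradient(v, solve_state, solve_adjoint, dv_lagrangian):
    """Hypothetical three-step adjoint recipe for j(v) = J(u_v, v)."""
    u = solve_state(v)               # direct state equation: D_v(u) = 0
    lam = solve_adjoint(u, v)        # adjoint state equation: yields lambda_v
    return dv_lagrangian(u, v, lam)  # gradient = d_v L(u_v, v, lambda_v; .)

# Assumed instantiation: A u = b(v), J(u, v) = <c, u>, so
# L = <c, u> + <A u - b(v), lambda>, adjoint eq. A^T lambda = -c,
# and gradient = -lambda^T b'(v).
A = np.array([[2.0, 0.0], [0.0, 3.0]])
c = np.array([1.0, 1.0])
b = lambda v: np.array([v, v**2])
db = lambda v: np.array([1.0, 2.0 * v])  # b'(v)

grad = adjoint_gradient(
    3.0,
    solve_state=lambda v: np.linalg.solve(A, b(v)),
    solve_adjoint=lambda u, v: np.linalg.solve(A.T, -c),
    dv_lagrangian=lambda u, v, lam: -lam @ db(v),
)

# Direct check: here j(v) = v/2 + v^2/3, so j'(3) = 1/2 + 2 = 2.5.
assert np.isclose(grad, 2.5)
```

Note that only one direct solve and one adjoint solve are needed, regardless of how many components v has.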
Example: Linear case
In a real finite dimensional linear programming context, the objective function could be J(u, v) = \langle u, c \rangle, for v \in \mathbb{R}^n and u, c \in \mathbb{R}^m, and let the state equation be A_v u = b, with A_v \in \mathbb{R}^{m \times m} and b \in \mathbb{R}^m.
The Lagrangian function of the problem is \mathcal{L}(u, v, \lambda) = \langle u, c \rangle + \langle A_v u - b, \lambda \rangle, where \lambda \in \mathbb{R}^m.
The derivative of \mathcal{L} with respect to \lambda yields the state equation as shown before, and the state variable is u_v = A_v^{-1} b. The derivative of \mathcal{L} with respect to u is equivalent to the adjoint equation, which is, for every \delta_u \in \mathbb{R}^m,
:\langle \delta_u, c \rangle + \langle A_v \delta_u, \lambda \rangle = 0 \iff \langle \delta_u, c + A_v^\top \lambda \rangle = 0 \iff A_v^\top \lambda = -c .
Thus, we can write symbolically \lambda_v = -A_v^{-\top} c. The gradient would be
:\langle \nabla j(v), \delta_v \rangle = \langle \nabla_v A : (u_v \otimes \lambda_v), \delta_v \rangle ,
where \nabla_v A is a third order tensor, u_v \otimes \lambda_v is the dyadic product between the direct and adjoint states and : denotes a double tensor contraction. It is assumed that A_v has a known analytic expression that can be differentiated easily.
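The double contraction can be spelled out componentwise as \nabla j(v)_k = \sum_{ij} (\partial A_{ij} / \partial v_k) \, \lambda_i u_j. The sketch below is an illustrative NumPy check under an assumed parametrization (A(v) linear in v, so \partial A / \partial v_k is a constant matrix E_k); it compares the adjoint gradient against central finite differences.

```python
import numpy as np

m, n = 3, 2
rng = np.random.default_rng(1)
A0 = 5 * np.eye(m)
E = rng.standard_normal((n, m, m))  # E[k] = dA/dv_k (A assumed linear in v)
b = rng.standard_normal(m)
c = rng.standard_normal(m)

A = lambda v: A0 + np.einsum('k,kij->ij', v, E)
j = lambda v: c @ np.linalg.solve(A(v), b)  # j(v) = <u_v, c>, A(v) u_v = b

v = np.array([0.3, -0.2])
u = np.linalg.solve(A(v), b)        # direct state u_v
lam = np.linalg.solve(A(v).T, -c)   # adjoint state, A_v^T lambda = -c

# grad_k = sum_ij (dA_ij/dv_k) lambda_i u_j : double contraction of the
# third-order tensor dA/dv with the dyad u (x) lambda.
grad = np.einsum('kij,i,j->k', E, lam, u)

# Central finite-difference check of each component.
eps = 1e-6
fd = np.array([(j(v + eps * np.eye(n)[k]) - j(v - eps * np.eye(n)[k])) / (2 * eps)
               for k in range(n)])
assert np.allclose(grad, fd, atol=1e-5)
```

The einsum makes explicit that the gradient with respect to all n parameters costs one direct solve, one adjoint solve and one contraction, instead of n direct solves.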
Numerical consideration for the self-adjoint case
If the operator A_v were self-adjoint, A_v = A_v^\top, the direct state equation and the adjoint state equation would have the same left-hand side. With the goal of never inverting a matrix, which is a very slow process numerically, an LU decomposition can be used instead to solve the state equation, in O(m^3) operations for the decomposition and O(m^2) operations for the resolution. That same decomposition can then be used to solve the adjoint state equation in only O(m^2) operations, since the matrices are the same.
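The factor-once, solve-twice pattern can be illustrated with SciPy (an assumed library choice; the source names no implementation). The trans flag of lu_solve reuses the same factors for the transposed system, so the pattern also covers the general case; when A is self-adjoint the two systems coincide.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

m = 200
rng = np.random.default_rng(2)
A = rng.standard_normal((m, m)) + m * np.eye(m)  # well-conditioned matrix
b = rng.standard_normal(m)
c = rng.standard_normal(m)

# Factor once: O(m^3).
lu, piv = lu_factor(A)

# Direct state equation A u = b: O(m^2) per solve.
u = lu_solve((lu, piv), b)

# Adjoint state equation A^T lambda = -c, reusing the same factors
# via trans=1; for self-adjoint A the two systems are identical.
lam = lu_solve((lu, piv), -c, trans=1)

assert np.allclose(A @ u, b)
assert np.allclose(A.T @ lam, -c)
```

No matrix is ever explicitly inverted; only triangular back-substitutions are performed after the single factorization.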
See also
* Backpropagation
* Adjoint equation
* Shape optimization
* Method of Lagrange multipliers
References
External links
* A well written explanation by Errico: What is an adjoint Model?
* Another well written explanation with worked examples, written by Bradley
* More technical explanation: A review of the adjoint-state method for computing the gradient of a functional with geophysical applications
* MIT course
* MIT notes
Numerical analysis
{{Mathapplied-stub}}