In optimization, a descent direction is a vector \mathbf{p}\in\mathbb R^n that, in the sense below, moves us closer towards a local minimum \mathbf{x}^* of our objective function f:\mathbb R^n\to\mathbb R.

Suppose we are computing \mathbf{x}^* by an iterative method, such as line search. We define a descent direction \mathbf{p}_k\in\mathbb R^n at the kth iterate to be any \mathbf{p}_k such that \langle\mathbf{p}_k,\nabla f(\mathbf{x}_k)\rangle < 0, where \langle \cdot,\cdot \rangle denotes the inner product. The motivation for such an approach is that small steps along \mathbf{p}_k guarantee that f is reduced, by Taylor's theorem.
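As a concrete illustration, the following sketch (in Python with NumPy; the quadratic test function, starting point, and step size are illustrative choices, not part of the definition) checks the descent condition \langle\mathbf{p}_k,\nabla f(\mathbf{x}_k)\rangle < 0 for a candidate direction and verifies that a small step along it reduces f:

```python
import numpy as np

# Illustrative objective: f(x) = x^T A x / 2 - b^T x with A symmetric
# positive definite; chosen only as a concrete example of a smooth f.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, -1.0])

def f(x):
    return 0.5 * x @ A @ x - b @ x

def grad_f(x):
    return A @ x - b

def is_descent_direction(p, x):
    """p is a descent direction at x iff <p, grad f(x)> < 0."""
    return p @ grad_f(x) < 0

x_k = np.array([2.0, 2.0])
p_k = -grad_f(x_k)          # negative gradient: always a descent direction

assert is_descent_direction(p_k, x_k)

# Taylor's theorem motivation: for a small enough step t > 0,
# f(x_k + t p_k) ~ f(x_k) + t <grad f(x_k), p_k> < f(x_k).
t = 1e-3
assert f(x_k + t * p_k) < f(x_k)
print("descent confirmed:", f(x_k), "->", f(x_k + t * p_k))
```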
Using this definition, the negative of a non-zero gradient is always a descent direction, as \langle -\nabla f(\mathbf{x}_k), \nabla f(\mathbf{x}_k) \rangle = -\langle \nabla f(\mathbf{x}_k), \nabla f(\mathbf{x}_k) \rangle < 0.

Numerous methods exist to compute descent directions, all with differing merits; for example, one could use gradient descent or the conjugate gradient method. More generally, if P is a positive definite matrix, then \mathbf{p}_k = -P\nabla f(\mathbf{x}_k) is a descent direction at \mathbf{x}_k, since \langle -P\nabla f(\mathbf{x}_k), \nabla f(\mathbf{x}_k)\rangle = -\nabla f(\mathbf{x}_k)^\mathsf{T} P\,\nabla f(\mathbf{x}_k) < 0 whenever \nabla f(\mathbf{x}_k)\neq 0. This generality is used in preconditioned gradient descent methods.
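To illustrate this last point, here is a minimal sketch (again Python/NumPy; the particular objective, starting point, and preconditioner P are hypothetical choices for demonstration) verifying that \mathbf{p}_k = -P\nabla f(\mathbf{x}_k) satisfies the descent condition for a symmetric positive definite P:

```python
import numpy as np

# Illustrative smooth objective (the Rosenbrock function) and its gradient;
# any differentiable f works, this choice is only for demonstration.
def f(x):
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

def grad_f(x):
    return np.array([
        -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
        200 * (x[1] - x[0]**2),
    ])

x_k = np.array([-1.2, 1.0])
g = grad_f(x_k)

# Any symmetric positive definite P yields a descent direction p_k = -P g,
# since <p_k, g> = -g^T P g < 0 whenever g != 0.
P = np.array([[2.0, 0.5],
              [0.5, 1.0]])          # symmetric, both eigenvalues positive

p_k = -P @ g
assert p_k @ g < 0                  # descent condition holds
print("<p_k, grad f(x_k)> =", p_k @ g)
```

Taking P as the identity recovers plain gradient descent, while taking P close to the inverse Hessian of f gives a Newton-like, preconditioned direction.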


See also

* Directional derivative

