Subgradient methods are iterative methods for solving convex minimization problems. Originally developed by Naum Z. Shor and others in the 1960s and 1970s, subgradient methods are convergent even when applied to a non-differentiable objective function. When the objective function is differentiable, subgradient methods for unconstrained problems use the same search direction as the method of steepest descent.
Subgradient methods are slower than Newton's method when applied to minimize twice continuously differentiable convex functions. However, Newton's method fails to converge on problems that have non-differentiable kinks.
In recent years, some interior-point methods have been suggested for convex minimization problems, but subgradient projection methods and related bundle methods of descent remain competitive. For convex minimization problems with a very large number of dimensions, subgradient-projection methods are suitable, because they require little storage.
Subgradient projection methods are often applied to large-scale problems with decomposition techniques. Such decomposition methods often allow a simple distributed method for a problem.
Classical subgradient rules
Let f : \mathbb{R}^n \to \mathbb{R} be a convex function with domain \mathbb{R}^n. A classical subgradient method iterates
:x^{(k+1)} = x^{(k)} - \alpha_k g^{(k)},
where g^{(k)} denotes ''any'' subgradient of f at x^{(k)}, and x^{(k)} is the k-th iterate of x. If f is differentiable, then its only subgradient is the gradient vector \nabla f itself.
It may happen that -g^{(k)} is not a descent direction for f at x^{(k)}. We therefore maintain a list f_{\rm best} that keeps track of the lowest objective function value found so far, i.e.
:f_{\rm best}^{(k)} = \min\{ f_{\rm best}^{(k-1)}, f(x^{(k)}) \}.
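The iteration above, together with the best-value bookkeeping, can be sketched in a few lines of Python. This is a minimal illustration, assuming the hypothetical non-differentiable objective f(x) = |x - 3| (minimizer x* = 3, optimal value 0) and a diminishing step size \alpha_k = 1/k; neither is prescribed by the text.

```python
# Minimal sketch of the classical subgradient method on f(x) = |x - 3|
# (a hypothetical example objective; the minimizer is x* = 3).
def f(x):
    return abs(x - 3.0)

def subgradient(x):
    # Any subgradient of |x - 3|: +/-1 away from the kink,
    # and any value in [-1, 1] at x = 3 (we pick 0).
    return 1.0 if x > 3.0 else (-1.0 if x < 3.0 else 0.0)

x = 0.0
f_best = f(x)
for k in range(1, 1001):
    alpha = 1.0 / k                  # diminishing step size (an assumption)
    x = x - alpha * subgradient(x)
    f_best = min(f_best, f(x))       # steps need not descend, so track the best

print(x, f_best)  # x oscillates around 3; f_best approaches 0
```

Note that the iterates oscillate around the kink rather than converging monotonically, which is exactly why the running best value f_best is maintained.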
Step size rules
Many different types of step-size rules are used by subgradient methods. This article notes five classical step-size rules for which convergence proofs are known:
*Constant step size, \alpha_k = \alpha.
*Constant step length, \gamma_k = \gamma = \lVert x^{(k+1)} - x^{(k)} \rVert_2, which gives \alpha_k = \gamma / \lVert g^{(k)} \rVert_2.
*Square summable but not summable step size, i.e. any step sizes satisfying \alpha_k \geq 0, \quad \sum_{k=1}^\infty \alpha_k^2 < \infty, \quad \sum_{k=1}^\infty \alpha_k = \infty.
*Nonsummable diminishing, i.e. any step sizes satisfying \alpha_k \geq 0, \quad \lim_{k\to\infty} \alpha_k = 0, \quad \sum_{k=1}^\infty \alpha_k = \infty.
*Nonsummable diminishing step lengths, i.e. \alpha_k = \gamma_k / \lVert g^{(k)} \rVert_2, where \gamma_k \geq 0, \quad \lim_{k\to\infty} \gamma_k = 0, \quad \sum_{k=1}^\infty \gamma_k = \infty.
For all five rules, the step-sizes are determined "off-line", before the method is iterated; the step-sizes do not depend on preceding iterations. This "off-line" property of subgradient methods differs from the "on-line" step-size rules used for descent methods for differentiable functions: many methods for minimizing differentiable functions satisfy Wolfe's sufficient conditions for convergence, where step-sizes typically depend on the current point and the current search direction. An extensive discussion of step-size rules for subgradient methods, including incremental versions, is given in the books by Bertsekas and by Bertsekas, Nedic, and Ozdaglar.
Convergence results
For constant step-length and scaled subgradients having Euclidean norm equal to one, the subgradient method converges to an arbitrarily close approximation to the minimum value, that is,
:\lim_{k\to\infty} f_{\rm best}^{(k)} - f^* < \epsilon
by a result of Shor.
These classical subgradient methods have poor performance and are no longer recommended for general use.
However, they are still used widely in specialized applications because they are simple and they can be easily adapted to take advantage of the special structure of the problem at hand.
Subgradient-projection and bundle methods
During the 1970s, Claude Lemaréchal and Phil Wolfe proposed "bundle methods" of descent for problems of convex minimization. The meaning of the term "bundle methods" has changed significantly since that time. Modern versions and full convergence analysis were provided by Kiwiel. Contemporary bundle methods often use "level control" rules for choosing step-sizes, developing techniques from the "subgradient-projection" method of Boris T. Polyak (1969). However, there are problems on which bundle methods offer little advantage over subgradient-projection methods.
Constrained optimization
Projected subgradient
One extension of the subgradient method is the projected subgradient method, which solves the constrained optimization problem
:minimize f(x)
subject to
:x \in \mathcal{C},
where \mathcal{C} is a convex set. The projected subgradient method uses the iteration
:x^{(k+1)} = P\left(x^{(k)} - \alpha_k g^{(k)}\right),
where P is projection on \mathcal{C} and g^{(k)} is any subgradient of f at x^{(k)}.
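The projected iteration can be sketched as follows. This is a minimal illustration with a hypothetical problem of my choosing: minimize f(x) = |x - 5| subject to x \in [0, 2], so the projection P is simply clipping to the interval and the constrained optimum is x* = 2.

```python
# Minimal sketch of the projected subgradient method on a hypothetical
# problem: minimize f(x) = |x - 5| subject to x in [0, 2].
# The constrained minimizer is x* = 2, with f(x*) = 3.
def f(x):
    return abs(x - 5.0)

def subgradient(x):
    return 1.0 if x > 5.0 else -1.0   # any subgradient of |x - 5| for x != 5

def project(x):
    return min(max(x, 0.0), 2.0)      # Euclidean projection onto [0, 2]

x = 0.0
for k in range(1, 501):
    alpha = 1.0 / k                   # diminishing step size (an assumption)
    x = project(x - alpha * subgradient(x))

print(x)  # reaches the boundary point x* = 2
```

For simple sets such as boxes, balls, or the probability simplex, the projection is cheap to compute, which is what makes the method attractive for large-scale constrained problems.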
General constraints
The subgradient method can be extended to solve the inequality-constrained problem
:minimize f_0(x)
subject to
:f_i(x) \leq 0, \quad i = 1, \dots, m,
where f_i are convex. The algorithm takes the same form as the unconstrained case
:x^{(k+1)} = x^{(k)} - \alpha_k g^{(k)},
where \alpha_k is a step size, and g^{(k)} is a subgradient of the objective or one of the constraint functions at x^{(k)}. Take
:g^{(k)} \in \begin{cases} \partial f_0(x^{(k)}) & \text{if } f_i(x^{(k)}) \leq 0 \text{ for all } i, \\ \partial f_j(x^{(k)}) & \text{for some } j \text{ such that } f_j(x^{(k)}) > 0, \end{cases}
where \partial f denotes the subdifferential of f. If the current point is feasible, the algorithm uses an objective subgradient; if the current point is infeasible, the algorithm chooses a subgradient of any violated constraint.
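The feasible/infeasible case split can be sketched directly in code. This is a minimal illustration with a hypothetical problem of my choosing: minimize f_0(x) = x^2 subject to f_1(x) = 1 - x \leq 0, whose optimum is x* = 1; the best feasible point seen so far is recorded, since the iterates themselves alternate across the constraint boundary.

```python
# Minimal sketch of the subgradient method with an inequality constraint,
# on a hypothetical problem: minimize f0(x) = x**2 s.t. f1(x) = 1 - x <= 0.
# The constrained optimum is x* = 1 with f0(x*) = 1.
def f0(x): return x * x
def g0(x): return 2.0 * x        # gradient (hence subgradient) of f0
def f1(x): return 1.0 - x
def g1(x): return -1.0           # gradient of the constraint f1

x = 5.0
best_x, best_val = None, float("inf")
for k in range(1, 2001):
    alpha = 1.0 / k              # diminishing step size (an assumption)
    if f1(x) <= 0:               # feasible: step along an objective subgradient
        if f0(x) < best_val:     # record the best feasible point found so far
            best_x, best_val = x, f0(x)
        g = g0(x)
    else:                        # infeasible: step along a violated-constraint subgradient
        g = g1(x)
    x = x - alpha * g

print(best_x, best_val)  # near the constrained optimum x* = 1, f0(x*) = 1
```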
External links
EE364A and EE364B, Stanford's convex optimization course sequence.