
Subgradient methods are iterative methods for solving convex minimization problems. Originally developed by Naum Z. Shor and others in the 1960s and 1970s, subgradient methods are convergent when applied even to a non-differentiable objective function. When the objective function is differentiable, subgradient methods for unconstrained problems use the same search direction as the method of steepest descent.

Subgradient methods are slower than Newton's method when applied to minimize twice continuously differentiable convex functions. However, Newton's method fails to converge on problems that have non-differentiable kinks.

In recent years, some interior-point methods have been suggested for convex minimization problems, but subgradient projection methods and related bundle methods of descent remain competitive. For convex minimization problems with a very large number of dimensions, subgradient-projection methods are suitable, because they require little storage.

Subgradient projection methods are often applied to large-scale problems with decomposition techniques. Such decomposition methods often allow a simple distributed method for a problem.


Classical subgradient rules

Let f:\mathbb{R}^n \to \mathbb{R} be a convex function with domain \mathbb{R}^n. A classical subgradient method iterates
:x^{(k+1)} = x^{(k)} - \alpha_k g^{(k)}
where g^{(k)} denotes ''any'' subgradient of f at x^{(k)}, and x^{(k)} is the k-th iterate of x. If f is differentiable, then its only subgradient is the gradient vector \nabla f itself. It may happen that -g^{(k)} is not a descent direction for f at x^{(k)}. We therefore maintain a list f_{\rm best} that keeps track of the lowest objective function value found so far, i.e.
:f_{\rm best}^{(k)} = \min\{ f_{\rm best}^{(k-1)}, f(x^{(k)}) \}.
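
To make the iteration concrete, here is a minimal Python sketch; the function names `subgradient_method`, `f`, and `subgrad`, and the \ell_1 example problem, are illustrative assumptions rather than part of the original text. It tracks f_{\rm best} exactly as described above.

```python
import numpy as np

def subgradient_method(f, subgrad, x0, step_sizes):
    """Classical subgradient iteration x^(k+1) = x^(k) - alpha_k * g^(k)."""
    x = np.asarray(x0, dtype=float)
    x_best, f_best = x.copy(), f(x)
    for alpha in step_sizes:
        g = subgrad(x)                      # any subgradient of f at x
        x = x - alpha * g                   # not necessarily a descent step
        fx = f(x)
        if fx < f_best:                     # track the lowest value found so far
            x_best, f_best = x.copy(), fx
    return x_best, f_best

# Example: f(x) = ||x - c||_1 has subgradient sign(x - c).
c = np.array([1.0, -2.0, 3.0])
f = lambda x: np.abs(x - c).sum()
subgrad = lambda x: np.sign(x - c)
x_best, f_best = subgradient_method(f, subgrad, np.zeros(3), [0.1] * 500)
print(x_best, f_best)
```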


Step size rules

Many different types of step-size rules are used by subgradient methods. This article notes five classical step-size rules for which convergence proofs are known:
*Constant step size, \alpha_k = \alpha.
*Constant step length, \alpha_k = \gamma/\lVert g^{(k)} \rVert_2, which gives \lVert x^{(k+1)} - x^{(k)} \rVert_2 = \gamma.
*Square summable but not summable step size, i.e. any step sizes satisfying \alpha_k \geq 0,\qquad \sum_{k=1}^\infty \alpha_k^2 < \infty,\qquad \sum_{k=1}^\infty \alpha_k = \infty.
*Nonsummable diminishing, i.e. any step sizes satisfying \alpha_k \geq 0,\qquad \lim_{k\to\infty} \alpha_k = 0,\qquad \sum_{k=1}^\infty \alpha_k = \infty.
*Nonsummable diminishing step lengths, i.e. \alpha_k = \gamma_k/\lVert g^{(k)} \rVert_2, where \gamma_k \geq 0,\qquad \lim_{k\to\infty} \gamma_k = 0,\qquad \sum_{k=1}^\infty \gamma_k = \infty.
For all five rules, the step-sizes are determined "off-line", before the method is iterated; the step-sizes do not depend on preceding iterations. This "off-line" property of subgradient methods differs from the "on-line" step-size rules used for descent methods for differentiable functions: many methods for minimizing differentiable functions satisfy Wolfe's sufficient conditions for convergence, where step-sizes typically depend on the current point and the current search direction. An extensive discussion of step-size rules for subgradient methods, including incremental versions, is given in the books by Bertsekas and by Bertsekas, Nedic, and Ozdaglar.
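
For illustration, the five rules can be written as small Python step-size functions; the particular schedules (1/k, 1/\sqrt{k}) and constants below are example choices that satisfy the stated conditions, not prescribed ones. The two norm-dependent rules need the current subgradient, so with the sketch above one would compute \alpha_k inside the loop rather than passing a precomputed list.

```python
import numpy as np

# Illustrative implementations of the five off-line step-size rules.
# Each takes the iteration counter k (starting at 1) and the current
# subgradient g; the constants alpha, gamma, a are example choices.

def constant_step_size(k, g, alpha=0.01):
    return alpha

def constant_step_length(k, g, gamma=0.01):
    # makes ||x^(k+1) - x^(k)||_2 equal to gamma
    return gamma / np.linalg.norm(g)

def square_summable_not_summable(k, g, a=1.0):
    # alpha_k = a / k:  sum alpha_k^2 < infinity, sum alpha_k = infinity
    return a / k

def nonsummable_diminishing(k, g, a=1.0):
    # alpha_k = a / sqrt(k):  alpha_k -> 0, sum alpha_k = infinity
    return a / np.sqrt(k)

def nonsummable_diminishing_step_length(k, g, a=1.0):
    # alpha_k = gamma_k / ||g||_2 with gamma_k = a / sqrt(k)
    return (a / np.sqrt(k)) / np.linalg.norm(g)
```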


Convergence results

For constant step-length and scaled subgradients having Euclidean norm equal to one, the subgradient method converges to an arbitrarily close approximation to the minimum value, that is
:\lim_{k\to\infty} f_{\rm best}^{(k)} - f^* < \epsilon
by a result of Shor. These classical subgradient methods have poor performance and are no longer recommended for general use. However, they are still used widely in specialized applications because they are simple and they can be easily adapted to take advantage of the special structure of the problem at hand.
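
As a quick numerical illustration of this behaviour (reusing the \ell_1 example from the sketch above; the data and iteration count are assumptions), the constant step-length rule drives f_{\rm best}^{(k)} toward the minimum value, with a residual gap that shrinks as \gamma is decreased:

```python
import numpy as np

gamma = 0.01                        # constant step length
c = np.array([1.0, -2.0, 3.0])
f = lambda x: np.abs(x - c).sum()   # f* = 0, attained at x = c
subgrad = lambda x: np.sign(x - c)

x, f_best = np.zeros(3), f(np.zeros(3))
for k in range(5000):
    g = subgrad(x)
    norm = np.linalg.norm(g)
    if norm == 0:                   # x already minimizes f
        break
    x = x - (gamma / norm) * g      # step of Euclidean length gamma
    f_best = min(f_best, f(x))

print(f_best)                       # small, and smaller still for smaller gamma
```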


Subgradient-projection and bundle methods

During the 1970s, Claude Lemaréchal and Phil Wolfe proposed "bundle methods" of descent for problems of convex minimization. The meaning of the term "bundle methods" has changed significantly since that time. Modern versions and full convergence analysis were provided by Kiwiel. Contemporary bundle methods often use "level control" rules for choosing step-sizes, developing techniques from the "subgradient-projection" method of Boris T. Polyak (1969). However, there are problems on which bundle methods offer little advantage over subgradient-projection methods.


Constrained optimization


Projected subgradient

One extension of the subgradient method is the projected subgradient method, which solves the constrained optimization problem
:minimize f(x)
subject to
:x\in\mathcal{C}
where \mathcal{C} is a convex set. The projected subgradient method uses the iteration
:x^{(k+1)} = P \left(x^{(k)} - \alpha_k g^{(k)} \right)
where P is projection on \mathcal{C} and g^{(k)} is any subgradient of f at x^{(k)}.
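
A minimal Python sketch of this iteration, taking \mathcal{C} to be a box so that the projection P is coordinate-wise clipping; the box, the problem data, and the function names are illustrative assumptions.

```python
import numpy as np

def projected_subgradient(f, subgrad, project, x0, step_sizes):
    """x^(k+1) = P(x^(k) - alpha_k * g^(k)), tracking the best feasible value."""
    x = project(np.asarray(x0, dtype=float))
    x_best, f_best = x.copy(), f(x)
    for alpha in step_sizes:
        x = project(x - alpha * subgrad(x))   # subgradient step, then project onto C
        if f(x) < f_best:
            x_best, f_best = x.copy(), f(x)
    return x_best, f_best

# Example: minimize ||x - c||_1 over the box C = [0, 1]^3.
c = np.array([1.5, -2.0, 0.25])
f = lambda x: np.abs(x - c).sum()
subgrad = lambda x: np.sign(x - c)
project = lambda x: np.clip(x, 0.0, 1.0)      # Euclidean projection onto the box
x_best, f_best = projected_subgradient(f, subgrad, project, np.zeros(3), [0.05] * 400)
print(x_best, f_best)                          # approaches x = (1, 0, 0.25), f = 2.5
```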


General constraints

The subgradient method can be extended to solve the inequality constrained problem
:minimize f_0(x)
subject to
:f_i(x) \leq 0,\quad i = 1,\dots,m
where f_i are convex. The algorithm takes the same form as the unconstrained case
:x^{(k+1)} = x^{(k)} - \alpha_k g^{(k)}
where \alpha_k>0 is a step size, and g^{(k)} is a subgradient of the objective or one of the constraint functions at x^{(k)}. Take
:g^{(k)} = \begin{cases} \partial f_0(x) & \text{if } f_i(x) \leq 0 \text{ for all } i = 1,\dots,m \\ \partial f_j(x) & \text{for some } j \text{ such that } f_j(x) > 0 \end{cases}
where \partial f denotes the subdifferential of f. If the current point is feasible, the algorithm uses an objective subgradient; if the current point is infeasible, the algorithm chooses a subgradient of any violated constraint.
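
A sketch of this feasibility-based choice of subgradient on a small example problem (minimizing a linear objective over the Euclidean unit ball, with a nonsummable diminishing step size); the problem data and step rule are illustrative assumptions.

```python
import numpy as np

# minimize f0(x) = c^T x   subject to   f1(x) = ||x||_2^2 - 1 <= 0
c = np.array([1.0, 2.0])
f0 = lambda x: c @ x
f1 = lambda x: x @ x - 1.0

x = np.array([2.0, 2.0])              # infeasible starting point
x_best, f_best = None, np.inf
for k in range(1, 5001):
    if f1(x) <= 0:                    # feasible: use an objective subgradient
        g = c
        if f0(x) < f_best:            # record the best feasible point so far
            x_best, f_best = x.copy(), f0(x)
    else:                             # infeasible: use a subgradient of the violated constraint
        g = 2.0 * x                   # gradient of ||x||_2^2 - 1
    x = x - (1.0 / k) * g             # nonsummable diminishing step size

print(x_best, f_best)                 # approaches x* = -c/||c||_2, f* = -||c||_2 ≈ -2.236
```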


External links


*EE364A and EE364B, Stanford's convex optimization course sequence.