Broyden–Fletcher–Goldfarb–Shanno algorithm
In numerical optimization, the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm is an iterative method for solving unconstrained nonlinear optimization problems. Like the related Davidon–Fletcher–Powell method, BFGS determines the descent direction by preconditioning the gradient with curvature information. It does so by gradually improving an approximation to the Hessian matrix of the loss function, obtained only from gradient evaluations (or approximate gradient evaluations) via a generalized secant method.

Since the updates of the BFGS curvature matrix do not require matrix inversion, its computational complexity is only $\mathcal{O}(n^2)$, compared to $\mathcal{O}(n^3)$ in Newton's method. Also in common use is L-BFGS, a limited-memory version of BFGS that is particularly suited to problems with very large numbers of variables (e.g., >1000). The BFGS-B variant handles simple box constraints. The BFGS matrix also admits a compact representation, which makes it better suited for large constrained problems.
The algorithm is named after Charles George Broyden, Roger Fletcher, Donald Goldfarb and David Shanno.
Rationale
The optimization problem is to minimize $f(\mathbf{x})$, where $\mathbf{x}$ is a vector in $\mathbb{R}^n$ and $f$ is a differentiable scalar function. There are no constraints on the values that $\mathbf{x}$ can take.

The algorithm begins at an initial estimate $\mathbf{x}_0$ for the optimal value and proceeds iteratively to get a better estimate at each stage.
The search direction $\mathbf{p}_k$ at stage $k$ is given by the solution of the analogue of the Newton equation:

: $B_k \mathbf{p}_k = -\nabla f(\mathbf{x}_k),$

where $B_k$ is an approximation to the Hessian matrix at $\mathbf{x}_k$, which is updated iteratively at each stage, and $\nabla f(\mathbf{x}_k)$ is the gradient of the function evaluated at $\mathbf{x}_k$. A line search in the direction $\mathbf{p}_k$ is then used to find the next point $\mathbf{x}_{k+1}$ by minimizing $f(\mathbf{x}_k + \gamma \mathbf{p}_k)$ over the scalar $\gamma > 0$.
The quasi-Newton condition imposed on the update of $B_{k+1}$ is

: $B_{k+1} (\mathbf{x}_{k+1} - \mathbf{x}_k) = \nabla f(\mathbf{x}_{k+1}) - \nabla f(\mathbf{x}_k).$

Let $\mathbf{y}_k = \nabla f(\mathbf{x}_{k+1}) - \nabla f(\mathbf{x}_k)$ and $\mathbf{s}_k = \mathbf{x}_{k+1} - \mathbf{x}_k$; then $B_{k+1}$ satisfies

: $B_{k+1} \mathbf{s}_k = \mathbf{y}_k,$

which is the secant equation.
The curvature condition $\mathbf{s}_k^\top \mathbf{y}_k > 0$ should be satisfied for $B_{k+1}$ to be positive definite, which can be verified by pre-multiplying the secant equation with $\mathbf{s}_k^\top$. If the function is not strongly convex, then the condition has to be enforced explicitly, e.g. by finding a point $\mathbf{x}_{k+1}$ satisfying the Wolfe conditions, which entail the curvature condition, using line search.
Instead of requiring the full Hessian matrix at the point $\mathbf{x}_{k+1}$ to be computed as $B_{k+1}$, the approximate Hessian at stage $k$ is updated by the addition of two matrices:

: $B_{k+1} = B_k + U_k + V_k.$

Both $U_k$ and $V_k$ are symmetric rank-one matrices, but their sum is a rank-two update matrix. The BFGS and DFP updating matrices both differ from their predecessor by a rank-two matrix. Another, simpler rank-one method is the symmetric rank-one method, which does not guarantee positive definiteness. In order to maintain the symmetry and positive definiteness of $B_{k+1}$, the update form can be chosen as $B_{k+1} = B_k + \alpha \mathbf{u} \mathbf{u}^\top + \beta \mathbf{v} \mathbf{v}^\top$. Imposing the secant condition $B_{k+1} \mathbf{s}_k = \mathbf{y}_k$ and choosing $\mathbf{u} = \mathbf{y}_k$ and $\mathbf{v} = B_k \mathbf{s}_k$, we obtain:

: $\alpha = \frac{1}{\mathbf{y}_k^\top \mathbf{s}_k},$

: $\beta = -\frac{1}{\mathbf{s}_k^\top B_k \mathbf{s}_k}.$

Finally, we substitute $\alpha$ and $\beta$ into $B_{k+1} = B_k + \alpha \mathbf{u} \mathbf{u}^\top + \beta \mathbf{v} \mathbf{v}^\top$ and obtain the update equation of $B_{k+1}$:

: $B_{k+1} = B_k + \frac{\mathbf{y}_k \mathbf{y}_k^\top}{\mathbf{y}_k^\top \mathbf{s}_k} - \frac{B_k \mathbf{s}_k \mathbf{s}_k^\top B_k}{\mathbf{s}_k^\top B_k \mathbf{s}_k}.$
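As a quick sanity check on this derivation, the short NumPy sketch below (arbitrary illustrative values of my own, not a specific problem from the text) applies the rank-two update to a symmetric positive-definite $B_k$ and verifies that the result satisfies the secant equation $B_{k+1}\mathbf{s}_k = \mathbf{y}_k$.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Arbitrary symmetric positive-definite B_k and a curvature pair (s_k, y_k)
A = rng.standard_normal((n, n))
B = A @ A.T + n * np.eye(n)        # B_k, symmetric positive definite
s = rng.standard_normal(n)          # s_k = x_{k+1} - x_k
y = B @ s + 0.1 * s                 # some y_k with s^T y > 0 (curvature condition)
assert s @ y > 0

# Rank-two BFGS update: B_{k+1} = B_k + y y^T / (y^T s) - B s s^T B / (s^T B s)
Bs = B @ s
B_next = B + np.outer(y, y) / (y @ s) - np.outer(Bs, Bs) / (s @ Bs)

# The secant equation B_{k+1} s_k = y_k holds by construction of the update
print(np.allclose(B_next @ s, y))   # True
</syntaxhighlight>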
Algorithm
Consider the following unconstrained optimization problem:

: $\underset{\mathbf{x} \in \mathbb{R}^n}{\text{minimize}} \quad f(\mathbf{x}).$

From an initial guess $\mathbf{x}_0$ and an initial guess of the Hessian matrix $B_0$, the following steps are repeated as $\mathbf{x}_k$ converges to the solution:
# Obtain a direction $\mathbf{p}_k$ by solving $B_k \mathbf{p}_k = -\nabla f(\mathbf{x}_k)$.
# Perform a one-dimensional optimization (line search) to find an acceptable stepsize $\alpha_k$ in the direction found in the first step. If an exact line search is performed, then $\alpha_k = \arg\min_{\alpha} f(\mathbf{x}_k + \alpha \mathbf{p}_k)$. In practice, an inexact line search usually suffices, with an acceptable $\alpha_k$ satisfying the Wolfe conditions.
# Set $\mathbf{s}_k = \alpha_k \mathbf{p}_k$ and update $\mathbf{x}_{k+1} = \mathbf{x}_k + \mathbf{s}_k$.
# $\mathbf{y}_k = \nabla f(\mathbf{x}_{k+1}) - \nabla f(\mathbf{x}_k)$.
# $B_{k+1} = B_k + \frac{\mathbf{y}_k \mathbf{y}_k^\top}{\mathbf{y}_k^\top \mathbf{s}_k} - \frac{B_k \mathbf{s}_k \mathbf{s}_k^\top B_k}{\mathbf{s}_k^\top B_k \mathbf{s}_k}$.
Convergence can be determined by observing the norm of the gradient; given some $\varepsilon > 0$, one may stop the algorithm when $\|\nabla f(\mathbf{x}_k)\| \leq \varepsilon$.
If $B_0$ is initialized with $B_0 = I$, the first step is equivalent to a gradient descent step, but further steps are progressively refined by $B_k$, the approximation to the Hessian.
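The steps above translate almost line for line into code. The sketch below is my own illustration of the direct $B_k$ form, using a simple backtracking (Armijo) line search in place of a full Wolfe line search and an assumed quadratic test function; it starts from $B_0 = I$, so the first step is a gradient-descent step.

<syntaxhighlight lang="python">
import numpy as np

def bfgs_direct(f, grad, x0, tol=1e-8, max_iter=100):
    """Direct-form BFGS: maintain B_k (Hessian approximation) and solve B_k p_k = -grad f(x_k)."""
    x = np.asarray(x0, dtype=float)
    B = np.eye(x.size)                      # B_0 = I: first step equals a gradient-descent step
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:        # stop when the gradient norm is small enough
            break
        p = np.linalg.solve(B, -g)          # step 1: B_k p_k = -grad f(x_k)
        alpha = 1.0                         # step 2: crude backtracking line search (not full Wolfe)
        while f(x + alpha * p) > f(x) + 1e-4 * alpha * (g @ p):
            alpha *= 0.5
        s = alpha * p                       # step 3: s_k = alpha_k p_k, x_{k+1} = x_k + s_k
        x = x + s
        g_new = grad(x)
        y = g_new - g                       # step 4: y_k = grad f(x_{k+1}) - grad f(x_k)
        if s @ y > 1e-12:                   # step 5: rank-two update (skipped if curvature is tiny)
            Bs = B @ s
            B = B + np.outer(y, y) / (y @ s) - np.outer(Bs, Bs) / (s @ Bs)
        g = g_new
    return x

# Example: minimize a convex quadratic f(x) = 1/2 x^T A x - b^T x (illustrative values)
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
x_min = bfgs_direct(lambda x: 0.5 * x @ A @ x - b @ x, lambda x: A @ x - b, np.zeros(2))
print(x_min, np.linalg.solve(A, b))         # the two should agree
</syntaxhighlight>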
The first step of the algorithm is carried out using the inverse of the matrix $B_k$, which can be obtained efficiently by applying the Sherman–Morrison formula to step 5 of the algorithm, giving

: $B_{k+1}^{-1} = \left(I - \frac{\mathbf{s}_k \mathbf{y}_k^\top}{\mathbf{y}_k^\top \mathbf{s}_k}\right) B_k^{-1} \left(I - \frac{\mathbf{y}_k \mathbf{s}_k^\top}{\mathbf{y}_k^\top \mathbf{s}_k}\right) + \frac{\mathbf{s}_k \mathbf{s}_k^\top}{\mathbf{y}_k^\top \mathbf{s}_k}.$

This can be computed efficiently without temporary matrices, recognizing that $B_k^{-1}$ is symmetric, and that $\mathbf{y}_k^\top B_k^{-1} \mathbf{y}_k$ and $\mathbf{s}_k^\top \mathbf{y}_k$ are scalars, using an expansion such as

: $B_{k+1}^{-1} = B_k^{-1} + \frac{(\mathbf{s}_k^\top \mathbf{y}_k + \mathbf{y}_k^\top B_k^{-1} \mathbf{y}_k)(\mathbf{s}_k \mathbf{s}_k^\top)}{(\mathbf{s}_k^\top \mathbf{y}_k)^2} - \frac{B_k^{-1} \mathbf{y}_k \mathbf{s}_k^\top + \mathbf{s}_k \mathbf{y}_k^\top B_k^{-1}}{\mathbf{s}_k^\top \mathbf{y}_k}.$
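As a quick numerical check (with arbitrary illustrative data of my own), the snippet below evaluates both the product form and the expanded form above and confirms that they coincide.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n))
H = A @ A.T + n * np.eye(n)          # H = B_k^{-1}, symmetric positive definite (illustrative)
s = rng.standard_normal(n)
y = np.linalg.solve(H, s) + 0.1 * s  # some y_k with s^T y > 0
rho = 1.0 / (s @ y)

I = np.eye(n)
# Product (Sherman–Morrison) form of the inverse update
H_prod = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) + rho * np.outer(s, s)
# Expanded form: only outer products and scalars, no extra n-by-n temporaries
Hy = H @ y
H_expanded = (H
              + (s @ y + y @ Hy) * np.outer(s, s) / (s @ y) ** 2
              - (np.outer(Hy, s) + np.outer(s, Hy)) / (s @ y))
print(np.allclose(H_prod, H_expanded))   # True
</syntaxhighlight>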
Therefore, in order to avoid any matrix inversion, the inverse of the Hessian can be approximated instead of the Hessian itself: $H_k \doteq B_k^{-1}$.

From an initial guess $\mathbf{x}_0$ and an approximate inverse Hessian matrix $H_0$, the following steps are repeated as $\mathbf{x}_k$ converges to the solution:
# Obtain a direction $\mathbf{p}_k$ by computing $\mathbf{p}_k = -H_k \nabla f(\mathbf{x}_k)$.
# Perform a one-dimensional optimization (line search) to find an acceptable stepsize $\alpha_k$ in the direction found in the first step. If an exact line search is performed, then $\alpha_k = \arg\min_{\alpha} f(\mathbf{x}_k + \alpha \mathbf{p}_k)$. In practice, an inexact line search usually suffices, with an acceptable $\alpha_k$ satisfying the Wolfe conditions.
# Set $\mathbf{s}_k = \alpha_k \mathbf{p}_k$ and update $\mathbf{x}_{k+1} = \mathbf{x}_k + \mathbf{s}_k$.
# $\mathbf{y}_k = \nabla f(\mathbf{x}_{k+1}) - \nabla f(\mathbf{x}_k)$.
# $H_{k+1} = H_k + \frac{(\mathbf{s}_k^\top \mathbf{y}_k + \mathbf{y}_k^\top H_k \mathbf{y}_k)(\mathbf{s}_k \mathbf{s}_k^\top)}{(\mathbf{s}_k^\top \mathbf{y}_k)^2} - \frac{H_k \mathbf{y}_k \mathbf{s}_k^\top + \mathbf{s}_k \mathbf{y}_k^\top H_k}{\mathbf{s}_k^\top \mathbf{y}_k}$.
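Putting the inverse-Hessian steps together, the following is a minimal, self-contained sketch rather than a production implementation: it is my own illustration, with a simple backtracking line search standing in for a full Wolfe line search and the Rosenbrock function as an assumed test problem.

<syntaxhighlight lang="python">
import numpy as np

def bfgs_inverse(f, grad, x0, tol=1e-8, max_iter=500):
    """Inverse-form BFGS: maintain H_k, an approximation to the inverse Hessian; no matrix inversions."""
    x = np.asarray(x0, dtype=float)
    H = np.eye(x.size)                     # H_0 = I
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        p = -H @ g                         # step 1: p_k = -H_k grad f(x_k)
        alpha, c1 = 1.0, 1e-4              # step 2: backtracking (Armijo) line search
        while f(x + alpha * p) > f(x) + c1 * alpha * (g @ p):
            alpha *= 0.5
        s = alpha * p                      # step 3
        x = x + s
        g_new = grad(x)
        y = g_new - g                      # step 4
        sy = s @ y
        if sy > 1e-12:                     # step 5: expanded inverse update (skipped if curvature is tiny)
            Hy = H @ y
            H = (H + (sy + y @ Hy) * np.outer(s, s) / sy**2
                   - (np.outer(Hy, s) + np.outer(s, Hy)) / sy)
        g = g_new
    return x

# Rosenbrock test function: minimum at (1, 1)
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                           200 * (x[1] - x[0]**2)])
print(bfgs_inverse(f, grad, np.array([-1.2, 1.0])))   # approximately [1. 1.]
</syntaxhighlight>

Skipping the update whenever $\mathbf{s}_k^\top \mathbf{y}_k$ is not safely positive is one common safeguard; a Wolfe line search, as described above, would enforce the curvature condition instead.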
In statistical estimation problems (such as maximum likelihood or Bayesian inference), credible intervals or confidence intervals for the solution can be estimated from the inverse of the final Hessian matrix. However, these quantities are technically defined by the true Hessian matrix, and the BFGS approximation may not converge to the true Hessian matrix.
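For example, in a maximum-likelihood setting one sometimes reads approximate standard errors off the inverse-Hessian estimate returned by a BFGS-based minimizer. The sketch below is an illustration of that pattern using SciPy (the model and data are my own arbitrary example), with the caveat from the text that the BFGS matrix is only an approximation to the true Hessian.

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import minimize

# Illustrative data: estimate the mean and log-std of a normal sample by maximum likelihood
rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.5, size=500)

def neg_log_likelihood(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    return 0.5 * np.sum(((data - mu) / sigma) ** 2) + data.size * log_sigma

res = minimize(neg_log_likelihood, x0=np.array([0.0, 0.0]), method="BFGS")

# res.hess_inv is the final BFGS approximation to the inverse Hessian of the
# negative log-likelihood; its diagonal gives approximate variances of the estimates.
std_err = np.sqrt(np.diag(res.hess_inv))
print(res.x, std_err)   # approximation only: the BFGS matrix need not equal the true Hessian
</syntaxhighlight>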
Further developments
The BFGS update formula heavily relies on the curvature $\mathbf{s}_k^\top \mathbf{y}_k$ being strictly positive and bounded away from zero. This condition is satisfied when we perform a line search with Wolfe conditions on a convex target. However, some real-life applications (like sequential quadratic programming methods) routinely produce negative or nearly-zero curvatures. This can occur when optimizing a nonconvex target or when employing a trust-region approach instead of a line search. It is also possible to produce spurious values due to noise in the target. In such cases, one of the so-called damped BFGS updates can be used, which modify $\mathbf{s}_k$ and/or $\mathbf{y}_k$ in order to obtain a more robust update.
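One widely used variant of this idea is Powell's damping, which replaces $\mathbf{y}_k$ by a convex combination of $\mathbf{y}_k$ and $B_k \mathbf{s}_k$ whenever the curvature is too small. The sketch below illustrates that scheme; the threshold 0.2 (and the corresponding 0.8 factor) is the conventional choice, stated here as an assumption rather than something specified in the text above.

<syntaxhighlight lang="python">
import numpy as np

def damped_bfgs_update(B, s, y, threshold=0.2):
    """Powell-damped BFGS update of the Hessian approximation B (illustrative sketch)."""
    Bs = B @ s
    sBs = s @ Bs
    # Replace y by r = theta*y + (1 - theta)*B s when the curvature s^T y is too small,
    # so that s^T r >= threshold * s^T B s > 0 and the updated B stays positive definite.
    if s @ y >= threshold * sBs:
        theta = 1.0
    else:
        theta = (1.0 - threshold) * sBs / (sBs - s @ y)
    r = theta * y + (1.0 - theta) * Bs
    return B + np.outer(r, r) / (s @ r) - np.outer(Bs, Bs) / sBs
</syntaxhighlight>

Using $\mathbf{r}_k$ in place of $\mathbf{y}_k$ keeps the effective curvature bounded away from zero, so the update remains well defined even when the raw curvature is negative.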
Notable implementations
Notable open source implementations are:
* ALGLIB implements BFGS and its limited-memory version in C++ and C#.
* GNU Octave uses a form of BFGS in its fsolve function, with trust region extensions.
* The GNU Scientific Library (GSL) implements BFGS as gsl_multimin_fdfminimizer_vector_bfgs2.
* In R, the BFGS algorithm (and the L-BFGS-B version that allows box constraints) is implemented as an option of the base function optim().
* In SciPy, the scipy.optimize.fmin_bfgs function implements BFGS. It is also possible to run BFGS using any of the L-BFGS algorithms by setting the parameter L to a very large number. BFGS is also one of the default methods used when running scipy.optimize.minimize with no constraints (a brief usage sketch follows the lists below).
* In Julia, the Optim.jl package implements BFGS and L-BFGS as a solver option to the optimize() function (among other options).
Notable proprietary implementations include:
* The large-scale nonlinear optimization software Artelys Knitro implements, among others, both BFGS and L-BFGS algorithms.
* In the MATLAB Optimization Toolbox, the fminunc function uses BFGS with cubic line search when the problem size is set to "medium scale."
* Mathematica includes BFGS.
* LS-DYNA also uses BFGS to solve implicit problems.
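As a usage illustration for one of the open-source implementations listed above (SciPy), the objective here is just an arbitrary example, the Rosenbrock function:

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import minimize

# Rosenbrock function via SciPy's BFGS implementation; jac is optional
# (if omitted, the gradient is approximated by finite differences).
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
res = minimize(f, x0=np.array([-1.2, 1.0]), method="BFGS")
print(res.x)          # approximately [1. 1.]
print(res.hess_inv)   # final approximation to the inverse Hessian
</syntaxhighlight>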
See also
* BHHH algorithm
* Davidon–Fletcher–Powell formula
* Gradient descent
* L-BFGS
* Levenberg–Marquardt algorithm
* Nelder–Mead method
* Pattern search (optimization)
* Quasi-Newton methods
* Symmetric rank-one
* Compact quasi-Newton representation