In
numerical optimization, the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm is an
iterative method for solving unconstrained
nonlinear optimization problems. Like the related
Davidon–Fletcher–Powell method, BFGS determines the
descent direction by
preconditioning the
gradient with curvature information. It does so by gradually improving an approximation to the
Hessian matrix of the
loss function, obtained only from gradient evaluations (or approximate gradient evaluations) via a generalized secant method.
Since the updates of the BFGS curvature matrix do not require
matrix inversion, its
computational complexity is only $\mathcal{O}(n^{2})$, compared to $\mathcal{O}(n^{3})$ in Newton's method. Also in common use is L-BFGS, which is a limited-memory version of BFGS that is particularly suited to problems with very large numbers of variables (e.g., >1000). The BFGS-B variant handles simple box constraints.
The algorithm is named after
Charles George Broyden,
Roger Fletcher,
Donald Goldfarb and David Shanno.
Rationale
The optimization problem is to minimize $f(\mathbf{x})$, where $\mathbf{x}$ is a vector in $\mathbb{R}^n$, and $f$ is a differentiable scalar function. There are no constraints on the values that $\mathbf{x}$ can take.
The algorithm begins at an initial estimate $\mathbf{x}_0$ for the optimal value and proceeds iteratively to get a better estimate at each stage.
The search direction $\mathbf{p}_k$ at stage $k$ is given by the solution of the analogue of the Newton equation:
:$B_k \mathbf{p}_k = -\nabla f(\mathbf{x}_k),$
where $B_k$ is an approximation to the Hessian matrix, which is updated iteratively at each stage, and $\nabla f(\mathbf{x}_k)$ is the gradient of the function evaluated at $\mathbf{x}_k$. A line search in the direction $\mathbf{p}_k$ is then used to find the next point $\mathbf{x}_{k+1}$ by minimizing $f(\mathbf{x}_k + \gamma \mathbf{p}_k)$ over the scalar $\gamma > 0$.
The quasi-Newton condition imposed on the update of $B_{k+1}$ is
:$B_{k+1} (\mathbf{x}_{k+1} - \mathbf{x}_k) = \nabla f(\mathbf{x}_{k+1}) - \nabla f(\mathbf{x}_k).$
Let $\mathbf{y}_k = \nabla f(\mathbf{x}_{k+1}) - \nabla f(\mathbf{x}_k)$ and $\mathbf{s}_k = \mathbf{x}_{k+1} - \mathbf{x}_k$, then $B_{k+1}$ satisfies
:$B_{k+1} \mathbf{s}_k = \mathbf{y}_k,$
which is the secant equation.
The curvature condition $\mathbf{s}_k^\top \mathbf{y}_k > 0$ should be satisfied for $B_{k+1}$ to be positive definite, which can be verified by pre-multiplying the secant equation with $\mathbf{s}_k^\top$. If the function is not strongly convex, then the condition has to be enforced explicitly, e.g. by finding a point $\mathbf{x}_{k+1}$ satisfying the Wolfe conditions, which entail the curvature condition, using line search.
Instead of requiring the full Hessian matrix at the point $\mathbf{x}_{k+1}$ to be computed as $B_{k+1}$, the approximate Hessian at stage $k$ is updated by the addition of two matrices:
:$B_{k+1} = B_k + U_k + V_k.$
Both $U_k$ and $V_k$ are symmetric rank-one matrices, but their sum is a rank-two update matrix. The BFGS and DFP updating matrices both differ from their predecessor by a rank-two matrix. Another, simpler rank-one method is the symmetric rank-one method, which does not guarantee positive definiteness. In order to maintain the symmetry and positive definiteness of $B_{k+1}$, the update form can be chosen as $B_{k+1} = B_k + \alpha \mathbf{u} \mathbf{u}^\top + \beta \mathbf{v} \mathbf{v}^\top$. Imposing the secant condition, $B_{k+1} \mathbf{s}_k = \mathbf{y}_k$. Choosing $\mathbf{u} = \mathbf{y}_k$ and $\mathbf{v} = B_k \mathbf{s}_k$, we can obtain:
:$\alpha = \frac{1}{\mathbf{y}_k^\top \mathbf{s}_k},$
:$\beta = -\frac{1}{\mathbf{s}_k^\top B_k \mathbf{s}_k}.$
Finally, we substitute $\alpha$ and $\beta$ into $B_{k+1} = B_k + \alpha \mathbf{u} \mathbf{u}^\top + \beta \mathbf{v} \mathbf{v}^\top$ and get the update equation of $B_{k+1}$:
:$B_{k+1} = B_k + \frac{\mathbf{y}_k \mathbf{y}_k^\top}{\mathbf{y}_k^\top \mathbf{s}_k} - \frac{B_k \mathbf{s}_k \mathbf{s}_k^\top B_k}{\mathbf{s}_k^\top B_k \mathbf{s}_k}.$
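As a numerical sanity check of this derivation, the update formula can be written in a few lines of NumPy and verified against the secant equation. The following sketch is illustrative only; the function name bfgs_update_B and the test data are chosen here for the example.

```python
import numpy as np

def bfgs_update_B(B, s, y):
    """Rank-two BFGS update of the Hessian approximation B_k,
    given the step s_k and the gradient change y_k."""
    Bs = B @ s
    return (B
            + np.outer(y, y) / (y @ s)       # + y_k y_k^T / (y_k^T s_k)
            - np.outer(Bs, Bs) / (s @ Bs))   # - B_k s_k s_k^T B_k / (s_k^T B_k s_k)

# Check the secant equation B_{k+1} s_k = y_k on example data
rng = np.random.default_rng(0)
B = np.eye(3)
s = rng.standard_normal(3)
y = np.diag([1.0, 2.0, 3.0]) @ s    # guarantees the curvature condition s^T y > 0
B_new = bfgs_update_B(B, s, y)
assert np.allclose(B_new @ s, y)
```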
Algorithm
From an initial guess $\mathbf{x}_0$ and an approximate Hessian matrix $B_0$ the following steps are repeated as $\mathbf{x}_k$ converges to the solution:
# Obtain a direction $\mathbf{p}_k$ by solving $B_k \mathbf{p}_k = -\nabla f(\mathbf{x}_k)$.
# Perform a one-dimensional optimization (line search) to find an acceptable stepsize $\alpha_k$ in the direction found in the first step. If an exact line search is performed, then $\alpha_k = \arg\min_{\alpha} f(\mathbf{x}_k + \alpha \mathbf{p}_k)$. In practice, an inexact line search usually suffices, with an acceptable $\alpha_k$ satisfying Wolfe conditions.
# Set $\mathbf{s}_k = \alpha_k \mathbf{p}_k$ and update $\mathbf{x}_{k+1} = \mathbf{x}_k + \mathbf{s}_k$.
# $\mathbf{y}_k = \nabla f(\mathbf{x}_{k+1}) - \nabla f(\mathbf{x}_k)$.
# $B_{k+1} = B_k + \frac{\mathbf{y}_k \mathbf{y}_k^\top}{\mathbf{y}_k^\top \mathbf{s}_k} - \frac{B_k \mathbf{s}_k \mathbf{s}_k^\top B_k}{\mathbf{s}_k^\top B_k \mathbf{s}_k}$.
$f(\mathbf{x})$ denotes the objective function to be minimized. Convergence can be checked by observing the norm of the gradient, $\|\nabla f(\mathbf{x}_k)\|$. If $B_0$ is initialized with $B_0 = I$, the first step will be equivalent to a gradient descent, but further steps are more and more refined by $B_k$, the approximation to the Hessian.
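The remark about the first step can be checked directly: with $B_0 = I$, solving $B_0 \mathbf{p}_0 = -\nabla f(\mathbf{x}_0)$ simply returns the negative gradient, i.e. a gradient-descent direction. A small sketch (names chosen for the example):

```python
import numpy as np

grad = np.array([0.5, -2.0, 1.0])   # stand-in for grad f(x_0)
B0 = np.eye(3)                      # initial Hessian approximation B_0 = I
p0 = np.linalg.solve(B0, -grad)     # step 1 of the algorithm
assert np.allclose(p0, -grad)       # identical to the gradient-descent direction
```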
The first step of the algorithm is carried out using the inverse of the matrix $B_k$, which can be obtained efficiently by applying the Sherman–Morrison formula to step 5 of the algorithm, giving
:$B_{k+1}^{-1} = \left(I - \frac{\mathbf{s}_k \mathbf{y}_k^\top}{\mathbf{y}_k^\top \mathbf{s}_k}\right) B_k^{-1} \left(I - \frac{\mathbf{y}_k \mathbf{s}_k^\top}{\mathbf{y}_k^\top \mathbf{s}_k}\right) + \frac{\mathbf{s}_k \mathbf{s}_k^\top}{\mathbf{y}_k^\top \mathbf{s}_k}.$
This can be computed efficiently without temporary matrices, recognizing that $B_k^{-1}$ is symmetric, and that $\mathbf{y}_k^\top B_k^{-1} \mathbf{y}_k$ and $\mathbf{s}_k^\top \mathbf{y}_k$ are scalars, using an expansion such as
:$B_{k+1}^{-1} = B_k^{-1} + \frac{(\mathbf{s}_k^\top \mathbf{y}_k + \mathbf{y}_k^\top B_k^{-1} \mathbf{y}_k)(\mathbf{s}_k \mathbf{s}_k^\top)}{(\mathbf{s}_k^\top \mathbf{y}_k)^2} - \frac{B_k^{-1} \mathbf{y}_k \mathbf{s}_k^\top + \mathbf{s}_k \mathbf{y}_k^\top B_k^{-1}}{\mathbf{s}_k^\top \mathbf{y}_k}.$
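The expanded form maps directly onto a handful of NumPy operations (matrix–vector products, outer products, and scalars). The function below is a sketch, with the name bfgs_update_H chosen for this example; it assumes the symmetric inverse approximation is stored explicitly.

```python
import numpy as np

def bfgs_update_H(H, s, y):
    """Update the inverse-Hessian approximation H_k = B_k^{-1}
    using the expanded formula above; H is assumed symmetric."""
    sy = s @ y      # scalar s_k^T y_k
    Hy = H @ y      # vector B_k^{-1} y_k
    yHy = y @ Hy    # scalar y_k^T B_k^{-1} y_k
    return (H
            + (sy + yHy) * np.outer(s, s) / sy**2
            - (np.outer(Hy, s) + np.outer(s, Hy)) / sy)
```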
Therefore, in order to avoid any matrix inversion, the inverse of the Hessian can be approximated instead of the Hessian itself: $H_k \overset{\operatorname{def}}{=} B_k^{-1}$.
From an initial guess $\mathbf{x}_0$ and an approximate inverted Hessian matrix $H_0$ the following steps are repeated as $\mathbf{x}_k$ converges to the solution:
# Obtain a direction $\mathbf{p}_k$ by solving $\mathbf{p}_k = -H_k \nabla f(\mathbf{x}_k)$.
# Perform a one-dimensional optimization (line search) to find an acceptable stepsize $\alpha_k$ in the direction found in the first step. If an exact line search is performed, then $\alpha_k = \arg\min_{\alpha} f(\mathbf{x}_k + \alpha \mathbf{p}_k)$. In practice, an inexact line search usually suffices, with an acceptable $\alpha_k$ satisfying Wolfe conditions.
# Set $\mathbf{s}_k = \alpha_k \mathbf{p}_k$ and update $\mathbf{x}_{k+1} = \mathbf{x}_k + \mathbf{s}_k$.
# $\mathbf{y}_k = \nabla f(\mathbf{x}_{k+1}) - \nabla f(\mathbf{x}_k)$.
# $H_{k+1} = H_k + \frac{(\mathbf{s}_k^\top \mathbf{y}_k + \mathbf{y}_k^\top H_k \mathbf{y}_k)(\mathbf{s}_k \mathbf{s}_k^\top)}{(\mathbf{s}_k^\top \mathbf{y}_k)^2} - \frac{H_k \mathbf{y}_k \mathbf{s}_k^\top + \mathbf{s}_k \mathbf{y}_k^\top H_k}{\mathbf{s}_k^\top \mathbf{y}_k}$.
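Putting these steps together, a minimal self-contained sketch of this inverse-Hessian variant in Python/NumPy might look as follows. It is not a production implementation: it substitutes a simple backtracking (Armijo) line search for a full Wolfe line search, and it skips the update whenever the curvature condition $\mathbf{s}_k^\top \mathbf{y}_k > 0$ fails; all names are chosen for the example.

```python
import numpy as np

def bfgs(f, grad, x0, tol=1e-6, max_iter=200):
    """Minimal BFGS sketch storing the inverse-Hessian approximation H."""
    x = np.asarray(x0, dtype=float)
    H = np.eye(x.size)                  # H_0 = I: first step is gradient descent
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:     # convergence test on the gradient norm
            break
        p = -H @ g                      # step 1: p_k = -H_k grad f(x_k)
        # step 2: backtracking (Armijo) line search, a simplification of the
        # Wolfe conditions used in the text
        alpha, c, rho = 1.0, 1e-4, 0.5
        fx = f(x)
        while f(x + alpha * p) > fx + c * alpha * (g @ p):
            alpha *= rho
        s = alpha * p                   # step 3: s_k and the new iterate
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g                   # step 4: y_k
        sy = s @ y
        if sy > 1e-10:                  # curvature condition; skip update otherwise
            Hy = H @ y
            H = (H + (sy + y @ Hy) * np.outer(s, s) / sy**2
                   - (np.outer(Hy, s) + np.outer(s, Hy)) / sy)   # step 5
        x, g = x_new, g_new
    return x

# Example: minimize the Rosenbrock function
rosen = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
rosen_grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                                 200 * (x[1] - x[0]**2)])
print(bfgs(rosen, rosen_grad, [-1.2, 1.0]))   # approaches [1., 1.]
```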
In statistical estimation problems (such as maximum likelihood or Bayesian inference), credible intervals or confidence intervals for the solution can be estimated from the inverse of the final Hessian matrix. However, these quantities are technically defined by the true Hessian matrix, and the BFGS approximation may not converge to the true Hessian matrix.
Notable implementations
Notable open source implementations are:
*
ALGLIB implements BFGS and its limited-memory version in C++ and C#
*
GNU Octave uses a form of BFGS in its
fsolve
function, with
trust region extensions.
* The
GSL implements BFGS as gsl_multimin_fdfminimizer_vector_bfgs2.
* In
R, the BFGS algorithm (and the L-BFGS-B version that allows box constraints) is implemented as an option of the base function optim().
* In SciPy, the scipy.optimize.fmin_bfgs function implements BFGS (see the usage sketch after this list). It is also possible to run BFGS using any of the L-BFGS algorithms by setting the parameter L to a very large number.
* In Julia, the Optim.jl package implements BFGS and L-BFGS as a solver option to the optimize() function (among other options).
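As a usage sketch for the SciPy entry above, current SciPy versions also expose BFGS through the generic scipy.optimize.minimize interface:

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

# Minimize the Rosenbrock test function with BFGS, supplying the analytic gradient
result = minimize(rosen, x0=np.array([-1.2, 1.0]), method="BFGS", jac=rosen_der)
print(result.x)         # approximately [1., 1.]
print(result.hess_inv)  # final approximation to the inverse Hessian
```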
Notable proprietary implementations include:
* The large scale nonlinear optimization software
Artelys Knitro implements, among others, both BFGS and L-BFGS algorithms.
* In the MATLAB
Optimization Toolbox, the fminunc function uses BFGS with cubic line search when the problem size is set to "medium scale."
*
Mathematica includes BFGS.
See also
*
BHHH algorithm
*
Davidon–Fletcher–Powell formula
*
Gradient descent
*
L-BFGS
*
Levenberg–Marquardt algorithm
*
Nelder–Mead method
*
Pattern search (optimization)
*
Quasi-Newton methods
*
Symmetric rank-one