Broyden–Fletcher–Goldfarb–Shanno algorithm

In numerical optimization, the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm is an iterative method for solving unconstrained nonlinear optimization problems. Like the related Davidon–Fletcher–Powell method, BFGS determines the descent direction by preconditioning the gradient with curvature information. It does so by gradually improving an approximation to the Hessian matrix of the loss function, obtained only from gradient evaluations (or approximate gradient evaluations) via a generalized secant method. Since the updates of the BFGS curvature matrix do not require matrix inversion, its computational complexity is only \mathcal{O}(n^2), compared to \mathcal{O}(n^3) in Newton's method. Also in common use is L-BFGS, a limited-memory version of BFGS that is particularly suited to problems with very large numbers of variables (e.g., >1000). The L-BFGS-B variant handles simple box constraints. The algorithm is named after Charles George Broyden, Roger Fletcher, Donald Goldfarb and David Shanno.


Rationale

The optimization problem is to minimize f(\mathbf{x}), where \mathbf{x} is a vector in \mathbb{R}^n, and f is a differentiable scalar function. There are no constraints on the values that \mathbf{x} can take.

The algorithm begins at an initial estimate \mathbf{x}_0 for the optimal value and proceeds iteratively to get a better estimate at each stage.

The search direction \mathbf{p}_k at stage k is given by the solution of the analogue of the Newton equation:

:B_k \mathbf{p}_k = -\nabla f(\mathbf{x}_k),

where B_k is an approximation to the Hessian matrix, which is updated iteratively at each stage, and \nabla f(\mathbf{x}_k) is the gradient of the function evaluated at \mathbf{x}_k. A line search in the direction \mathbf{p}_k is then used to find the next point \mathbf{x}_{k+1} by minimizing f(\mathbf{x}_k + \gamma \mathbf{p}_k) over the scalar \gamma > 0.

The quasi-Newton condition imposed on the update of B_k is

:B_{k+1} (\mathbf{x}_{k+1} - \mathbf{x}_k) = \nabla f(\mathbf{x}_{k+1}) - \nabla f(\mathbf{x}_k).

Let \mathbf{y}_k = \nabla f(\mathbf{x}_{k+1}) - \nabla f(\mathbf{x}_k) and \mathbf{s}_k = \mathbf{x}_{k+1} - \mathbf{x}_k; then B_{k+1} satisfies

:B_{k+1} \mathbf{s}_k = \mathbf{y}_k,

which is the secant equation. The curvature condition \mathbf{s}_k^\top \mathbf{y}_k > 0 should be satisfied for B_{k+1} to be positive definite, which can be verified by pre-multiplying the secant equation with \mathbf{s}_k^\top. If the function is not strongly convex, then the condition has to be enforced explicitly, e.g. by finding a point \mathbf{x}_{k+1} satisfying the Wolfe conditions, which entail the curvature condition, using line search.

Instead of requiring the full Hessian matrix at the point \mathbf{x}_{k+1} to be computed as B_{k+1}, the approximate Hessian at stage k is updated by the addition of two matrices:

:B_{k+1} = B_k + U_k + V_k.

Both U_k and V_k are symmetric rank-one matrices, but their sum is a rank-two update matrix. Both the BFGS and DFP updating matrices differ from their predecessor by a rank-two matrix. Another, simpler rank-one method is the symmetric rank-one (SR1) method, which does not guarantee positive definiteness.

In order to maintain the symmetry and positive definiteness of B_{k+1}, the update form can be chosen as

:B_{k+1} = B_k + \alpha \mathbf{u} \mathbf{u}^\top + \beta \mathbf{v} \mathbf{v}^\top.

Imposing the secant condition B_{k+1} \mathbf{s}_k = \mathbf{y}_k and choosing \mathbf{u} = \mathbf{y}_k and \mathbf{v} = B_k \mathbf{s}_k, matching the coefficients of \mathbf{y}_k and B_k \mathbf{s}_k on both sides gives

:\alpha = \frac{1}{\mathbf{y}_k^\top \mathbf{s}_k}, \qquad \beta = -\frac{1}{\mathbf{s}_k^\top B_k \mathbf{s}_k}.

Finally, we substitute \alpha and \beta into B_{k+1} = B_k + \alpha \mathbf{u} \mathbf{u}^\top + \beta \mathbf{v} \mathbf{v}^\top and get the update equation of B_{k+1}:

:B_{k+1} = B_k + \frac{\mathbf{y}_k \mathbf{y}_k^\top}{\mathbf{y}_k^\top \mathbf{s}_k} - \frac{B_k \mathbf{s}_k \mathbf{s}_k^\top B_k}{\mathbf{s}_k^\top B_k \mathbf{s}_k}.
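
For concreteness, here is a minimal NumPy sketch of this rank-two update (the function name bfgs_update and the random test data are illustrative, not taken from any library); it also checks numerically that the updated matrix satisfies the secant equation B_{k+1} \mathbf{s}_k = \mathbf{y}_k:

import numpy as np

def bfgs_update(B, s, y):
    # Rank-two BFGS update of the Hessian approximation B, given the step
    # s = x_{k+1} - x_k and gradient difference y = grad f(x_{k+1}) - grad f(x_k).
    Bs = B @ s
    return B + np.outer(y, y) / (y @ s) - np.outer(Bs, Bs) / (s @ Bs)

# Numerical check of the secant equation B_{k+1} s = y for random data
# chosen to satisfy the curvature condition y^T s > 0.
rng = np.random.default_rng(0)
B = np.eye(3)
s = rng.standard_normal(3)
y = 2.0 * s + 0.1 * rng.standard_normal(3)
assert y @ s > 0
print(np.allclose(bfgs_update(B, s, y) @ s, y))  # True

That the check passes follows directly from the derivation: applying the update to \mathbf{s}_k cancels the B_k \mathbf{s}_k term and leaves \mathbf{y}_k.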


Algorithm

From an initial guess \mathbf{x}_0 and an initial approximate Hessian matrix B_0, the following steps are repeated as \mathbf{x}_k converges to the solution:

# Obtain a direction \mathbf{p}_k by solving B_k \mathbf{p}_k = -\nabla f(\mathbf{x}_k).
# Perform a one-dimensional optimization (line search) to find an acceptable stepsize \alpha_k in the direction found in the first step. If an exact line search is performed, then \alpha_k = \arg\min_\alpha f(\mathbf{x}_k + \alpha \mathbf{p}_k). In practice, an inexact line search usually suffices, with an acceptable \alpha_k satisfying the Wolfe conditions.
# Set \mathbf{s}_k = \alpha_k \mathbf{p}_k and update \mathbf{x}_{k+1} = \mathbf{x}_k + \mathbf{s}_k.
# \mathbf{y}_k = \nabla f(\mathbf{x}_{k+1}) - \nabla f(\mathbf{x}_k).
# B_{k+1} = B_k + \frac{\mathbf{y}_k \mathbf{y}_k^\top}{\mathbf{y}_k^\top \mathbf{s}_k} - \frac{B_k \mathbf{s}_k \mathbf{s}_k^\top B_k}{\mathbf{s}_k^\top B_k \mathbf{s}_k}.

f(\mathbf{x}) denotes the objective function to be minimized. Convergence can be checked by observing the norm of the gradient, \|\nabla f(\mathbf{x}_k)\|. If B_0 is initialized with B_0 = I, the first step will be equivalent to a gradient descent, but further steps are more and more refined by B_k, the approximation to the Hessian.

The first step of the algorithm is carried out using the inverse of the matrix B_k, which can be obtained efficiently by applying the Sherman–Morrison formula to the fifth step of the algorithm, giving

:B_{k+1}^{-1} = \left(I - \frac{\mathbf{s}_k \mathbf{y}_k^\top}{\mathbf{y}_k^\top \mathbf{s}_k}\right) B_k^{-1} \left(I - \frac{\mathbf{y}_k \mathbf{s}_k^\top}{\mathbf{y}_k^\top \mathbf{s}_k}\right) + \frac{\mathbf{s}_k \mathbf{s}_k^\top}{\mathbf{y}_k^\top \mathbf{s}_k}.

This can be computed efficiently without temporary matrices, recognizing that B_k^{-1} is symmetric, and that \mathbf{y}_k^\top B_k^{-1} \mathbf{y}_k and \mathbf{s}_k^\top \mathbf{y}_k are scalars, using an expansion such as

:B_{k+1}^{-1} = B_k^{-1} + \frac{(\mathbf{s}_k^\top \mathbf{y}_k + \mathbf{y}_k^\top B_k^{-1} \mathbf{y}_k)(\mathbf{s}_k \mathbf{s}_k^\top)}{(\mathbf{s}_k^\top \mathbf{y}_k)^2} - \frac{B_k^{-1} \mathbf{y}_k \mathbf{s}_k^\top + \mathbf{s}_k \mathbf{y}_k^\top B_k^{-1}}{\mathbf{s}_k^\top \mathbf{y}_k}.

Therefore, in order to avoid any matrix inversion, the inverse of the Hessian can be approximated instead of the Hessian itself: H_k \overset{\operatorname{def}}{=} B_k^{-1}. From an initial guess \mathbf{x}_0 and an approximate inverse Hessian matrix H_0, the following steps are repeated as \mathbf{x}_k converges to the solution:

# Obtain a direction \mathbf{p}_k by computing \mathbf{p}_k = -H_k \nabla f(\mathbf{x}_k).
# Perform a one-dimensional optimization (line search) to find an acceptable stepsize \alpha_k in the direction found in the first step. If an exact line search is performed, then \alpha_k = \arg\min_\alpha f(\mathbf{x}_k + \alpha \mathbf{p}_k). In practice, an inexact line search usually suffices, with an acceptable \alpha_k satisfying the Wolfe conditions.
# Set \mathbf{s}_k = \alpha_k \mathbf{p}_k and update \mathbf{x}_{k+1} = \mathbf{x}_k + \mathbf{s}_k.
# \mathbf{y}_k = \nabla f(\mathbf{x}_{k+1}) - \nabla f(\mathbf{x}_k).
# H_{k+1} = H_k + \frac{(\mathbf{s}_k^\top \mathbf{y}_k + \mathbf{y}_k^\top H_k \mathbf{y}_k)(\mathbf{s}_k \mathbf{s}_k^\top)}{(\mathbf{s}_k^\top \mathbf{y}_k)^2} - \frac{H_k \mathbf{y}_k \mathbf{s}_k^\top + \mathbf{s}_k \mathbf{y}_k^\top H_k}{\mathbf{s}_k^\top \mathbf{y}_k}.
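
As a concrete illustration of the inverse-Hessian form, the following Python/NumPy sketch implements the five steps above (a minimal sketch, not a production implementation; the function name bfgs, the tolerance, the fallback stepsize, and the Rosenbrock test problem are assumptions made here for illustration). It uses SciPy's line_search, which seeks a stepsize satisfying the strong Wolfe conditions, and skips the update when the curvature condition fails:

import numpy as np
from scipy.optimize import line_search  # inexact (Wolfe) line search

def bfgs(f, grad, x0, tol=1e-8, max_iter=200):
    # Minimize f by BFGS, maintaining H as the approximation to the
    # inverse Hessian.
    x = np.asarray(x0, dtype=float)
    n = x.size
    H = np.eye(n)                      # H_0 = I: first step is gradient descent
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:    # convergence test on ||grad f(x_k)||
            break
        p = -H @ g                     # step 1: p_k = -H_k grad f(x_k)
        alpha = line_search(f, grad, x, p, gfk=g)[0]  # step 2
        if alpha is None:              # line search failed; take a small step
            alpha = 1e-3
        s = alpha * p                  # step 3
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g                  # step 4
        sy = s @ y
        if sy > 1e-12:                 # curvature condition y^T s > 0
            rho = 1.0 / sy
            I = np.eye(n)
            # step 5, in the Sherman-Morrison product form, which is
            # algebraically equivalent to the expansion given above
            H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
                + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x

# Example: minimize the Rosenbrock function from a standard starting point.
f = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0] ** 2),
                           200 * (x[1] - x[0] ** 2)])
print(bfgs(f, grad, np.array([-1.2, 1.0])))  # approximately [1., 1.]

Skipping the update when \mathbf{y}_k^\top \mathbf{s}_k \le 0 is one common safeguard for preserving positive definiteness; damped updates are another.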
In statistical estimation problems (such as maximum likelihood or Bayesian inference), credible intervals or confidence intervals for the solution can be estimated from the inverse of the final Hessian matrix. However, these quantities are technically defined by the true Hessian matrix, and the BFGS approximation may not converge to the true Hessian matrix.
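
As a sketch of this use (the model, data, and parameterization below are invented for illustration; scipy.optimize.minimize with method="BFGS" does expose its final inverse-Hessian approximation as hess_inv), one might reuse that matrix as an approximate covariance, subject to the caveat above:

import numpy as np
from scipy.optimize import minimize

# Hypothetical maximum-likelihood example: fit the mean and log-scale of a
# normal distribution by minimizing the negative log-likelihood with BFGS.
rng = np.random.default_rng(1)
data = rng.normal(loc=2.0, scale=0.5, size=500)

def nll(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    # Negative log-likelihood up to an additive constant.
    return 0.5 * np.sum(((data - mu) / sigma) ** 2) + data.size * log_sigma

res = minimize(nll, x0=np.array([0.0, 0.0]), method="BFGS")
# res.hess_inv is the final BFGS approximation H_k, not the exact inverse
# Hessian; treat the resulting standard errors as approximate.
approx_stderr = np.sqrt(np.diag(res.hess_inv))
print(res.x, approx_stderr)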


Notable implementations

Notable open source implementations are:

* ALGLIB implements BFGS and its limited-memory version in C++ and C#.
* GNU Octave uses a form of BFGS in its fsolve function, with trust region extensions.
* The GSL implements BFGS as gsl_multimin_fdfminimizer_vector_bfgs2.
* In R, the BFGS algorithm (and the L-BFGS-B version that allows box constraints) is implemented as an option of the base function optim().
* In SciPy, the scipy.optimize.fmin_bfgs function implements BFGS. It is also possible to run BFGS using any of the L-BFGS algorithms by setting the parameter L to a very large number. (A short usage sketch is given after this list.)
* In Julia, the Optim.jl package implements BFGS and L-BFGS as solver options for the optimize() function (among other options).

Notable proprietary implementations include:

* The large scale nonlinear optimization software Artelys Knitro implements, among others, both BFGS and L-BFGS algorithms.
* In the MATLAB Optimization Toolbox, the fminunc function uses BFGS with cubic line search when the problem size is set to "medium scale."
* Mathematica includes BFGS.
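
For example, the SciPy interface listed above can also be reached through scipy.optimize.minimize (a short sketch; the Rosenbrock test function, starting point, and bounds are chosen arbitrarily here):

import numpy as np
from scipy.optimize import minimize

# Rosenbrock function, a standard unconstrained test problem.
def rosen(x):
    return (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2

x0 = np.array([-1.2, 1.0])

# Plain BFGS; the gradient is approximated by finite differences
# because no jac argument is supplied.
res = minimize(rosen, x0, method="BFGS")

# L-BFGS-B, the limited-memory variant that also accepts box constraints.
res_b = minimize(rosen, x0, method="L-BFGS-B",
                 bounds=[(0.0, 2.0), (0.0, 2.0)])

print(res.x)    # approximately [1., 1.]
print(res_b.x)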


See also

* BHHH algorithm
* Davidon–Fletcher–Powell formula
* Gradient descent
* L-BFGS
* Levenberg–Marquardt algorithm
* Nelder–Mead method
* Pattern search (optimization)
* Quasi-Newton methods
* Symmetric rank-one

