In numerical analysis, Broyden's method is a quasi-Newton method for finding roots in k variables. It was originally described by C. G. Broyden in 1965.
Newton's method for solving f(x) = 0 uses the Jacobian matrix, J, at every iteration. However, computing this Jacobian is a difficult and expensive operation. The idea behind Broyden's method is to compute the whole Jacobian only at the first iteration and to do rank-one updates at the other iterations.
In 1979 Gay proved that when Broyden's method is applied to a linear system of size n × n, it terminates in 2n steps, although like all quasi-Newton methods, it may not converge for nonlinear systems.
Description of the method
Solving single-variable equation
In the secant method, we replace the first derivative f′ at x_n with the finite-difference approximation:
: f'(x_n) \simeq \frac{f(x_n) - f(x_{n-1})}{x_n - x_{n-1}},
and proceed similarly to Newton's method:
: x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)},
where n is the iteration index.
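The single-variable iteration above can be sketched as follows; the test function, starting points, and tolerances are illustrative assumptions, not part of the original description.

```python
def secant(f, x_prev, x_curr, tol=1e-12, max_iter=50):
    """Find a root of f with the secant method: the derivative
    f'(x_n) is replaced by the finite-difference approximation
    (f(x_n) - f(x_{n-1})) / (x_n - x_{n-1})."""
    f_prev = f(x_prev)
    for _ in range(max_iter):
        f_curr = f(x_curr)
        if abs(f_curr) < tol:
            return x_curr
        # Finite-difference (secant) estimate of the derivative
        slope = (f_curr - f_prev) / (x_curr - x_prev)
        x_prev, f_prev = x_curr, f_curr
        # Newton-like step using the estimated derivative
        x_curr = x_curr - f_curr / slope
    return x_curr

# Example: root of x^2 - 2 starting from the bracket [1, 2]
root = secant(lambda x: x * x - 2.0, 1.0, 2.0)
```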
Solving a system of nonlinear equations
Consider a system of k nonlinear equations in k unknowns
: f(x) = 0,
where f is a vector-valued function of the vector x:
: x = (x_1, x_2, \ldots, x_k),
: f(x) = \big(f_1(x), f_2(x), \ldots, f_k(x)\big).
For such problems, Broyden gives a generalization of the one-dimensional Newton's method, replacing the derivative with an approximation J_n to the Jacobian. The Jacobian approximation is determined iteratively, based on the secant equation in the finite-difference approximation:
: J_n (x_n - x_{n-1}) \simeq f(x_n) - f(x_{n-1}),
where n is the iteration index. For clarity, let us define:
: f_n = f(x_n),
: \Delta x_n = x_n - x_{n-1},
: \Delta f_n = f_n - f_{n-1},
so the above may be rewritten as
: J_n \, \Delta x_n \simeq \Delta f_n.
The above equation is underdetermined when k is greater than one. Broyden suggests using the current estimate of the Jacobian matrix J_{n-1} and improving upon it by taking the solution to the secant equation that is a minimal modification to J_{n-1}:
: J_n = J_{n-1} + \frac{\Delta f_n - J_{n-1} \Delta x_n}{\|\Delta x_n\|^2} \, \Delta x_n^{\mathsf T}.
This minimizes the following Frobenius norm:
: \|J_n - J_{n-1}\|_{\rm F};
that is, among all matrices satisfying the secant equation, J_n is the one closest to the previous estimate J_{n-1}.
We may then proceed in the Newton direction:
: x_{n+1} = x_n - J_n^{-1} f(x_n).
Broyden also suggested using the Sherman–Morrison formula to update directly the inverse of the Jacobian matrix:
: J_n^{-1} = J_{n-1}^{-1} + \frac{\Delta x_n - J_{n-1}^{-1} \Delta f_n}{\Delta x_n^{\mathsf T} J_{n-1}^{-1} \Delta f_n} \, \Delta x_n^{\mathsf T} J_{n-1}^{-1}.
This first method is commonly known as the "good Broyden's method".
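A minimal NumPy sketch of the "good" Broyden iteration with the Sherman–Morrison inverse update follows. The test system, starting point, tolerances, and the choice to build the initial inverse from one true Jacobian evaluation are assumptions for illustration, not prescribed by the method.

```python
import numpy as np

def broyden_good(f, x0, J0_inv, tol=1e-10, max_iter=100):
    """'Good' Broyden's method: maintain the inverse Jacobian
    directly via the Sherman-Morrison update, so no linear
    solves are needed after the initial inverse is formed."""
    x = np.asarray(x0, dtype=float)
    J_inv = np.asarray(J0_inv, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        if np.linalg.norm(fx) < tol:
            break
        dx = -J_inv @ fx          # Newton-like step: Δx_n
        x_new = x + dx
        fx_new = f(x_new)
        df = fx_new - fx          # Δf_n
        # Sherman-Morrison update:
        # J_inv += (Δx - J_inv Δf)(Δx^T J_inv) / (Δx^T J_inv Δf)
        J_inv_df = J_inv @ df
        J_inv = J_inv + np.outer(dx - J_inv_df, dx @ J_inv) / (dx @ J_inv_df)
        x, fx = x_new, fx_new
    return x

# Illustrative system: x^2 + y^2 = 2 and x = y, with a root at (1, 1)
def f(v):
    return np.array([v[0] ** 2 + v[1] ** 2 - 2.0, v[0] - v[1]])

x0 = np.array([2.0, 0.5])
# One true Jacobian evaluation at x0, inverted once
J0 = np.array([[2 * x0[0], 2 * x0[1]], [1.0, -1.0]])
root = broyden_good(f, x0, np.linalg.inv(J0))
```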
A similar technique can be derived by using a slightly different modification to J_{n-1}. This yields a second method, the so-called "bad Broyden's method":
: J_n^{-1} = J_{n-1}^{-1} + \frac{\Delta x_n - J_{n-1}^{-1} \Delta f_n}{\|\Delta f_n\|^2} \, \Delta f_n^{\mathsf T}.
This minimizes a different Frobenius norm:
: \|J_n^{-1} - J_{n-1}^{-1}\|_{\rm F}.
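As a sketch (with `J_inv`, `dx`, `df` as NumPy arrays, names assumed for illustration), the "bad" rank-one update is cheaper than the "good" one because the correction needs no extra product with the old inverse on the right:

```python
import numpy as np

def bad_broyden_update(J_inv, dx, df):
    """One 'bad' Broyden rank-one update of the inverse Jacobian:
    J_inv += (Δx - J_inv Δf) Δf^T / ||Δf||^2."""
    return J_inv + np.outer(dx - J_inv @ df, df) / (df @ df)

# Sanity check: the updated inverse satisfies the secant equation
# exactly, i.e. J_inv_new @ Δf == Δx.
rng = np.random.default_rng(0)
J_inv = np.eye(3)
dx, df = rng.standard_normal(3), rng.standard_normal(3)
J_inv_new = bad_broyden_update(J_inv, dx, df)
```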
Many other quasi-Newton schemes have been suggested in optimization, where one seeks a maximum or minimum by finding the root of the first derivative (gradient in multiple dimensions). The Jacobian of the gradient is called the Hessian and is symmetric, adding further constraints to its update.
Other members of the Broyden class
Broyden has defined not only two methods, but a whole class of methods. Other members of this class have been added by other authors.
* The Davidon–Fletcher–Powell update is the only member of this class that was published before the two methods defined by Broyden.
* Schubert's or sparse Broyden algorithm – a modification for sparse Jacobian matrices.
* Klement (2014) – a variant that uses fewer iterations when solving many related systems of equations.
See also
* Secant method
* Newton's method
* Quasi-Newton method
* Newton's method in optimization
* Davidon–Fletcher–Powell formula
* Broyden–Fletcher–Goldfarb–Shanno (BFGS) method
External links
Simple basic explanation: The story of the blind archer