
The Gauss–Newton algorithm is used to solve
non-linear least squares problems, which is equivalent to minimizing a sum of squared function values. It is an extension of
Newton's method for finding a
minimum of a non-linear
function. Since a sum of squares must be nonnegative, the algorithm can be viewed as using Newton's method to iteratively approximate
zeroes of the components of the sum, and thus minimizing the sum. In this sense, the algorithm is also an effective method for
solving overdetermined systems of equations. It has the advantage that second derivatives, which can be challenging to compute, are not required.
Non-linear least squares problems arise, for instance, in
non-linear regression, where parameters in a model are sought such that the model is in good agreement with available observations.
The method is named after the mathematicians
Carl Friedrich Gauss and Isaac Newton
, and first appeared in Gauss's 1809 work ''Theoria motus corporum coelestium in sectionibus conicis solem ambientum''.
Description
Given $m$ functions $\mathbf{r} = (r_1, \ldots, r_m)$ (often called residuals) of $n$ variables $\boldsymbol\beta = (\beta_1, \ldots, \beta_n)$, with $m \ge n$, the Gauss–Newton algorithm iteratively finds the value of $\boldsymbol\beta$ that minimizes the sum of squares
$$ S(\boldsymbol\beta) = \sum_{i=1}^m r_i(\boldsymbol\beta)^2 . $$
[Björck (1996)]
Starting with an initial guess $\boldsymbol\beta^{(0)}$ for the minimum, the method proceeds by the iterations
$$ \boldsymbol\beta^{(s+1)} = \boldsymbol\beta^{(s)} - \left(\mathbf{J_r}^\mathsf{T} \mathbf{J_r}\right)^{-1} \mathbf{J_r}^\mathsf{T} \mathbf{r}\left(\boldsymbol\beta^{(s)}\right), $$
where, if r and ''β'' are column vectors, the entries of the Jacobian matrix are
$$ \left(\mathbf{J_r}\right)_{ij} = \frac{\partial r_i\left(\boldsymbol\beta^{(s)}\right)}{\partial \beta_j}, $$
and the symbol $^\mathsf{T}$ denotes the matrix transpose.
At each iteration, the update $\Delta = \boldsymbol\beta^{(s+1)} - \boldsymbol\beta^{(s)}$ can be found by rearranging the previous equation in the following two steps:
* $\Delta = -\left(\mathbf{J_r}^\mathsf{T} \mathbf{J_r}\right)^{-1} \mathbf{J_r}^\mathsf{T} \mathbf{r}\left(\boldsymbol\beta^{(s)}\right)$
* $\mathbf{J_r}^\mathsf{T} \mathbf{J_r} \Delta = -\mathbf{J_r}^\mathsf{T} \mathbf{r}\left(\boldsymbol\beta^{(s)}\right)$
With the substitutions $A = \mathbf{J_r}^\mathsf{T} \mathbf{J_r}$, $\mathbf{b} = -\mathbf{J_r}^\mathsf{T} \mathbf{r}\left(\boldsymbol\beta^{(s)}\right)$, and $\mathbf{x} = \Delta$, this turns into the conventional matrix equation of the form $A\mathbf{x} = \mathbf{b}$, which can then be solved by a variety of methods (see Notes).
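As a concrete illustration, a single iteration of this scheme can be sketched in Julia as follows (the argument names `r` and `Jr` are illustrative, not part of the notation above):

# One Gauss–Newton step, assuming `r(β)` returns the residual vector and
# `Jr(β)` its Jacobian (illustrative names).
function gauss_newton_step(r, Jr, β)
    J = Jr(β)
    A = J' * J         # left-hand side of the normal equations
    b = -J' * r(β)     # right-hand side
    Δ = A \ b          # solve A Δ = b
    return β + Δ
end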
If $m = n$, the iteration simplifies to
$$ \boldsymbol\beta^{(s+1)} = \boldsymbol\beta^{(s)} - \left(\mathbf{J_r}\right)^{-1} \mathbf{r}\left(\boldsymbol\beta^{(s)}\right), $$
which is a direct generalization of Newton's method in one dimension.
In data fitting, where the goal is to find the parameters $\boldsymbol\beta$ such that a given model function $f(x, \boldsymbol\beta)$ best fits some data points $(x_i, y_i)$, the functions $r_i$ are the residuals:
$$ r_i(\boldsymbol\beta) = y_i - f\left(x_i, \boldsymbol\beta\right). $$
Then, the Gauss–Newton method can be expressed in terms of the Jacobian $\mathbf{J_f} = -\mathbf{J_r}$ of the function $f$ as
$$ \boldsymbol\beta^{(s+1)} = \boldsymbol\beta^{(s)} + \left(\mathbf{J_f}^\mathsf{T} \mathbf{J_f}\right)^{-1} \mathbf{J_f}^\mathsf{T} \mathbf{r}\left(\boldsymbol\beta^{(s)}\right). $$
Note that $\left(\mathbf{J_f}^\mathsf{T} \mathbf{J_f}\right)^{-1} \mathbf{J_f}^\mathsf{T}$ is the left pseudoinverse of $\mathbf{J_f}$.
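In code, this data-fitting form of the update might be sketched as below; `f`, `Jf`, `x`, and `y` are illustrative stand-ins for the model, its Jacobian with respect to the parameters, and the data vectors.

using LinearAlgebra  # for pinv

# One Gauss–Newton step in the data-fitting form, via the left pseudoinverse.
# `f(x, β)` is the model, `Jf(β)` the m×n Jacobian of β ↦ (f(x₁,β), …, f(xₘ,β)).
function data_fit_step(f, Jf, x, y, β)
    r = y .- f.(x, Ref(β))        # residuals rᵢ = yᵢ − f(xᵢ, β)
    return β + pinv(Jf(β)) * r    # β⁽ˢ⁺¹⁾ = β⁽ˢ⁾ + J_f⁺ r
end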
Notes
The assumption $m \ge n$ in the algorithm statement is necessary, as otherwise the matrix $\mathbf{J_r}^\mathsf{T} \mathbf{J_r}$ is not invertible and the normal equations cannot be solved (at least uniquely).
The Gauss–Newton algorithm can be derived by linearly approximating the vector of functions $r_i$. Using Taylor's theorem, we can write at every iteration:
$$ \mathbf{r}(\boldsymbol\beta) \approx \mathbf{r}\left(\boldsymbol\beta^{(s)}\right) + \mathbf{J_r}\left(\boldsymbol\beta^{(s)}\right)\Delta, $$
with $\Delta = \boldsymbol\beta - \boldsymbol\beta^{(s)}$. The task of finding $\Delta$ minimizing the sum of squares of the right-hand side, i.e.,
$$ \min_{\Delta} \left\|\mathbf{r}\left(\boldsymbol\beta^{(s)}\right) + \mathbf{J_r}\left(\boldsymbol\beta^{(s)}\right)\Delta\right\|_2^2, $$
is a linear least-squares problem, which can be solved explicitly, yielding the normal equations in the algorithm.
The normal equations are ''n'' simultaneous linear equations in the unknown increments $\Delta$. They may be solved in one step, using Cholesky decomposition, or, better, the QR factorization of $\mathbf{J_r}$. For large systems, an iterative method, such as the conjugate gradient method, may be more efficient. If there is a linear dependence between columns of $\mathbf{J_r}$, the iterations will fail, as $\mathbf{J_r}^\mathsf{T} \mathbf{J_r}$ becomes singular.
When $\mathbf{r}$ is complex, the conjugate form should be used: $\left(\overline{\mathbf{J_r}}^\mathsf{T} \mathbf{J_r}\right)^{-1} \overline{\mathbf{J_r}}^\mathsf{T}$.
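The direct solution methods mentioned above can be compared in a short sketch; `J` and `r` stand for the current Jacobian and residual vector, and the QR route avoids forming $\mathbf{J_r}^\mathsf{T} \mathbf{J_r}$ explicitly, which is usually better conditioned.

using LinearAlgebra

# Two ways to compute the increment Δ for a given Jacobian J and residual
# vector r (illustrative inputs).
Δ_cholesky(J, r) = -(cholesky(Symmetric(J' * J)) \ (J' * r))   # normal equations
Δ_qr(J, r)       = -(qr(J) \ r)                                # least squares via QR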
Example

In this example, the Gauss–Newton algorithm will be used to fit a model to some data by minimizing the sum of squares of errors between the data and model's predictions.
In a biology experiment studying the relation between substrate concentration and reaction rate in an enzyme-mediated reaction, the data in the following table were obtained.
It is desired to find a curve (model function) of the form
$$ \text{rate} = \frac{V_\text{max} \, [S]}{K_M + [S]} $$
that fits best the data in the least-squares sense, with the parameters $V_\text{max}$ and $K_M$ to be determined.
Denote by $x_i$ and $y_i$ the values of $[S]$ and rate respectively, with $i = 1, \ldots, m$, where $m$ is the number of observations. Let $\beta_1 = V_\text{max}$ and $\beta_2 = K_M$. We will find $\beta_1$ and $\beta_2$ such that the sum of squares of the residuals
$$ r_i = y_i - \frac{\beta_1 x_i}{\beta_2 + x_i}, \qquad i = 1, \ldots, m, $$
is minimized.
The Jacobian $\mathbf{J_r}$ of the vector of residuals $r_i$ with respect to the unknowns $\beta_j$ is an $m \times 2$ matrix with the $i$-th row having the entries
$$ \frac{\partial r_i}{\partial \beta_1} = -\frac{x_i}{\beta_2 + x_i}, \qquad \frac{\partial r_i}{\partial \beta_2} = \frac{\beta_1 x_i}{\left(\beta_2 + x_i\right)^2}. $$
Starting with initial estimates of $\beta_1$ and $\beta_2$, after five iterations of the Gauss–Newton algorithm the optimal values $\hat\beta_1$ and $\hat\beta_2$ are obtained. The sum of squares of residuals decreased from the initial value of 1.445 to 0.00784 after the fifth iteration. The plot in the figure on the right shows the curve determined by the model for the optimal parameters with the observed data.
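A compact Julia sketch of this fit is given below. The substrate and rate vectors are hypothetical, generated from the model itself rather than taken from the experiment, so the iteration simply recovers the generating parameters; with measured data the loop behaves as described above.

# Hypothetical data generated from the model rate = β₁[S]/(β₂ + [S]) with
# β₁ = 0.36 and β₂ = 0.56 (illustrative values, not the experimental table).
S    = [0.1, 0.5, 1.0, 2.0, 4.0]
rate = 0.36 .* S ./ (0.56 .+ S)

r(β) = rate .- β[1] .* S ./ (β[2] .+ S)       # residuals rᵢ = yᵢ − β₁xᵢ/(β₂ + xᵢ)
J(β) = hcat(-S ./ (β[2] .+ S),                # ∂rᵢ/∂β₁
            β[1] .* S ./ (β[2] .+ S) .^ 2)    # ∂rᵢ/∂β₂

β̂ = let β = [0.9, 0.2]                        # a rough initial guess
    for _ in 1:5                              # a few Gauss–Newton iterations
        Jβ = J(β)
        β += -(Jβ' * Jβ) \ (Jβ' * r(β))
    end
    β
end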
Convergence properties
The Gauss–Newton iteration is guaranteed to converge toward a local minimum point $\hat{\boldsymbol\beta}$ under four conditions: the functions $r_1, \ldots, r_m$ are twice continuously differentiable in an open convex set $D \ni \hat{\boldsymbol\beta}$, the Jacobian $\mathbf{J_r}(\hat{\boldsymbol\beta})$ is of full column rank, the initial iterate $\boldsymbol\beta^{(0)}$ is near $\hat{\boldsymbol\beta}$, and the local minimum value $|S(\hat{\boldsymbol\beta})|$ is small. The convergence is quadratic if $|S(\hat{\boldsymbol\beta})| = 0$.
It can be shown that the increment Δ is a descent direction for ''S'', and, if the algorithm converges, then the limit is a stationary point of ''S''. For a large minimum value $|S(\hat{\boldsymbol\beta})|$, however, convergence is not guaranteed, not even local convergence as in Newton's method, or convergence under the usual Wolfe conditions.
The rate of convergence of the Gauss–Newton algorithm can approach quadratic. The algorithm may converge slowly or not at all if the initial guess is far from the minimum or the matrix $\mathbf{J_r}^\mathsf{T} \mathbf{J_r}$ is ill-conditioned. For example, consider the problem with $m = 2$ equations and $n = 1$ variable, given by
$$ r_1(\beta) = \beta + 1, \qquad r_2(\beta) = \lambda \beta^2 + \beta - 1. $$
For $\lambda < 1$, the point $\beta = 0$ is a local optimum. If $\lambda = 0$, then the problem is in fact linear and the method finds the optimum in one iteration. If $|\lambda| < 1$, then the method converges linearly and the error decreases asymptotically with a factor $|\lambda|$ at every iteration. However, if $|\lambda| > 1$, then the method does not even converge locally.
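This behaviour can be checked with a small script like the following; the choice of $\lambda$ and the starting point are free, and nothing here goes beyond the two residuals given above.

# Gauss–Newton on the one-variable example r₁(β) = β + 1, r₂(β) = λβ² + β − 1.
function gn_lambda(λ, β0; iters = 20)
    β = float(β0)
    for _ in 1:iters
        r = [β + 1, λ * β^2 + β - 1]             # residual vector
        J = reshape([1.0, 2 * λ * β + 1], 2, 1)  # 2×1 Jacobian
        Δ = -((J' * J) \ (J' * r))               # Gauss–Newton increment
        β += Δ[1]
    end
    return β
end

# gn_lambda(0.5, 1.0) approaches 0, while for |λ| > 1 (e.g. gn_lambda(2.0, 0.1))
# the iterates fail to settle near 0, as described above.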
Solving overdetermined systems of equations
The Gauss–Newton iteration
$$ \mathbf{x}^{(k+1)} = \mathbf{x}^{(k)} - J\left(\mathbf{x}^{(k)}\right)^+ \mathbf{r}\left(\mathbf{x}^{(k)}\right), \qquad k = 0, 1, \ldots, $$
is an effective method for solving overdetermined systems of equations in the form of $\mathbf{r}(\mathbf{x}) = \mathbf{0}$ with $m > n$, where
$$ J\left(\mathbf{x}\right)^+ = \left(J\left(\mathbf{x}\right)^\mathsf{T} J\left(\mathbf{x}\right)\right)^{-1} J\left(\mathbf{x}\right)^\mathsf{T} $$
is the Moore–Penrose inverse (also known as pseudoinverse) of the Jacobian matrix $J(\mathbf{x})$ of $\mathbf{r}(\mathbf{x})$.
It can be considered an extension of Newton's method and enjoys the same local quadratic convergence toward isolated regular solutions.
If the solution doesn't exist but the initial iterate $\mathbf{x}^{(0)}$ is near a point $\hat{\mathbf{x}}$ at which the sum of squares $\sum_{i=1}^m r_i^2(\mathbf{x})$ reaches a small local minimum, the Gauss–Newton iteration linearly converges to $\hat{\mathbf{x}}$. The point $\hat{\mathbf{x}}$ is often called a least squares solution of the overdetermined system.
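In code, the pseudoinverse form of this iteration is a one-liner; `r` and `Jr` below are illustrative stand-ins for the system and its Jacobian.

using LinearAlgebra  # for pinv

# One step of the iteration x ↦ x − J(x)⁺ r(x) for an overdetermined system r(x) = 0.
overdetermined_step(r, Jr, x) = x - pinv(Jr(x)) * r(x)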
Derivation from Newton's method
In what follows, the Gauss–Newton algorithm will be derived from Newton's method for function optimization via an approximation. As a consequence, the rate of convergence of the Gauss–Newton algorithm can be quadratic under certain regularity conditions. In general (under weaker conditions), the convergence rate is linear.
The recurrence relation for Newton's method for minimizing a function ''S'' of parameters $\boldsymbol\beta$ is
$$ \boldsymbol\beta^{(s+1)} = \boldsymbol\beta^{(s)} - \mathbf{H}^{-1} \mathbf{g}, $$
where g denotes the gradient vector of ''S'', and H denotes the Hessian matrix of ''S''.
Since $S = \sum_{i=1}^m r_i^2$, the gradient is given by
$$ g_j = 2 \sum_{i=1}^m r_i \frac{\partial r_i}{\partial \beta_j}. $$
Elements of the Hessian are calculated by differentiating the gradient elements, $g_j$, with respect to $\beta_k$:
$$ H_{jk} = 2 \sum_{i=1}^m \left(\frac{\partial r_i}{\partial \beta_j} \frac{\partial r_i}{\partial \beta_k} + r_i \frac{\partial^2 r_i}{\partial \beta_j \, \partial \beta_k}\right). $$
The Gauss–Newton method is obtained by ignoring the second-order derivative terms (the second term in this expression). That is, the Hessian is approximated by
$$ H_{jk} \approx 2 \sum_{i=1}^m J_{ij} J_{ik}, $$
where $J_{ij} = \partial r_i / \partial \beta_j$ are entries of the Jacobian $\mathbf{J_r}$. Note that when the exact Hessian is evaluated near an exact fit we have near-zero $r_i$, so the second term becomes near-zero as well, which justifies the approximation. The gradient and the approximate Hessian can be written in matrix notation as
$$ \mathbf{g} = 2 \mathbf{J_r}^\mathsf{T} \mathbf{r}, \qquad \mathbf{H} \approx 2 \mathbf{J_r}^\mathsf{T} \mathbf{J_r}. $$
These expressions are substituted into the recurrence relation above to obtain the operational equations
$$ \boldsymbol\beta^{(s+1)} = \boldsymbol\beta^{(s)} + \Delta, \qquad \Delta = -\left(\mathbf{J_r}^\mathsf{T} \mathbf{J_r}\right)^{-1} \mathbf{J_r}^\mathsf{T} \mathbf{r}. $$
Convergence of the Gauss–Newton method is not guaranteed in all instances. The approximation
$$ \left|r_i \frac{\partial^2 r_i}{\partial \beta_j \, \partial \beta_k}\right| \ll \left|\frac{\partial r_i}{\partial \beta_j} \frac{\partial r_i}{\partial \beta_k}\right| $$
that needs to hold to be able to ignore the second-order derivative terms may be valid in two cases, for which convergence is to be expected:
# The function values $r_i$ are small in magnitude, at least around the minimum.
# The functions are only "mildly" nonlinear, so that $\frac{\partial^2 r_i}{\partial \beta_j \, \partial \beta_k}$ is relatively small in magnitude.
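These conditions can be inspected numerically: the sketch below compares the exact Hessian of ''S'' with the Gauss–Newton approximation $2 \mathbf{J_r}^\mathsf{T} \mathbf{J_r}$ for an illustrative residual function, using the ForwardDiff package (an assumed dependency; any automatic-differentiation tool would do).

import ForwardDiff

# Illustrative residuals of two variables; S is the corresponding sum of squares.
r(β) = [β[1] + 1, 0.1 * β[1]^2 * β[2], β[2] - 2]
S(β) = sum(abs2, r(β))

β        = [0.3, 1.5]
J        = ForwardDiff.jacobian(r, β)
H_exact  = ForwardDiff.hessian(S, β)  # full Hessian, including second-derivative terms
H_approx = 2 * (J' * J)               # Gauss–Newton approximation; the two matrices
                                      # agree when the rᵢ or their curvatures are small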
Improved versions
With the Gauss–Newton method the sum of squares of the residuals ''S'' may not decrease at every iteration. However, since Δ is a descent direction, unless $S\left(\boldsymbol\beta^{(s)}\right)$ is a stationary point, it holds that $S\left(\boldsymbol\beta^{(s)} + \alpha\Delta\right) < S\left(\boldsymbol\beta^{(s)}\right)$ for all sufficiently small $\alpha > 0$. Thus, if divergence occurs, one solution is to employ a fraction $\alpha$ of the increment vector Δ in the updating formula:
$$ \boldsymbol\beta^{(s+1)} = \boldsymbol\beta^{(s)} + \alpha \Delta. $$
In other words, the increment vector is too long, but it still points "downhill", so going just a part of the way will decrease the objective function ''S''. An optimal value for $\alpha$ can be found by using a line search algorithm, that is, the magnitude of $\alpha$ is determined by finding the value that minimizes ''S'', usually using a direct search method in the interval $0 < \alpha < 1$ or a backtracking line search such as Armijo line search. Typically, $\alpha$ should be chosen such that it satisfies the Wolfe conditions or the Goldstein conditions.
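A damped Gauss–Newton step with a simple backtracking (Armijo-type) line search might be sketched as follows; `r` and `Jr` are again illustrative residual and Jacobian functions.

using LinearAlgebra  # for dot

# Damped Gauss–Newton step: shrink the step length α until the sum of squares
# decreases sufficiently.
function damped_gn_step(r, Jr, β; c = 1e-4, shrink = 0.5)
    J = Jr(β)
    Δ = -((J' * J) \ (J' * r(β)))
    S(b) = sum(abs2, r(b))
    g = 2 * (J' * r(β))             # gradient of S at β
    α = 1.0
    while α > 1e-12 && S(β + α * Δ) > S(β) + c * α * dot(g, Δ)
        α *= shrink                 # backtrack until sufficient decrease
    end
    return β + α * Δ
end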
In cases where the direction of the shift vector is such that the optimal fraction α is close to zero, an alternative method for handling divergence is the use of the Levenberg–Marquardt algorithm, a trust region method. The normal equations are modified in such a way that the increment vector is rotated towards the direction of steepest descent,
$$ \left(\mathbf{J}^\mathsf{T} \mathbf{J} + \lambda \mathbf{D}\right) \Delta = -\mathbf{J}^\mathsf{T} \mathbf{r}, $$
where D is a positive diagonal matrix. Note that when D is the identity matrix I and $\lambda \to +\infty$, then $\lambda \Delta = \lambda \left(\mathbf{J}^\mathsf{T} \mathbf{J} + \lambda \mathbf{I}\right)^{-1} \left(-\mathbf{J}^\mathsf{T} \mathbf{r}\right) = \left(\mathbf{I} - \mathbf{J}^\mathsf{T} \mathbf{J}/\lambda + \cdots\right) \left(-\mathbf{J}^\mathsf{T} \mathbf{r}\right) \to -\mathbf{J}^\mathsf{T} \mathbf{r}$; therefore the direction of Δ approaches the direction of the negative gradient $-\mathbf{J}^\mathsf{T} \mathbf{r}$.
The so-called Marquardt parameter $\lambda$ may also be optimized by a line search, but this is inefficient, as the shift vector must be recalculated every time $\lambda$ is changed. A more efficient strategy is this: When divergence occurs, increase the Marquardt parameter until there is a decrease in ''S''. Then retain the value from one iteration to the next, but decrease it if possible until a cut-off value is reached, when the Marquardt parameter can be set to zero; the minimization of ''S'' then becomes a standard Gauss–Newton minimization.
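The modified normal equations are straightforward to implement; the sketch below takes the current Jacobian `J`, residual vector `r`, and damping parameter `λ`, with the identity as the default choice of D (all names illustrative).

using LinearAlgebra  # for I

# Levenberg–Marquardt-style increment: solve (JᵀJ + λD)Δ = −Jᵀr.
# Larger λ pushes Δ towards the direction of the negative gradient.
lm_increment(J, r, λ; D = I(size(J, 2))) = -((J' * J + λ * D) \ (J' * r))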
Large-scale optimization
For large-scale optimization, the Gauss–Newton method is of special interest because it is often (though certainly not always) true that the matrix $\mathbf{J_r}$ is more sparse than the approximate Hessian $\mathbf{J_r}^\mathsf{T} \mathbf{J_r}$. In such cases, the step calculation itself will typically need to be done with an approximate iterative method appropriate for large and sparse problems, such as the conjugate gradient method.
In order to make this kind of approach work, one needs at least an efficient method for computing the product
$$ \mathbf{J_r}^\mathsf{T} \mathbf{J_r} \mathbf{p} $$
for some vector p. With sparse matrix storage, it is in general practical to store the rows of $\mathbf{J_r}$ in a compressed form (e.g., without zero entries), making a direct computation of the above product tricky due to the transposition. However, if one defines $\mathbf{c}_i$ as row ''i'' of the matrix $\mathbf{J_r}$, the following simple relation holds:
$$ \mathbf{J_r}^\mathsf{T} \mathbf{J_r} \mathbf{p} = \sum_i \mathbf{c}_i \left(\mathbf{c}_i \cdot \mathbf{p}\right), $$
so that every row contributes additively and independently to the product. In addition to respecting a practical sparse storage structure, this expression is well suited for parallel computations. Note that every row $\mathbf{c}_i$ is the gradient of the corresponding residual $r_i$; with this in mind, the formula above emphasizes the fact that residuals contribute to the problem independently of each other.
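A direct transcription of this row-wise formula, with the Jacobian supplied as a collection of row vectors, could look like the following (`rows` is an illustrative representation, not a particular sparse storage type).

# Compute JᵀJp from the rows cᵢ of J without ever forming JᵀJ:
# JᵀJp = Σᵢ cᵢ (cᵢ ⋅ p).
function jtj_times_p(rows, p)
    out = zero(p)
    for c in rows
        out .+= c .* (c' * p)   # each row contributes cᵢ (cᵢ ⋅ p)
    end
    return out
end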
Related algorithms
In a quasi-Newton method, such as that due to Davidon, Fletcher and Powell or Broyden–Fletcher–Goldfarb–Shanno (BFGS method), an estimate of the full Hessian $\frac{\partial^2 S}{\partial \beta_j \, \partial \beta_k}$ is built up numerically using first derivatives $\frac{\partial r_i}{\partial \beta_j}$ only, so that after ''n'' refinement cycles the method closely approximates Newton's method in performance. Note that quasi-Newton methods can minimize general real-valued functions, whereas Gauss–Newton, Levenberg–Marquardt, etc. fit only nonlinear least-squares problems.
Another method for solving minimization problems using only first derivatives is gradient descent. However, this method does not take into account the second derivatives even approximately. Consequently, it is highly inefficient for many functions, especially if the parameters have strong interactions.
Example implementations
Julia
The following implementation in Julia provides one method that uses a provided Jacobian and another that computes the Jacobian with automatic differentiation.
"""
gaussnewton(r, J, β₀, maxiter, tol)
Perform Gauss–Newton optimization to minimize the residual function `r` with Jacobian `J` starting from `β₀`. The algorithm terminates when the norm of the step is less than `tol` or after `maxiter` iterations.
"""
function gaussnewton(r, J, β₀, maxiter, tol)
    β = copy(β₀)
    for _ in 1:maxiter
        Jβ = J(β)
        Δ = -(Jβ' * Jβ) \ (Jβ' * r(β))
        β += Δ
        if sqrt(sum(abs2, Δ)) < tol
            break
        end
    end
    return β
end
import AbstractDifferentiation as AD, Zygote
backend = AD.ZygoteBackend() # other backends are available
"""
gaussnewton(r, β₀, maxiter, tol)
Perform Gauss–Newton optimization to minimize the residual function `r` starting from `β₀`. The relevant Jacobian is calculated using automatic differentiation. The algorithm terminates when the norm of the step is less than `tol` or after `maxiter` iterations.
"""
function gaussnewton(r, β₀, maxiter, tol)
    β = copy(β₀)
    for _ in 1:maxiter
        # value_and_jacobian returns the value and a tuple of Jacobians (one per argument)
        rβ, (Jβ,) = AD.value_and_jacobian(backend, r, β)
        Δ = -(Jβ' * Jβ) \ (Jβ' * rβ)
        β += Δ
        if sqrt(sum(abs2, Δ)) < tol
            break
        end
    end
    return β
end
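As a quick check, the Jacobian-based method can be applied to a linear least-squares problem, for which a single Gauss–Newton iteration already yields the exact solution (the data below are illustrative).

A = [1.0 1.0; 1.0 2.0; 1.0 3.0]            # illustrative design matrix
y = [1.0, 2.0, 2.0]                        # illustrative observations
r(β) = A * β - y                           # linear residuals
J(β) = A                                   # constant Jacobian
gaussnewton(r, J, [0.0, 0.0], 10, 1e-10)   # converges in one step because r is linear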
Notes
References
* Björck, Å. (1996). ''Numerical Methods for Least Squares Problems''. SIAM, Philadelphia.
External links
* ''Probability, Statistics and Estimation'': the algorithm is detailed and applied to the biology experiment discussed as an example in this article (page 84, with the uncertainties on the estimated values).
Implementations
Artelys Knitro is a non-linear solver with an implementation of the Gauss–Newton method. It is written in C and has interfaces to C++/C#/Java/Python/MATLAB/R.
{{DEFAULTSORT:Gauss-Newton algorithm}}
Optimization algorithms and methods
Least squares
Statistical algorithms