In mathematical optimization, the method of Lagrange multipliers is a strategy for finding the local maxima and minima of a function subject to equality constraints (i.e., subject to the condition that one or more equations have to be satisfied exactly by the chosen values of the variables). It is named after the mathematician Joseph-Louis Lagrange.

The basic idea is to convert a constrained problem into a form such that the derivative test of an unconstrained problem can still be applied. The relationship between the gradient of the function and gradients of the constraints rather naturally leads to a reformulation of the original problem, known as the Lagrangian function.
The method can be summarized as follows: in order to find the maximum or minimum of a function \( f(x) \) subject to the equality constraint \( g(x) = 0 \), form the Lagrangian function

\[
  \mathcal{L}(x, \lambda) = f(x) + \lambda\, g(x),
\]

and find the stationary points of \( \mathcal{L} \) considered as a function of \( x \) and the Lagrange multiplier \( \lambda \); this means that all partial derivatives should be zero, including the partial derivative with respect to \( \lambda \). The solution corresponding to the original constrained optimization is always a saddle point of the Lagrangian function, which can be identified among the stationary points from the definiteness of the bordered Hessian matrix.
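As a brief worked illustration (a standard textbook example, chosen here for concreteness rather than taken from the text above), consider maximizing \( f(x,y) = x + y \) subject to \( g(x,y) = x^2 + y^2 - 1 = 0 \). The Lagrangian is

\[
  \mathcal{L}(x, y, \lambda) = x + y + \lambda \left( x^2 + y^2 - 1 \right),
\]

and setting all three partial derivatives to zero gives \( 1 + 2\lambda x = 0 \), \( 1 + 2\lambda y = 0 \), and \( x^2 + y^2 = 1 \). The first two equations force \( x = y \), so \( x = y = \pm\sqrt{2}/2 \); the maximum of \( f \) on the circle is therefore \( \sqrt{2} \), attained at \( \left( \sqrt{2}/2,\, \sqrt{2}/2 \right) \).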
The great advantage of this method is that it allows the optimization to be solved without explicit parameterization in terms of the constraints. As a result, the method of Lagrange multipliers is widely used to solve challenging constrained optimization problems. Further, the method of Lagrange multipliers is generalized by the Karush–Kuhn–Tucker conditions, which can also take into account inequality constraints of the form \( h(x) \le c \) for a given constant \( c \).
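As a concrete sketch of this workflow (an illustration assuming the SymPy library; the circle problem and the variable names are chosen for the example, not prescribed by the method), the stationary points of the Lagrangian can be found symbolically:

import sympy as sp

x, y, lam = sp.symbols('x y lam', real=True)
f = x + y                      # objective function f(x, y)
g = x**2 + y**2 - 1            # equality constraint, g(x, y) = 0
L = f + lam * g                # Lagrangian L(x, y, lam)

# Stationary points: all partial derivatives of L vanish,
# including the derivative with respect to the multiplier lam
# (which simply re-imposes the constraint g = 0).
stationary = sp.solve([sp.diff(L, v) for v in (x, y, lam)],
                      (x, y, lam), dict=True)
for s in stationary:
    print(s, ' f =', f.subs(s))

Running this recovers the two stationary points \( \left( \pm\sqrt{2}/2,\, \pm\sqrt{2}/2 \right) \) with objective values \( \pm\sqrt{2} \), matching the worked example above.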
Statement
The following is known as the Lagrange multiplier theorem.
Let \( f \colon \mathbb{R}^n \to \mathbb{R} \) be the objective function and let \( g \colon \mathbb{R}^n \to \mathbb{R}^c \) be the constraints function, both belonging to \( C^1 \) (that is, having continuous first derivatives). Let \( x_\star \) be an optimal solution to the following optimization problem such that \( \operatorname{rank}(\operatorname{D} g(x_\star)) = c < n \) (here \( \operatorname{D} g(x_\star) \) denotes the matrix of partial derivatives, \( \bigl[ \partial g_j / \partial x_k \bigr] \)):

\[
  \text{maximize } f(x) \quad \text{subject to } g(x) = 0 .
\]
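As an illustrative special case not spelled out above: when there is a single constraint (\( c = 1 \)), the rank condition reduces to the requirement that the constraint gradient not vanish at the optimum,

\[
  \operatorname{rank}\bigl( \operatorname{D} g(x_\star) \bigr) = 1 \iff \nabla g(x_\star) \neq 0,
\]

a regularity (constraint qualification) assumption without which a Lagrange multiplier need not exist.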