In the field of mathematical optimization, Lagrangian relaxation is a relaxation method which approximates a difficult problem of constrained optimization by a simpler problem. A solution to the relaxed problem is an approximate solution to the original problem, and provides useful information.
The method penalizes violations of inequality constraints using a Lagrange multiplier, which imposes a cost on violations. These added costs are used instead of the strict inequality constraints in the optimization. In practice, this relaxed problem can often be solved more easily than the original problem.
The problem of maximizing the Lagrangian function of the dual variables (the Lagrangian multipliers) is the Lagrangian dual problem.
Mathematical description
Suppose we are given a linear programming problem, with <math>x \in \mathbb{R}^n</math> and <math>A \in \mathbb{R}^{m \times n}</math>, of the following form:

:<math>\begin{array}{rl} \max & c^\mathsf{T} x \\ \text{s.t.} & Ax \le b \end{array}</math>
If we split the constraints in <math>A</math> such that <math>A_1 \in \mathbb{R}^{m_1 \times n}</math>, <math>A_2 \in \mathbb{R}^{m_2 \times n}</math> and <math>m_1 + m_2 = m</math>, we may write the system:

:<math>\begin{array}{rll} \max & c^\mathsf{T} x \\ \text{s.t.} & (1) & A_1 x \le b_1 \\ & (2) & A_2 x \le b_2 \end{array}</math>
We may introduce the constraint (2) into the objective:

:<math>\begin{array}{rll} \max & c^\mathsf{T} x + \lambda^\mathsf{T}(b_2 - A_2 x) \\ \text{s.t.} & (1) & A_1 x \le b_1 \end{array}</math>
If we let <math>\lambda = (\lambda_1, \ldots, \lambda_{m_2})</math> be nonnegative weights, we get penalized if we violate the constraint (2), and we are also rewarded if we satisfy the constraint strictly. The above system is called the Lagrangian relaxation of our original problem.
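As a concrete sketch, the construction above can be written out in Python for a small hypothetical instance. The objective, the kept bounds, and the relaxed constraint below are all invented for illustration, and the kept feasible set is brute-forced over an integer grid rather than solved as a true linear program:

```python
import itertools

# Hypothetical toy instance (all data invented for illustration):
#   maximize  x1 + 2*x2
#   s.t.      x1 + x2 <= 3          constraint (2), the one we relax
#             0 <= x1, x2 <= 3      constraint (1), kept (integer grid for brute force)
c = (1.0, 2.0)

def relaxed_objective(x, lam):
    # c^T x + lam * (b2 - A2 x) for the single relaxed constraint x1 + x2 <= 3
    slack = 3.0 - (x[0] + x[1])
    return c[0] * x[0] + c[1] * x[1] + lam * slack

kept_set = list(itertools.product(range(4), repeat=2))  # points satisfying (1)
lam = 0.5                                               # any nonnegative weight

# The Lagrangian relaxation maximizes the penalized objective over (1) only.
x_rel = max(kept_set, key=lambda x: relaxed_objective(x, lam))
value = relaxed_objective(x_rel, lam)
print(x_rel, value)   # (3, 3) 7.5
```

Note that the relaxed maximizer violates the dropped constraint but earns the penalty term instead; the nonnegative weight is what keeps the relaxed optimum an upper bound on the original.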
The LR solution as a bound
Of particular use is the property that for any fixed set of <math>\tilde{\lambda} \ge 0</math> values, the optimal result to the Lagrangian relaxation problem will be no smaller than the optimal result to the original problem. To see this, let <math>\hat{x}</math> be the optimal solution to the original problem, and let <math>\bar{x}</math> be the optimal solution to the Lagrangian relaxation. We can then see that

:<math>c^\mathsf{T}\hat{x} \le c^\mathsf{T}\hat{x} + \tilde{\lambda}^\mathsf{T}(b_2 - A_2 \hat{x}) \le c^\mathsf{T}\bar{x} + \tilde{\lambda}^\mathsf{T}(b_2 - A_2 \bar{x})</math>

The first inequality is true because <math>\hat{x}</math> is feasible in the original problem and the second inequality is true because <math>\bar{x}</math> is the optimal solution to the Lagrangian relaxation.
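The chain of inequalities can be checked numerically. The sketch below uses an invented toy instance (maximize x1 + 2*x2 over an integer grid standing in for the kept constraints, relaxing x1 + x2 <= 3); it is an illustration, not a general solver:

```python
import itertools

# Invented toy instance: maximize x1 + 2*x2 over 0 <= x1, x2 <= 3
# (integer grid standing in for the kept constraints), relaxing x1 + x2 <= 3.
c = (1.0, 2.0)
grid = list(itertools.product(range(4), repeat=2))

def obj(x):
    return c[0] * x[0] + c[1] * x[1]

def lagr(x, lam):
    return obj(x) + lam * (3.0 - (x[0] + x[1]))

x_hat = max((x for x in grid if x[0] + x[1] <= 3), key=obj)  # original optimum
lam = 0.5                                                    # fixed multiplier
x_bar = max(grid, key=lambda x: lagr(x, lam))                # relaxed optimum

# c^T x_hat <= c^T x_hat + lam*(b2 - A2 x_hat) <= c^T x_bar + lam*(b2 - A2 x_bar)
print(obj(x_hat), lagr(x_hat, lam), lagr(x_bar, lam))   # 6.0 6.0 7.5
```

The first step holds because the slack term is nonnegative at any feasible point, the second because x_bar maximizes the penalized objective; together they make the relaxed value an upper bound.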
Iterating towards a solution of the original problem
The above inequality tells us that if we minimize the maximum value we obtain from the relaxed problem, we obtain a tighter limit on the objective value of our original problem. Thus we can address the original problem by instead exploring the partially dualized problem

:<math>\min_{\lambda \ge 0} P(\lambda)</math>

where we define <math>P(\lambda)</math> as

:<math>\begin{array}{rl} \max & c^\mathsf{T} x + \lambda^\mathsf{T}(b_2 - A_2 x) \\ \text{s.t.} & A_1 x \le b_1 \end{array}</math>
A Lagrangian relaxation algorithm thus proceeds to explore the range of feasible <math>\lambda</math> values while seeking to minimize the result returned by the inner <math>P</math> problem. Each value returned by <math>P</math> is a candidate upper bound to the problem, the smallest of which is kept as the best upper bound. If we additionally employ a heuristic, probably seeded by the <math>\bar{x}</math> values returned by <math>P</math>, to find feasible solutions to the original problem, then we can iterate until the best upper bound and the cost of the best feasible solution converge to a desired tolerance.
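One common way to carry out the outer minimization is a projected subgradient step on the multipliers, using the fact that the residual of the relaxed constraint at the inner maximizer is a subgradient of the (convex) function being minimized. The Python sketch below assumes an invented toy instance and a simple 1/k step size; both choices are illustrative, not prescriptive:

```python
import itertools

# Projected-subgradient sketch for the outer problem min_{lam >= 0} P(lam),
# on an invented toy instance: maximize x1 + 2*x2, kept set
# 0 <= x1, x2 <= 3 (integer grid), relaxed constraint x1 + x2 <= 3.
grid = list(itertools.product(range(4), repeat=2))

def inner(lam):
    # Solve P(lam): maximize c^T x + lam*(b2 - A2 x) over the kept set.
    x = max(grid, key=lambda x: x[0] + 2 * x[1] + lam * (3 - x[0] - x[1]))
    return x, x[0] + 2 * x[1] + lam * (3 - x[0] - x[1])

lam, best = 0.0, float("inf")
for k in range(1, 101):
    x, val = inner(lam)
    best = min(best, val)                  # smallest upper bound seen so far
    g = 3 - (x[0] + x[1])                  # b2 - A2 x is a subgradient of P at lam
    lam = max(0.0, lam - (1.0 / k) * g)    # step downhill, project onto lam >= 0
print(best, lam)   # best reaches 6.0 here, the true optimum of the toy instance
```

On this instance the duality gap closes completely; in general (notably for integer programs) the best bound may remain strictly above the best feasible objective.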
Related methods
The augmented Lagrangian method is quite similar in spirit to the Lagrangian relaxation method, but adds an extra term, and updates the dual parameters <math>\lambda</math> in a more principled manner. It was introduced in the 1970s and has been used extensively.
The penalty method does not use dual variables but rather removes the constraints and instead penalizes deviations from the constraint. The method is conceptually simple, but in practice augmented Lagrangian methods are usually preferred, since the penalty method suffers from ill-conditioning issues.
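For contrast, here is a minimal quadratic-penalty sketch on an invented toy instance: no dual variables, just a growing penalty weight mu on constraint violation (the instance and the schedule for mu are assumptions made for illustration):

```python
import itertools

# Quadratic-penalty sketch on invented toy data: maximize x1 + 2*x2 over the
# integer grid 0 <= x1, x2 <= 3, penalizing violation of x1 + x2 <= 3 directly.
grid = list(itertools.product(range(4), repeat=2))

def penalized(x, mu):
    violation = max(0.0, x[0] + x[1] - 3)
    return x[0] + 2 * x[1] - mu * violation ** 2

for mu in (0.1, 1.0, 10.0):
    x = max(grid, key=lambda x: penalized(x, mu))
    print(mu, x)   # the maximizer becomes feasible as mu grows
```

Driving mu large enough forces feasibility, but very large weights are exactly what produces the ill-conditioning mentioned above when the subproblems are solved numerically.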